Introduction
Building your own computer vision model from scratch can be fun and fulfilling. You get to decide your preferred choice of machine learning framework and platform for training and deployment, design your data pipeline and neural network architecture, write custom training and inference scripts, and fine-tune your model algorithm’s hyperparameters to get the optimal model performance.
On the other hand, this can also be a daunting task for someone who has little or no computer vision and machine learning expertise. This post shows a step-by-step guide on how to build a natural flower classifier using Amazon Rekognition Custom Labels with AWS best practices.
Amazon Rekognition Custom Labels Overview
Amazon Rekognition Custom Labels is a feature of Amazon Rekognition, one of the AWS AI services for automated image and video analysis with machine learning. It provides Automated Machine Learning (AutoML) capability for custom computer vision end-to-end machine learning workflows.
It is suitable for anyone who wants to quickly build a custom computer vision model to classify images, detect objects and scenes unique to their use cases. No machine learning expertise is required.
Prerequisites
For this walkthrough, you should have the following prerequisites:
- An AWS account — You can create a new account if you don’t have one yet.
- AWS CLI — You should install or upgrade to the latest AWS Command Line Interface (AWS CLI) version 2.
Creating Least Privilege Access IAM User & Policies
As a security best practice, it is strongly recommended not to use the AWS account root user for any task where it is not required. Instead, create a new IAM (Identity and Access Management) user and grant the required permissions for the IAM user based on the principle of least privilege using identity-based policy. This adheres to the IAM best practices under the Security Pillar in the Machine Learning Lens for the AWS Well-Architected Framework.
In this walkthrough, the new IAM user requires both Programmatic access and AWS Management Console access.
A new customer-managed policy is created to define the set of permissions required for the IAM user. In addition, a bucket policy is needed for an existing S3 bucket (in this case, my-rekognition-custom-labels-bucket) that stores the natural flower dataset, to control access. This existing bucket can be created by any user other than the new IAM user.
The policy’s definition in JSON format is as shown.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:ListAllMyBuckets" ], "Resource": "*" }, { "Sid": "s3Policies", "Effect": "Allow", "Action": [ "s3:ListBucket", "s3:ListBucketVersions", "s3:CreateBucket", "s3:GetBucketAcl", "s3:GetBucketLocation", "s3:GetObject", "s3:GetObjectAcl", "s3:GetObjectVersion", "s3:GetObjectTagging", "s3:GetBucketVersioning", "s3:GetObjectVersionTagging", "s3:PutBucketCORS", "s3:PutLifecycleConfiguration", "s3:PutBucketPolicy", "s3:PutObject", "s3:PutObjectTagging", "s3:PutBucketVersioning", "s3:PutObjectVersionTagging" ], "Resource": "arn:aws:s3:::custom-labels-console*" }, { "Sid": "rekognitionPolicies", "Effect": "Allow", "Action": [ "rekognition:CreateProject", "rekognition:CreateProjectVersion", "rekognition:StartProjectVersion", "rekognition:StopProjectVersion", "rekognition:DescribeProjects", "rekognition:DescribeProjectVersions", "rekognition:DetectCustomLabels", "rekognition:DeleteProject", "rekognition:DeleteProjectVersion" ], "Resource": "*" }, { "Sid": "groundTruthPolicies", "Effect": "Allow", "Action": [ "groundtruthlabeling:*" ], "Resource": "*" }, { "Sid": "s3ExternalBucketPolicies", "Effect": "Allow", "Action": [ "s3:GetBucketAcl", "s3:GetBucketLocation", "s3:GetObject", "s3:GetObjectAcl", "s3:GetObjectVersion", "s3:GetObjectTagging", "s3:ListBucket", "s3:PutObject" ], "Resource": [ "arn:aws:s3:::my-rekognition-custom-labels-bucket/*" ] } ] }
The corresponding bucket policy for the dataset bucket is as follows.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSRekognitionS3AclBucketRead20191011",
      "Effect": "Allow",
      "Principal": { "Service": "rekognition.amazonaws.com" },
      "Action": ["s3:GetBucketAcl", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-rekognition-custom-labels-bucket"
    },
    {
      "Sid": "AWSRekognitionS3GetBucket20191011",
      "Effect": "Allow",
      "Principal": { "Service": "rekognition.amazonaws.com" },
      "Action": [
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:GetObjectVersion",
        "s3:GetObjectTagging"
      ],
      "Resource": "arn:aws:s3:::my-rekognition-custom-labels-bucket/*"
    },
    {
      "Sid": "AWSRekognitionS3ACLBucketWrite20191011",
      "Effect": "Allow",
      "Principal": { "Service": "rekognition.amazonaws.com" },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::my-rekognition-custom-labels-bucket"
    },
    {
      "Sid": "AWSRekognitionS3PutObject20191011",
      "Effect": "Allow",
      "Principal": { "Service": "rekognition.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-rekognition-custom-labels-bucket/*",
      "Condition": {
        "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" }
      }
    }
  ]
}
Flower Dataset
We use the Oxford Flower 102 dataset from the Oxford 102 Flower PyTorch Kaggle competition for building the natural flower classifier using Amazon Rekognition Custom Labels. We use this instead of the original dataset from the Visual Geometry Group, University of Oxford, because it has already been split into train, valid, test datasets, and more importantly, the data has been labelled with respective flower category numbers accordingly for train and valid.
This dataset has a total of 8,189 flower images, where the train split has 6,552 images (80%), the valid split has 818 images (10%), and the test split has 819 images (10%). The code snippet below helps to convert each of the 102 flower category numbers to their respective flower category name.
import os
import json

with open('cat_to_name.json', 'r') as flower_cat:
    data = flower_cat.read()

flower_types = json.loads(data)

# Rename each category-number directory to its flower category name
for cur_dir_name, new_dir_name in flower_types.items():
    os.rename(cur_dir_name, new_dir_name)
The dataset bucket should have the same folder structure as shown below, with both train and valid folders. Each should have 102 folders beneath, where each folder name corresponds to a specific flower category name.
Creating a New Flower Classifier Project
After the necessary setup has been completed, you can sign in to the AWS management console as the IAM user. Follow the steps in this guide to create your new project for Amazon Rekognition Custom Labels.
Creating New Training and Test Datasets
We create new training and test datasets for the flower classifier project in Amazon Rekognition Custom Labels by importing images from the S3 bucket. It is important to give the dataset a clear and distinctive name to distinguish between different datasets as well as training or test.
For the training dataset, the S3 folder location is set to the S3 train folder path as below. Similarly, for the test dataset, the S3 folder location is set to the S3 valid folder path.
s3://my-rekognition-custom-labels-bucket/datasets/oxford_flowers_102/train/
s3://my-rekognition-custom-labels-bucket/datasets/oxford_flowers_102/valid/
train
|- alpine sea holly
|  |- image_06969.jpg
|  |- image_06970.jpg
|  |- ...
|- anthurium
|  |- image_01964.jpg
|  |- image_01965.jpg
|  |- ...
|- ...

valid
|- alpine sea holly
|  |- image_06977.jpg
|  |- image_06978.jpg
|  |- ...
|- anthurium
|  |- image_01972.jpg
|  |- image_01975.jpg
|  |- ...
|- ...
All the images in both training and test datasets are organized into folders whose names represent their respective flower category labels. Please make sure to enable Automatic Labeling by checking the box as shown above, since Amazon Rekognition Custom Labels supports automatic labeling of images organized in this structure. This can save a lot of time and effort compared with manually labeling large image datasets.
You can safely disregard the “Make sure that your S3 bucket is correctly configured” message as you should have applied the bucket policy earlier. Please make sure that your bucket name is correct if you use a different name than the one in this example.
After you create the training and test datasets, you should see the datasets listed accordingly.
When you click into either of the datasets, you should find that all the images are labeled accordingly. You can click on any of the labels to inspect the images of that label. You can also search for a label in the search text box on the left.
Training New Flower Classifier Model
You can train a new model in the Amazon Rekognition Custom Labels console by following this guide. To create a test dataset, you should use the “Choose an existing test dataset” option, as shown below, since it should have been created in the previous section.
The training based on this flower dataset could take more than an hour (approximately 1 hour and 20 minutes in this case) to complete.
Evaluating the Trained Model Performance
After the flower classifier model is trained, you can review the model performance by accessing the Evaluation Results in the console, as shown. You can better understand the metrics for evaluating the model performance from this guide. You should be able to achieve similar model performance evaluation results with the same datasets in Amazon Rekognition Custom Labels.
The Per Label Performance is a great feature that allows you to analyze the performance metrics at the per-label level, so that it's faster and easier for you to find out which labels are performing better or worse than the average.
In addition, you can review and filter the results (True Positive, False Positive, False Negative) of the test images to understand where the model is making incorrect predictions. This information helps you to improve your model's performance by indicating how to change or add images to your training or test dataset.
Starting the Flower Classifier Model
When you are happy with the performance of your trained flower classifier model, you can use it to predict flowers of your choice. Before you can use it, you need to start the model. At the bottom section of the model evaluation results page, there are sample AWS CLI commands on how to start, stop, and analyze flower images with your model. You can refer to this guide for the detailed step to start the model and set up the AWS CLI for the IAM user.
To start the model, use the AWS CLI command, as shown below. Note that you should change the command line arguments based on your setup or preference. The named profile is specific to the IAM user created for Amazon Rekognition Custom Labels.
aws rekognition start-project-version \
    --project-version-arn "MODEL_ARN" \
    --min-inference-units 1 \
    --region us-east-1 \
    --profile customlabels-iam
Starting the model takes a while (approximately 15 minutes in this case) to complete. You should see the model status shown as RUNNING in the console.
Classifying with Unseen Flower Images
After the model is running, you can use it to predict the flower types of images that appear in neither the training nor the test datasets, to determine how well your model performs on supported flower types it has not seen before. You can use the AWS CLI command below to determine the predicted label of your image.
aws rekognition detect-custom-labels \
    --project-version-arn "MODEL_ARN" \
    --image '{"S3Object": {"Bucket": "BUCKET_NAME", "Name": "IMAGE_PATH"}}' \
    --region us-east-1 \
    --profile customlabels-iam
Here are some of the prediction results with images that are self-taken or independent from both the training and test datasets.
{ "CustomLabels": [ { "Name": "rose", "Confidence": 99.93900299072266 } ] }
{ "CustomLabels": [ { "Name": "lotus", "Confidence": 99.7560043334961 } ] }
{ "CustomLabels": [ { "Name": "moon orchid", "Confidence": 98.02899932861328 } ] }
{ "CustomLabels": [ { "Name": "hibiscus", "Confidence": 98.11100006103516 } ] }
{ "CustomLabels": [ { "Name": "sunflower", "Confidence": 99.86699676513672 } ] }
{ "CustomLabels": [] }
{ "CustomLabels": [] }
Cleaning Up Resources
You are charged for the amount of time your model is running. If you have finished using the model, you should stop it. You can use the AWS CLI command below to stop the model to avoid unnecessary costs incurred.
You should also delete the Custom Labels project and datasets in the S3 bucket if they are no longer needed to save costs as well.
aws rekognition stop-project-version \
    --project-version-arn "MODEL_ARN" \
    --region us-east-1 \
    --profile customlabels-iam
Stopping the model is faster than starting it, taking approximately 5 minutes in this case. You should see the model status shown as STOPPED in the console.
Conclusions and Next Steps
This post shows the complete step-by-step walkthrough to create a natural flower classifier using Amazon Rekognition Custom Labels with AWS best practices based on the AWS Well-Architected Framework. It also shows that you can build a high-performance custom computer vision model with Amazon Rekognition Custom Labels without machine learning expertise.
The model built in this walkthrough has an F1 score of 0.997, which is not easy to achieve for the same dataset if built from scratch, even with extensive machine learning expertise. It also performs well on the samples of unseen natural flowers and, as expected, returns no prediction for the samples of artificial flowers.
If you are interested in building a natural flower classifier from scratch, you might be interested in my post: Build, Train and Deploy A Real-World Flower Classifier of 102 Flower Types — With TensorFlow 2.3, Amazon SageMaker Python SDK 2.x, and Custom SageMaker Training & Serving Docker Containers.
NAME
clearenv - clear the environment
SYNOPSIS
#include <stdlib.h>
int clearenv(void);
clearenv():
/* Glibc since 2.19: */ _DEFAULT_SOURCE
|| /* Glibc <= 2.19: */ _SVID_SOURCE || _BSD_SOURCE
NOTES
On systems where clearenv() is unavailable, the assignment
environ = NULL;
will probably do.
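A minimal usage sketch (not part of the original page): clear the inherited environment, then rebuild a small trusted one before running a child process. The PATH value below is an illustrative assumption.

#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Drop every inherited environment variable. */
    if (clearenv() != 0)
        return EXIT_FAILURE;

    /* Rebuild a minimal, trusted environment. */
    setenv("PATH", "/usr/bin:/bin", 1);

    /* The child process now sees only the variables set above. */
    execl("/bin/sh", "sh", "-c", "env", (char *) NULL);
    return EXIT_FAILURE;  /* execl() returns only on error */
}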
The clearenv() function may be useful in security-conscious applications.

COLOPHON
This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at https://www.kernel.org/doc/man-pages/.
Mythic
The Mythic group contains an abstraction for writing individual pieces of business logic into packages. These packages are Myths and contain definitions for Redux actions, reducers, and epics.
Table of contents
- /mythic-configuration
- /mythic-multiselect
- /mythic-notifications
- /mythic-windowing
- /myths
/mythic-configuration

The @nteract/mythic-configuration package is still in development. For more information, reach out to the community on GitHub.
Examples of /mythic-configuration

Initialize the package by including the configuration package. Memory saves the configuration by default. To use a config file, dispatch a setConfigFile action following the code below.

Example:

import { configuration, setConfigFile } from "@nteract/mythic-configuration";
import { makeConfigureStore } from "@nteract/myths";

export const configureStore = makeConfigureStore({
  packages: [configuration],
});

store.dispatch(setConfigFile("/etc/app.conf"));
The package saves any configuration options to that file and tracks it. Any additional options load as the file changes.
Configuration options are available after initialization.
Example:
import { defineConfigOption } from "@nteract/mythic-configuration";

export const {
  selector: tabSize,
  action: setTabSize,
} = defineConfigOption({
  label: "Tab Size",
  key: "codeMirror.tabSize",
  values: [
    { label: "2 Spaces", value: 2 },
    { label: "3 Spaces", value: 3 },
    { label: "4 Spaces", value: 4 },
  ],
  defaultValue: 4,
});

const currentValue = tabSize(store.getState());
store.dispatch(setTabSize(2));
To get an object from all config options with a common prefix, use createConfigCollection.

Example:

import { createConfigCollection } from "@nteract/mythic-configuration";

const codeMirrorConfig = createConfigCollection({
  key: "codeMirror",
});
The codeMirrorConfig() collection then provides the option defined above as {tabSize: 4}, with proper default values.
All options are available with the code below.
import { allConfigOptions } from "@nteract/mythic-configuration";

const options = allConfigOptions();
const optionsWithCurrentValues = allConfigOptions(store.getState());
In order to change the key of a config option, deprecate the old key with the following code. This changes the old key to the new key, unless the new key already has a value.
Example:
createDeprecatedConfigOption({
  key: "cursorBlinkRate",
  changeTo: (value: number) => ({
    "codeMirror.cursorBlinkRate": value,
  }),
});
API for /mythic-configuration

The API for the @nteract/mythic-configuration package is still in development. For more information, reach out to the community on GitHub.

Example:

import { RootState } from "@nteract/myths";
import {
  ConfigurationState,
  setConfigAtKey,
} from "@nteract/mythic-configuration";

export interface ConfigurationOptionDefinition<TYPE = any> {
  label: string;
  key: string;
  defaultValue: TYPE;
  valuesFrom?: string;
  values?: Array<{
    label: string;
    value: TYPE;
  }>;
}

export interface ConfigurationOption<TYPE = any>
  extends ConfigurationOptionDefinition<TYPE> {
  value?: TYPE;
  selector: (state: HasPrivateConfigurationState) => TYPE;
  action: (value: TYPE) => typeof setConfigAtKey.action;
}

export type HasPrivateConfigurationState =
  RootState<"configuration", ConfigurationState>;
/mythic-multiselect

The @nteract/mythic-multiselect package implements a simple method of keeping track of multiple selected cells, using the myths framework.
Examples of /mythic-multiselect

Initialize the package by including the multiselect package in your store:

Example:

import {
  multiselect,
  selectCell,
  unselectCell,
  clearSelectedCells,
} from "@nteract/mythic-multiselect";

store.dispatch(
  selectCell({
    contentRef: "content",
    id: "cellID",
  })
);
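The other imported actions presumably follow the same payload shape; a sketch (the payload fields here are an assumption based on the selectCell example above):

// Deselect one cell, then clear the whole selection.
store.dispatch(
  unselectCell({
    contentRef: "content",
    id: "cellID",
  })
);

store.dispatch(clearSelectedCells({ contentRef: "content" }));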
API for /mythic-multiselect

This content is still in development. For more information, reach out to the community on GitHub.
/mythic-notifications

The @nteract/mythic-notifications package implements a notification system based on blueprintjs, using the myths framework.
Examples of /mythic-notifications

Initialize the package by including the notifications package in your store and rendering the <NotificationRoot/>:

Example:

import { notifications, NotificationRoot } from "@nteract/mythic-notifications";
import { makeConfigureStore } from "@nteract/myths";

export const configureStore = makeConfigureStore({
  packages: [notifications],
});

export const App = () => <>
  {/* ... */}
  <NotificationRoot darkTheme={false} />
</>
Then dispatch actions made by sendNotification.create:

import { sendNotification } from "@nteract/mythic-notifications";

store.dispatch(sendNotification.create({
  title: "Hello World!",
  message: <em>Hi out there!</em>,
  level: "info",
}));
API for /mythic-notifications

import { IconName } from "@blueprintjs/core";

export interface NotificationMessage {
  key?: string;
  icon?: IconName;
  title?: string;
  message: string | JSX.Element;
  level: "error" | "warning" | "info" | "success" | "in-progress";
  action?: {
    icon?: IconName;
    label: string;
    callback: () => void;
  };
}
/mythic-windowing

The @nteract/mythic-windowing package implements a windowing system based on electron, using the myths framework.
Examples of /mythic-windowing

Initialize the package by including the windowing package in your store:

Example:

import { app } from "electron";
import { Observable } from "rxjs";
import { join } from "path";
import {
  windowing,
  setWindowingBackend,
  electronBackend,
  showWindow,
  closeWindow,
} from "@nteract/mythic-windowing";
import { makeConfigureStore } from "@nteract/myths";

export const configureStore = makeConfigureStore({
  packages: [windowing],
});

store.dispatch(setWindowingBackend.create(electronBackend));

const electronReady$ = new Observable((observer) => {
  (app as any).on("ready", launchInfo => observer.next(launchInfo));
});

electronReady$
  .subscribe(
    () => store.dispatch(
      showWindow.create({
        id: "splash",
        kind: "splash",
        width: 565,
        height: 233,
        path: join(__dirname, "..", "static", "splash.html"),
      })
    ),
    (err) => console.error(err),
    () => store.dispatch(
      closeWindow.create("splash")
    ),
  );
API for /mythic-windowing

This content is still in development. For more information, reach out to the community on GitHub.
/myths

The myths framework allows for integrating sets of closely related actions, reducers, and epics. Myths keep closely related logic together while minimizing repetition and dependencies, and they provide a structured way to avoid boilerplate code.

Myths build on top of the Redux and RxJS libraries.

Redux helps to maintain the application state. In Redux, actions and reducers provide predictable state management: the state changes only when an action is dispatched to a reducer.

In Redux-Observable, an epic is a function that takes in a stream of actions and returns a stream of actions.
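For readers new to Redux-Observable, a minimal standalone epic (unrelated to Myths themselves) looks something like this:

import { ofType } from "redux-observable";
import { map } from "rxjs/operators";

// Maps every PING action in the stream to a PONG action.
const pingEpic = (action$) =>
  action$.pipe(
    ofType("PING"),
    map(() => ({ type: "PONG" })),
  );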
Examples of /myths

This content is still in development. For more information, reach out to the community on GitHub.
MythicPackage

Create a MythicPackage with a name, a type for its private state, and the initial state. The example below creates a MythicPackage named "iCanAdd" which uses the number type for its private state sum and an initial state of sum as 0:

Example:

import { createMythicPackage } from "@nteract/myths";

export const iCanAdd = createMythicPackage("iCanAdd")<
  {
    sum: number;
  }
>({
  initialState: {
    sum: 0,
  },
});
Myth

Next, use the MythicPackage to create a Myth with a name, a type for its payload, and optionally a reducer operating on its package's private state. In the example below, the MythicPackage named iCanAdd creates a Myth named "addToSum".

Example:

export const addToSum = iCanAdd.createMyth("addToSum")<number>({
  reduce: (state, action) =>
    state.set("sum", state.get("sum") + action.payload),
});
A package can have any number of myths.
Action

To create an action based on a myth, use its create function and dispatch this action normally.
Example:
store.dispatch(addToSum.create(8));
Store

A set of mythic packages yields a store. This store has all the appropriate reducers and epics in place.
Example:
type NonPrivateState = { foo: string };

const configureStore = makeConfigureStore<NonPrivateState>()({
  packages: [
    iCanAdd,
  ],
});

export const store = configureStore({ foo: "bar" });
Definition of epics

Define epics using two different shorthand methods.

Example:
export const addToSum = iCanAdd.createMyth("addToSum")<number>({
  reduce: (state, action) =>
    state.set("sum", state.get("sum") + action.payload),
  thenDispatch: [
    (action, state) =>
      state.get("sum") - action.payload < 100 && 100 <= state.get("sum")
        ? of(sendNotification.create({ message: "Just passed 100!" }))
        : EMPTY,
  ],
  andAlso: [
    {
      // Halve the sum every time an error action happens
      when: action => action.error ?? false,
      dispatch: (action, state, addToSum_) =>
        of(addToSum_.create(-state.get("sum") / 2)),
    },
  ],
});
The first method uses thenDispatch: [] to define actions that are dispatched whenever the myth's own action is reduced. The second method uses andAlso: [] to generate actions based on a custom predicate.

Since the myth being defined is not yet available for reference inside its own definition, it is passed as the third argument to the dispatch function.
Testing

To test the actions of a mythic package, use the testMarbles(...) method.
NOTE: This only tests the epics without evaluating reducers. | https://docs.nteract.io/groups/mythic-group/ | CC-MAIN-2022-40 | en | refinedweb |
I don’t know if this was “the” official use case, but the following produces a warning in Java (that can further produce compile errors if mixed with return statements, leading to unreachable code):
while (1 == 2) { // Note that "if" is treated differently
    System.out.println("Unreachable code");
}
However, this is legal:
while (isUserAGoat()) {
    System.out.println("Unreachable, but determined at runtime, not at compile time");
}
JLS points out that if (false) does not trigger “unreachable code” for the specific reason that this would break support for debugging flags, i.e., basically this use case (h/t @auselen) (static final boolean DEBUG = false; for instance).
I replaced while for if, producing a more obscure use case. I believe you can trip up your IDE, like Eclipse, with this behavior, but this edit is 4 years into the future, and I don’t have an Eclipse environment to play with.
Uses of Android UserManager.isUserAGoat()- Answer #2:
Android R Update:
From Android R, this method always returns false. Google says that this is done “to protect goat privacy”:
/**
 * Used to determine whether the user making this call is subject to
 * teleportations.
 *
 * <p>As of {@link android.os.Build.VERSION_CODES#LOLLIPOP}, this method can
 * now automatically identify goats using advanced goat recognition technology.</p>
 *
 * <p>As of {@link android.os.Build.VERSION_CODES#R}, this method always returns
 * {@code false} in order to protect goat privacy.</p>
 *
 * @return Returns whether the user making this call is a goat.
 */
public boolean isUserAGoat() {
    if (mContext.getApplicationInfo().targetSdkVersion >= Build.VERSION_CODES.R) {
        return false;
    }
    return mContext.getPackageManager()
            .isPackageAvailable("com.coffeestainstudios.goatsimulator");
}
Previous answer:
From their source, the method used to return false until it was changed in API 21.
/**
 * Used to determine whether the user making this call is subject to
 * teleportations.
 *
 * @return whether the user making this call is a goat
 */
public boolean isUserAGoat() {
    return false;
}
In API 21 the implementation was changed to check if there is an installed app with the package com.coffeestainstudios.goatsimulator
/** *"); }
Answer #3:
This appears to be an inside joke at Google. It’s also featured in the Google Chrome task manager. It has no purpose, other than some engineers finding it amusing. Which is a purpose by itself, if you will.
- In Chrome, open the Task Manager with Shift+Esc.
- Right click to add the Goats Teleported column.
- Wonder.
There is even a huge Chromium bug report about too many teleported goats.
The following Chromium source code snippet is stolen from the HN comments.
int TaskManagerModel::GetGoatsTeleported(int index) const {
  int seed = goat_salt_ * (index + 1);
  return (seed >> 16) & 255;
}
Answer #4:
Complementing the first answer's particular case, this method can also be used as dummy code to hold a breakpoint when you want to stop on a specific iteration or a particular recursive call. Used instead of a dummy variable declaration, it avoids the warning marker that, in Eclipse's particular case, will clog the breakpoint mark, making it difficult to enable/disable it. If the method is used as a convention, all the invocations could be later filtered by some script (during the commit phase maybe?).
Google guys are heavy Eclipse users (they provide several of their projects as Eclipse plugins: Android SDK, GAE, etc), so the first answer and this complementary answer make a lot of sense (at least for me).
Answer #5:
In the discipline of speech recognition, users are divided into goats and sheep: sheep are people for whom speech recognition works well, and goats are people for whom it works poorly. Such a method would make it possible in the future to configure the speech recognition engine for goats' needs.
Hope you learned something from this post.
Follow Programming Articles for more! | https://programming-articles.com/what-are-the-proper-use-cases-for-android-usermanager-isuseragoat/ | CC-MAIN-2022-40 | en | refinedweb |
I am running a fairly simple JQL, counting the results and returning them in a JSON format.
It works for a few days, and then it stops and I get this:
[{"target":"jira Open Tickets","datapoints":[[0]]},{"target":"jira Open Procurement","datapoints":[[0]]}]
I simply disabled and re-enabled it and boom!
[{"target":"jira Open Tickets","datapoints":[[16]]},{"target":"jira Open Procurement","datapoints":[[10]]}]
So it's strange it seems happy and it works and then later returns null. I was curious if anyone could see any issues. I am very very new to Java..component.ComponentAccessor
import com.atlassian.jira.issue.search.SearchProvider
import com.atlassian.jira.jql.parser.JqlQueryParser
import com.atlassian.jira.web.bean.PagerFilter
@BaseScript CustomEndpointDelegate delegate
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.bc.issue.search.SearchService
import com.atlassian.jira.jql.parser.JqlQueryParser
def jql= "project = PROJ and resolution = Unresolved ORDER BY 'Time to resolution' ASC"
def jqlProcurement = "project = PROJ AND status = 'Awaiting Delivery'"
def user = ComponentAccessor.jiraAuthenticationContext.loggedInUser
def jqlQueryParser = ComponentAccessor.getComponent(JqlQueryParser)
def searchService = ComponentAccessor.getComponent(SearchService)
def query = jqlQueryParser.parseQuery(jql)
def queryProcurement = jqlQueryParser.parseQuery(jqlProcurement)
long[] results
results = new long[1]
results[0] = searchService.searchCount(user,query)
long[] resultsProcurement
resultsProcurement = new long[1]
resultsProcurement[0] = searchService.searchCount(user,queryProcurement)
helpdeskOpen(httpMethod: "GET") { MultivaluedMap queryParams, String body ->
return Response.ok(new JsonBuilder([
["target": "jira Open Tickets","datapoints": [results]],
["target": "jira Open Procurement","datapoints": [resultsProcurement]]
]).toString()).build();
}
Do I follow this right that this is your registered REST Endpoint in ScriptRunner and you are requesting the URL remotely to get the JSON?
Can you show the script how you are implementing the GET calls? My first hunch says this could be that you run this for a while, your authentication cookie expires, but because your instance allows anonymous access, the search will actually finish correctly without errors, but because it does the search now anonymously it will not find any relevant issues, hence, will think that 0 is correct for anonymous users.
You could also add logging to the script via
import org.apache.log4j.Level
log.setLevel(Level.DEBUG)
log.debug("Currently processing search for user $user, jql is $jql and results are $results")
// or similar
Edit:
I'm also looking at the way you are getting the current user via JiraAuthenticationContext.getLoggedInUser(). The documentation says to use a different approach. Not sure if there would be a difference between them, but it caught my eye.
Thank you for the help! I think it is definitely an authorization issue.
However, I am confused on why I even need a user at all, if I am just trying to make an unauthenticated API.
I am accessing via a simple curl/wget.
In the example you linked, it is just getting the user making the request and returning it in a json format. But is it not possible to simply not require a user to authenticate?
Also, after I authenticate, or disable/re-enable, my grafana server can call the API without issue, which would be a completely separate HTTP session? I am confused on how authentication is working, or why it is even needed.

But thank you for the reply! I don't really know enough about Java, Groovy, or ScriptRunner and I am trying to learn.
Depends how the searchService.searchCount(user, query) method treats the user I think, because you are trying to provide the value in the script after all.

Now what I don't know is how the SearchService.searchCount(ApplicationUser, Query) handles the applicationUser object - because you are still trying to provide a value to it, which you are getting from a potentially wrong interface (so I don't know what can happen to the variable over time, or how SearchService reacts to it when it becomes null). You might want to test it with some debug logging to be totally sure what happens in your environment. But I think we can be pretty sure it just stops finding issues when that variable becomes null, there's nothing else in the script that could be affecting the results.
I would personally just hard-code some functional account to it and you should theoretically be fine with that, so long as that user has full browse access to those issues.
There is a UserManager interface with .getUserByName(String) method to supply the user.
import com.atlassian.jira.user.util.UserManager
import com.atlassian.jira.user.ApplicationUser
UserManager userManager = ComponentAccessor.getUserManager()
ApplicationUser userToSearchWith = userManager.getUserByName("automation-account")
All in all hard-coding some users is never a good practice but hey, gets the job done to verify.
I am a quite baffled with one thing though, if you have an active session in your browser, and send some http requests with curl, then that should not be using any cookies or whatever unless you specifically supply them in that curl. I can only assume it must be the getLoggedInUser() thing.
Let me know in case I've missed something and/or should backtrack :). | https://community.atlassian.com/t5/Adaptavist-questions/JQL-restAPI-frequently-returns-null-values/qaq-p/1560040 | CC-MAIN-2022-40 | en | refinedweb |
Query:
I’m using JSLint to go through JavaScript, and it’s returning many suggestions to replace == (two equals signs) with === (three equals signs) when doing things like comparing idSele_UNVEHtype.value.length == 0 inside of an if statement.
Is there a performance benefit to replacing == with ===?
Any performance improvement would be welcomed as many comparison operators exist.
If no type conversion takes place, would there be a performance gain over ==?
What is the difference between == and === in JavaScript? Answer #1:
The strict equality operator (===) behaves identically to the abstract equality operator (==) except no type conversion is done, and the types must be the same to be considered equal.
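For instance, the following classic comparisons all evaluate as shown under ==; presumably these are the "comparisons just shown" that the quote below refers to:

'' == '0'          // false
0 == ''            // true
0 == '0'           // true
false == 'false'   // false
false == '0'       // true
false == undefined // false
false == null      // false
null == undefined  // true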
The lack of transitivity is alarming. My advice is to never use the evil twins. Instead, always use === and !==. All of the comparisons just shown produce false with the === operator.
Answer #2: Unlike ==, the === operator performs no type conversion, and is, therefore, faster as it skips one step.
Which equals operator (== vs ===) should be used in JavaScript comparisons? Answer #3:
Here’s an interesting visualisation of the equality comparison between == and ===.
var1 === var2
When using === for JavaScript equality testing, everything is as is. Nothing gets converted before being evaluated.
var1 == var2
When using == for JavaScript equality testing, some funky conversions take place.
Summary of equality in Javascript
Conclusion:
Unless you fully understand the funky conversions that take place with ==, always use ===.
Answer #4: the special case…
== vs === in JavaScript – Answer #5:
In JavaScript, === means of the same value and type.
For example,
4 == "4" // will return true
but
4 === "4" // will return false
Answer #6:
Why is == so unpredictable?

What do you get when you compare an empty string "" with the number zero 0?

true

Yep, that's right: according to ==, an empty string and the number zero are the same thing.
And it doesn’t end there, here’s another one:
'0' == false // true
Things get really weird with arrays.
[1] == true    // true
[] == false    // true
[[]] == false  // true
[0] == false   // true
Then weirder with strings
[1,2,3] == '1,2,3' // true - REALLY?!
'\r\n\t' == 0      // true - Come on!
It gets worse:
When is equal not equal?
let A = ''  // empty string
let B = 0   // zero
let C = '0' // zero string

A == B // true - ok...
B == C // true - so far so good...
A == C // **FALSE** - Plot twist!
Let me say that again:
(A == B) && (B == C) // true
(A == C)             // **FALSE**
And this is just the crazy stuff you get with primitives. It's a whole new level of crazy when you use == with objects.
At this point you're probably wondering…
Why does this happen?
Well it's because unlike "triple equals" (===), which just checks if two values are the same, == does a whole bunch of other stuff.
It has special handling for functions, special handling for nulls, undefined, strings, you name it.
It gets pretty wacky.
In fact, if you tried to write a function that does what == does, it would look something like this:
function isEqual(x, y) { // if `==` were a function
  if (typeof y === typeof x) return y === x;

  // treat null and undefined the same
  var xIsNothing = (y === undefined) || (y === null);
  var yIsNothing = (x === undefined) || (x === null);

  if (xIsNothing || yIsNothing) return (xIsNothing && yIsNothing);

  if (typeof y === "function" || typeof x === "function") {
    // if either value is a string
    // convert the function into a string and compare
    if (typeof x === "string") {
      return x === y.toString();
    } else if (typeof y === "string") {
      return x.toString() === y;
    }
    return false;
  }

  // actually the real `==` is even more complicated than this, especially in ES6
  return x === y;
}

function toPrimitive(obj) {
  var value = obj.valueOf();
  if (obj !== value) return value;
  return obj.toString();
}
So what does this mean?
It means == is complicated.

Because it's complicated it's hard to know what's going to happen when you use it.

Which means you could end up with bugs.

So the moral of the story is…

Make your life less complicated.

Use === instead of ==.
The End.
Answer #7:
The equal comparison operator == is confusing and should be avoided.
If you HAVE TO live with it, then remember the following 3 things:
- It is not transitive: (a == b) and (b == c) does not lead to (a == c)
- It’s mutually exclusive to its negation: (a == b) and (a != b) always hold opposite Boolean values, with all a and b.
- In case of doubt, learn by heart the following truth table:
EQUAL OPERATOR TRUTH TABLE IN JAVASCRIPT
- Each row in the table is a set of 3 mutually "equal" values, meaning that any 2 values among them are equal using the == sign.
Hope you learned something from this post.
Follow Programming Articles for more! | https://programming-articles.com/what-is-the-difference-between-and-in-javascript-answered/ | CC-MAIN-2022-40 | en | refinedweb |
Carina De Jager (1,779 Points)
Won't change to uppercase
Hey guys. Posted this same question in the early afternoon yesterday and never got an answer... Any insights as to why my Grrr!!! isn't coming back in uppercase?
from animal import Animal

class Sheep(Animal):
    pass

    sound = 'Grrr!!!'

    def __str__(self):
        self.sheep.noise = self.sound.upper()
2 Answers
cm21150,854 Points
Hi Carina,
Instead of a str method, let's create a method named noise, and have it return the uppercase value of the instance's sound. That way, noise will be set to that value (the uppercase value of self.sound.)
from animal import Animal

class Sheep(Animal):
    pass

    sound = 'Grrr!!!'

    def noise(self):
        return self.sound.upper()
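Assuming the Animal base class provides the rest of the setup, usage would then look something like this (hypothetical, not from the original thread):

sheep = Sheep()
print(sheep.noise())  # GRRR!!!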
Hope this helps!
Carina De Jager (1,779 Points)
YES!!! Thank you! Was starting to go a bit bonkers. Haha | https://teamtreehouse.com/community/wont-change-to-uppercase-2 | CC-MAIN-2022-40 | en | refinedweb |
When a layout holds more widgets than there is space to display them, things start to break down, as in the window below, which is packed with QLabel widgets. These widgets have the size Vertical Policy set to Preferred, which automatically resizes the widgets down to fit the available space. The results are unreadable.
Problem of Too Many Widgets.png
Settings the Vertical Policy to Fixed keeps the widgets at their natural size, making them readable again.
Problem of Too Many Widgets With Fixed Heights
However, while we can still add as many labels as we like, eventually they start to fall off the bottom of the layout.
To solve this problem GUI applications can make use of scrolling regions to allow the user to move around within the bounds of the application window while keeping widgets at their usual size. By doing this an almost unlimited amount of data or widgets can be shown, navigated and viewed within a window — although care should be taken to make sure the result is still usable!
In this tutorial, we'll cover adding a scrolling region to your PyQt6 application using QScrollArea.
Adding a QScrollArea in Qt Designer
First we'll look at how to add a QScrollArea from Qt Designer.
Qt Creator — Select MainWindow for widget type
So we will choose the scroll area widget and add it to our layout as below.
First, create an empty MainWindow in Qt Designer and save it as mainwindow.ui
Add Scroll Area
Next choose to lay out the QScrollArea vertically or horizontally, so that it scales with the window.
Lay Out The Scroll Area Vertically Or Horizontally
Voila, we now have a completed scroll area that we can populate with anything we need.
The Scroll Area Is Created
Inserting Widgets
We will now add labels to that scroll area. Let's take two labels and place them inside the QScrollArea. We will then proceed to right click inside the scroll area and select Lay Out Vertically so our labels will be stacked vertically.
Add Labels to The Scroll Area And Set the Layout
We've set the background to blue so the illustration of how this works is clear. We can now add more labels to the QScrollArea and see what happens. By default, the Vertical Policy of the label is set to Preferred, which means that the label size is adjusted according to the constraints of widgets above and below.
Adding More Labels to QScrollArea
Any widget can be added into a `QScrollArea` although some make more sense than others. For example, it's a great way to show multiple widgets containing data in a expansive dashboard, but less appropriate for control widgets — scrolling around to control an application can get frustrating.
Note that the scroll functionality has not been triggered, and no scrollbar has appeared on the right hand side. Instead the labels are still progressively getting smaller in height to accommodate the widgets.
However, if we set Vertical Policy to Fixed and set the minimumSize of height to 100px the labels will no longer be able to shrink vertically into the available space. As the layout overflows this will now trigger the
QScrollArea to display a scrollbar.
Setting Fixed Heights for Labels
With that, our scrollbar appears on the right hand side. What has happened is that the scroll area only appears when necessary. Without a fixed height constraint on the widget, Qt assumes the most logical way to handle the many widgets is to resize them. But by imposing size constraints on our widgets, the scroll bar appears to allow all widgets to keep their fixed sizes.
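The same constraint can be applied in code; a minimal sketch, where label is an assumed QLabel instance:

from PyQt6.QtCore import QSize
from PyQt6.QtWidgets import QSizePolicy

# Keep the label at its natural height so the layout can overflow,
# which is what triggers the scroll area's scrollbar.
label.setSizePolicy(QSizePolicy.Policy.Preferred, QSizePolicy.Policy.Fixed)
label.setMinimumSize(QSize(0, 100))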
Another important thing to note is the properties of the scroll area. Instead of adjusting fixed heights, we can keep the labels in Preferred and set the properties of the verticalScrollBar to ScrollBarAlwaysOn, which will enable the scroll bar to appear sooner, as below.
ScrollArea Properties
Saving and running the code at the start of this tutorial gives us this scroll area app which is what we wanted.
App With Scroll Bar
Adding a QScrollArea from code
As with all widgets you can also add a QScrollArea directly from code. Below we repeat the above example, with a flexible scroll area for a given number of widgets, using code.
from PyQt6.QtWidgets import (QWidget, QSlider, QLineEdit, QLabel, QPushButton, QScrollArea,
                             QApplication, QHBoxLayout, QVBoxLayout, QMainWindow)
from PyQt6.QtCore import Qt, QSize
from PyQt6 import QtWidgets, uic
import sys


class MainWindow(QMainWindow):

    def __init__(self):
        super().__init__()
        self.initUI()

    def initUI(self):
        self.scroll = QScrollArea()  # Scroll area that contains the widgets, set as the central widget
        self.widget = QWidget()      # Widget that contains the collection of labels
        self.vbox = QVBoxLayout()    # The vertical box layout that holds the labels

        for i in range(1, 50):
            object = QLabel("TextLabel")
            self.vbox.addWidget(object)

        self.widget.setLayout(self.vbox)

        # Scroll area properties
        self.scroll.setVerticalScrollBarPolicy(Qt.ScrollBarPolicy.ScrollBarAlwaysOn)
        self.scroll.setHorizontalScrollBarPolicy(Qt.ScrollBarPolicy.ScrollBarAlwaysOff)
        self.scroll.setWidgetResizable(True)
        self.scroll.setWidget(self.widget)

        self.setCentralWidget(self.scroll)

        self.setGeometry(600, 100, 1000, 900)
        self.setWindowTitle('Scroll Area Demonstration')
        self.show()

        return


def main():
    app = QtWidgets.QApplication(sys.argv)
    main = MainWindow()
    sys.exit(app.exec())


if __name__ == '__main__':
    main()
If you run the above code you should see the output below, with a custom widget repeated multiple times down the window, and navigable using the scrollbar on the right.
Scroll Area App
Next, we'll step through the code to explain how this view is constructed.
First we create our layout hierarchy. At the top level we have our QMainWindow, which we can set the QScrollArea onto using .setCentralWidget. This places the QScrollArea in the window, taking up the entire area.
To add content to the QScrollArea we need to add a widget using .setWidget, in this case we are adding a custom QWidget onto which we have applied a QVBoxLayout containing multiple sub-widgets.
This gives us the following hierarchy in the window: QScrollArea, containing a QWidget, which carries a QVBoxLayout holding the individual widgets.
Finally we set up properties on the QScrollArea, setting the vertical scrollbar Always On and the horizontal Always Off. We allow the widget to be resized, and then add the central placeholder widget to complete the layout.

self.scroll.setVerticalScrollBarPolicy(Qt.ScrollBarPolicy.ScrollBarAlwaysOn)
self.scroll.setHorizontalScrollBarPolicy(Qt.ScrollBarPolicy.ScrollBarAlwaysOff)
self.scroll.setWidgetResizable(True)
self.scroll.setWidget(self.widget)
Finally, we will add the QScrollArea as the central widget for our QMainWindow, set up the window dimensions and title, and show the window.

self.setCentralWidget(self.scroll)
self.setGeometry(600, 100, 1000, 900)
self.setWindowTitle('Scroll Area Demonstration')
self.show()
Conclusion

In this tutorial we've learned how to add a scrollbar with an unlimited number of widgets, programmatically or using Qt Designer. Adding a QScrollArea is a good way to include multiple widgets, especially on apps that are data intensive and require objects to be displayed as lists.
Have a go at making your own apps with QScrollArea and share with us what you have made!
For more information about using QScrollArea check out the PyQt6 documentation. | https://www.pythonguis.com/tutorials/pyqt6-qscrollarea/ | CC-MAIN-2022-40 | en | refinedweb |
Testing
Deno has a built-in test runner that you can use for testing JavaScript or TypeScript code.
Quickstart
Firstly, let's create a file url_test.ts and register a test case using the Deno.test() function.
// url_test.ts
import { assertEquals } from "";

Deno.test("url test", () => {
  const url = new URL("./foo.js", "");
  assertEquals(url.href, "");
});
Secondly, run the test using the deno test subcommand.

$ deno test url_test.ts
running 1 test from
test url test ... ok (2ms)
test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out (9ms)
Writing tests
To define a test you need to register it with a call to the Deno.test API. There are multiple overloads of this API to allow for greatest flexibility and easy switching between the forms (eg. when you need to quickly focus a single test for debugging, using the only: true option):
import { assertEquals } from ""; // Compact form: name and function Deno.test("hello world #1", () => { const x = 1 + 2; assertEquals(x, 3); }); // Compact form: named function. Deno.test(function helloWorld3() { const x = 1 + 2; assertEquals(x, 3); }); // Longer form: test definition. Deno.test({ name: "hello world #2", fn: () => { const x = 1 + 2; assertEquals(x, 3); }, }); // Similar to compact form, with additional configuration as a second argument. Deno.test("hello world #4", { permissions: { read: true } }, () => { const x = 1 + 2; assertEquals(x, 3); }); // Similar to longer form, with test function as a second argument. Deno.test( { name: "hello world #5", permissions: { read: true } }, () => { const x = 1 + 2; assertEquals(x, 3); }, ); // Similar to longer form, with a named test function as a second argument. Deno.test({ permissions: { read: true } }, function helloWorld6() {"); } });
Test steps
The test steps API provides a way to report distinct steps within a test and do setup and teardown code within that test.
Deno.test("database", async (t) => { const db = await Database.connect("postgres://localhost/test"); // provide a step name and function await t.step("insert user", async () => { const users = await db.query( "INSERT INTO users (name) VALUES ('Deno') RETURNING *", ); assertEquals(users.length, 1); assertEquals(users[0].name, "Deno"); }); // or provide a test definition await t.step({ name: "insert book", fn: async () => { const books = await db.query( "INSERT INTO books (name) VALUES ('The Deno Manual') RETURNING *", ); assertEquals(books.length, 1); assertEquals(books[0].name, "The Deno Manual"); }, ignore: false, // these default to the parent test or step's value sanitizeOps: true, sanitizeResources: true, sanitizeExit: true, }); // nested steps are also supported await t.step("update and delete", async (t) => { await t.step("update", () => { // even though this test throws, the outer promise does not reject // and the next test step will run throw new Error("Fail."); }); await t.step("delete", () => { // ...etc... }); }); // steps return a value saying if they ran or not const testRan = await t.step({ name: "copy books", fn: () => { // ...etc... }, ignore: true, // was ignored, so will return `false` }); // steps can be run concurrently if sanitizers are disabled on sibling steps const testCases = [1, 2, 3]; await Promise.all(testCases.map((testCase) => t.step({ name: `case ${testCase}`, fn: async () => { // ...etc... }, sanitizeOps: false, sanitizeResources: false, sanitizeExit: false, }) )); db.close(); });
Outputs:
test database ...
  test insert user ... ok (2ms)
  test insert book ... ok (14ms)
  test update and delete ...
    test update ... FAILED (17ms)
      Error: Fail.
        at <stack trace omitted>
    test delete ... ok (19ms)
  FAILED (46ms)
  test copy books ... ignored (0ms)
  test case 1 ... ok (14ms)
  test case 2 ... ok (14ms)
  test case 3 ... ok (14ms)
FAILED (111ms)
Notes:
- Test steps must be awaited before the parent test/step function resolves or you will get a runtime error.
- Test steps cannot be run concurrently unless sanitizers on a sibling step or parent test are disabled.
- If nesting steps, ensure you specify a parameter for the parent step.
Deno.test("my test", (t) => { await t.step("step", async (t) => { // note the `t` used here is for the parent step and not the outer `Deno.test` await t.step("sub-step", () => { }); }); });
Nested test steps
Running tests
To run the test, call deno test with the file that contains your test function. You can also omit the file name, in which case all tests in the current directory (recursively) that match the glob {*_,*.,}test.{ts, tsx, mts, js, mjs, jsx, cjs, cts} will be run. If you pass a directory, all files in the directory that match this glob will be run.

The glob expands to:

- files named test.{ts, tsx, mts, js, mjs, jsx, cjs, cts},
- or files ending with .test.{ts, tsx, mts, js, mjs, jsx, cjs, cts},
- or files ending with _test.{ts, tsx, mts, js, mjs, jsx, cjs, cts}
# Run all tests in the current directory and all sub-directories
deno test

# Run all tests in the util directory
deno test util/

# Run just my_test.ts
deno test my_test.ts
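Not shown in the excerpt above, but worth knowing (verify against your Deno version): individual tests can also be selected by name with the --filter flag:

# Run only tests whose name contains "url"
deno test --filter "url" url_test.ts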
⚠️ If you want to pass additional CLI arguments to the test files use -- to inform Deno that remaining arguments are scripts arguments.

# Pass additional arguments to the test file
deno test my_test.ts -- -e --foo --bar
Example: spying on a function with Sinon
Test spies are function stand-ins that are used to assert if a function's internal behavior matches expectations. Sinon is a widely used testing library that provides test spies and can be used in Deno by importing it from a CDN, such as Skypack:
import sinon from "";
Say we have two functions,
foo and
bar and want to assert that
bar is
called during execution of
foo. There are a few ways to achieve this with
Sinon, one is to have function
foo take another function as a parameter:
// my_file.js export function bar() {/*...*/} export function foo(fn) { fn(); }
This way, we can call foo(bar) in the application code, or wrap a spy function around bar and call foo(spy) in the testing code:

import sinon from "";
import { assertEquals } from "";
import { bar, foo } from "./my_file.js";

Deno.test("calls bar during execution of foo", () => {
  // create a test spy that wraps 'bar'
  const spy = sinon.spy(bar);

  // call function 'foo' and pass the spy as an argument
  foo(spy);

  assertEquals(spy.called, true);
  assertEquals(spy.getCalls().length, 1);
});
If you prefer not to add additional parameters for testing purposes only, you can also use sinon to wrap a method on an object instead. In other JavaScript environments bar might have been accessible via a global such as window and callable via sinon.spy(window, "bar"), but in Deno this will not work and instead you can export an object with the functions to be tested. This means rewriting my_file.js to something like this:

// my_file.js
function bar() {/*...*/}

export const funcs = {
  bar,
};

// 'foo' no longer takes a parameter, but calls 'bar' from an object
export function foo() {
  funcs.bar();
}
And then import it in a test file:

import sinon from "";
import { assertEquals } from "";
import { foo, funcs } from "./my_file.js";

Deno.test("calls bar during execution of foo", () => {
  // create a test spy that wraps 'bar' on the 'funcs' object
  const spy = sinon.spy(funcs, "bar");

  // call function 'foo' without an argument
  foo();

  assertEquals(spy.called, true);
  assertEquals(spy.getCalls().length, 1);
});
Structures allow you to specify a data type that consists of a number of pieces of data.
This is useful to describe a single entity that is represented by more than one variable.
Structures are defined using the struct keyword.
They can be defined in two ways. One way is to specify a structure as follows :
struct struct_name
{
    data types and statements
};
This structure can now be used to create a number of variables in your program.
Another way of defining a structure is as follows :
struct
{
    data types and statements
} struct_name;
This way of defining a structure creates only one variable named struct_name.
You cannot create more than one variable using this structure.
Contents of main.cpp :
#include <iostream>
#include <stdlib.h>
using namespace std;
Below we create a structure to hold information on a Player. Such items include the name, health, mana and strength of the player. A flag is also used to determine if he/she is alive. The way this structure has been created allows for multiple Player variables to be created.
struct Player
{
char *name;
int health;
int mana;
int strength;
bool dead;
};
Below we create a World structure. As there is only one world, we create the struct with the name after the contents of the structure. This creates only one variable with the name World.
{
int numPlayers;
int maxPlayers;
} World;
int main()
{
A Player variable can now be created like any other normal variable.
Player player1;
Variables that are part of the structure can be accessed using the dot (.) operator. A . must be placed after the variable, which must then be followed by the variable that you are wanting to access. The variables can therefore be initialized as shown below. The values of these variables can be retrieved in the same way.
player1.name = "Fred";
player1.health = 100;
player1.dead = false;
cout << player1.name << " : "
<< player1.health << endl;
Another player is created below. Another way of initializing a structure is shown here. After declaring the variable, an equals sign can be placed followed by curly brackets ( {} ). The value of each variable can then be initialized by placing their values in the order shown in the structure definition, separated by commas.
Player player2 = {
"Sally",
75,
50,
10,
false
};
cout << player2.name << " : "
<< player2.health << endl;
The code below shows how the World can be accessed straight away. This is the only variable of this type.
World.numPlayers = 2;
World.maxPlayers = 5;
cout << World.numPlayers <<
" of " << World.maxPlayers
<< endl;
system("pause");
return 0;
}
You should now be able to create simple structs. More advanced features of structs will appear in the next tutorial.
The different types of services that ColdFusion supports are referenced in different ways from the client when calling NetConnection.getService( ). For example, you must reference a service implemented as a ColdFusion page differently than a ColdFusion Component. Table 5-2 shows how different types of services should be referenced using getService( ).
In order to adapt to a variety of software architectures and personal preferences, ColdFusion supports Flash Remoting services implemented in a few different ways. You can write your remote services as ColdFusion pages, ColdFusion Components, or Server-Side ActionScript. When the Flash client references a remote service on the ColdFusion Server, ColdFusion looks up the service on the server and invokes it, returning the result to the Flash client. Flash Remoting looks for services in this order:
ColdFusion page (.cfm or .cfml)
ColdFusion Component (.cfc)
Server-Side ActionScript (.asr)
Now that we have covered some of the fundamentals of Flash Remoting with ColdFusion, let's look at some more specific elements of Flash and ColdFusion integration.
To invoke a ColdFusion page from a Flash application, follow these steps:
Set the gateway URL.
Create a connection object using NetServices.createGatewayConnection( ).
Create a service object by invoking getService( ) on the connection object obtained in Step 2. The service path includes the directory name, but not the .cfm page name.
Invoke the page as a method of the service object obtained in Step 3. (That is, use the page name, without the .cfm extension, as the method name.)
The first two steps should be very familiar by now:
NetServices.setDefaultGatewayUrl(""); var myConnection_conn = NetServices.createGatewayConnection( );
To create the service object in Step 3, specify the service name in the call to getService( ). The service name is the name of the directory containing the .cfm file (relative to the web root), substituting dots for slashes, but it does not include the .cfm file's name. For example, to invoke a .cfm page called sendEmail.cfm located in the directory wwwroot/com/oreilly/frdg/Email, create a reference to the service, as follows:
var emailService = myConnection_conn.getService("com.oreilly.frdg.Email", this);
Use the .cfm page name (without the extension) as the remote function name. The following code invokes the sendEmail.cfm ColdFusion page:
You can send as many arguments to the sendEmail.cfm page as you want:
To access the arguments passed from Flash, use the Flash.Params variable within your ColdFusion page. Flash.Params is an array containing sequentially numbered elements, one for each argument passed in from the Flash application.
For example, the following ColdFusion code accesses the four variables passed into the sendEmail.cfm page from Flash:
<cfmail to="#Flash.Params[1]#" from="#Flash.Params[2]#" subject="#Flash.Params[3]#"> #Flash.Params[4]# </cfmail>
Instead of passing ordered arguments to a ColdFusion page, you can also attach properties to an object and pass that object to your remote function. Any properties attached to the object become named arguments to the function. This example passes the to, from, subject, and body arguments as named arguments:
var args = new Object( ); args.to = toAddress; args.from = fromAddress; args.subject = subject; args.body = body; emailService.sendEmail(args);
You can express the same thing in a more succinct manner using an object literal:
To access named arguments on the server, treat the Flash variable as though it were a structure:
<cfmail to=Flash.to from=Flash.from subject=Flash.subject> #Flash.body# </cfmail>
You can also use the attribute name inside of single quotes within brackets. Don't forget to use pound signs and double quotes to surround each element, as follows:
<cfmail to="#Flash['to']#" from="#Flash['from']#" subject="#Flash['subject']#"> #Flash['body']# </cfmail>
Because named arguments are accessed by property name and not by order, the preceding ColdFusion examples work even if the arguments are attached to the object in a different order, such as:
Returning data from a ColdFusion page to Flash is as simple as assigning the return value to the Flash.Result variable. For example, to return the string "Email sent!" to the Flash client making the remote call, use the following code:
<cfset Flash.
As soon as the variable is defined, the data is returned to the client.
Example 5-1 shows the complete CF code, including some basic error handling and sending a return value. You can save the file in the remote services directory webroot\com\oreilly\frdg\cfpages under the name sendEmail.cfm.
<cftry> <cfmail to = Flash.to from = Flash.from subject = Flash.subject> #flash.body# </cfmail> <cfcatch type="Any"> <cfthrow message = "There was an error"> </cfcatch> </cftry> <cfset Flash.
The corresponding client-side ActionScript code is shown in Example 5-2. It assumes that a MessageBox component (from the Macromedia UI Components Set 2) named status_mb is available to display messages upon a successful send or an error. It assumes that the movie has text fields named to_txt, from_txt, subject_txt, and body_txt and containing appropriate text. The final sendEmail.fla file can be downloaded from the online Code Depot.
#include "NetServices.as" var my_conn; // Connection object var emailService; // Service object var myURL = ""; // Responder for general service methods function Responder ( ) { this.onResult = function (myResults) { if (myResults == null) myResults = "Email sent!"; status_mb._visible = true; status_mb.setMessage(myResults); }; this.onStatus = function (theError) { status_mb._visible = true; status_mb.setMessage(theError.description); System.onStatus = this.onStatus; }; } // Close the message box when OK is clicked status_mb.setCloseHandler("closeBox"); function closeBox ( ) { status_mb.visible = false; } // Initialize Flash Remoting function init ( ) { initialized = true; NetServices.setDefaultGatewayUrl(myURL); my_conn = NetServices.createGatewayConnection( ); emailService = my_conn.getService("com.oreilly.frdg.cfpages"); } init( ); // Send the email when the send_pb button is clicked send_pb.setClickHandler("send"); function send ( ) { var args = new Object( ); args.to = to_txt.text; args.from = from_txt.text; args.subject = subject_txt.text; args.body = body_txt.text; // Call the service, passing the responder and then the arguments emailService.sendEmail(new Responder( ), args); }
Writing Flash Remoting services as ColdFusion pages is relatively quick and easy. However, ColdFusion pages primarily are designed to return dynamic HTML to web browsers, as opposed to providing generic services to a variety of clients. ColdFusion Components (CFCs), on the other hand, are specifically designed to provide services to various clients.
CFCs are loosely modeled after Java objects and should be designed and written with many of the same principles in mind. The objective of a CFC should be to provide well-encapsulated functionality to a variety of clients. Encapsulation refers to a service's ability to provide functionality to a client without exposing anything about the implementation behind the functionality.
For example, consider a CFC called UserServices.cfc, containing a function called createUser( ), which takes several arguments pertaining to typical user data and returns a numeric key that is associated with the new user. To clients invoking createUser( ), it is not apparent whether the user information is written to a database or saved in a text file. In theory, the client shouldn't care even if the implementation behind createUser( ) changes completely, as long is it continues to take the same arguments and return a numeric key.
Now, consider an implementation of createUser( ) that requires database connection parameters to be passed to the method in addition to user information. If the implementation of createUser( ) changes to use text files, clients must change their code to pass in a file path rather than database connection parameters. It's easy to see how small changes can require changes elsewhere in an application, requiring rewriting and retesting the code.
CFCs can be invoked from Flash, ColdFusion pages, and even other CFCs. Since CFCs are typically client-agnostic (they don't care what type of client invokes them), it is important that their implementations be kept free of client-specific code. For example, if you were to access arguments passed into the CFC through the Flash variable scope, your CFC couldn't be called successfully from a ColdFusion page or another CFC. Conversely, if your CFC used the <cfoutput> tag to return HTML, it would no longer be usable from Flash. For that reason, it is also advisable not to use constructs such as session or application variables in your CFCs, as that breaks the encapsulation of the component functionality.
The other primary goal of a CFC should be code re-use. Whenever you find yourself implementing the same logic in more than one place in your code, you should consider abstracting the logic into a CFC. Once your CFC properly encapsulates your logic, you can reuse it from anywhere, including different applications and different types of clients.
Additionally, CFCs support inheritance, which means you can "layer" your CFC to get the most functionality out of the fewest lines of code. Multiple layers of abstraction allow you to maintain code in a single location, and they also allow you to change the implementation behind certain services without having to rewrite every client that depends on that CFC.
For example, let's say you create a component method called getUsStates( ) that returns a Query object with the names and abbreviations of all the U.S. states. The logic is generic enough that it could be used by multiple clients or applications without concern for whether the list of states is hardcoded or stored in a text file or database. As long as the CFC's interface never changes, none of the applications or clients calling getUsStates( ) need to change. Let's explore how ColdFusion provides everything you need to keep your CFC generic and well-encapsulated.
As you can see from the following general structure of a CFC, the component is wrapped in a single <cfcomponent> tag containing any number of nested <cffunction> tags or inline ColdFusion code:
<cfcomponent extends="superComponent" displayName="displayName"> <cffunction name="functionName" returnType="dataTypeToReturn" roles="securityRole" access="clientAccess" output="trueOrFalse" hint="functionHint " description="functionDescription"> <cfargument name="variableName" type="argumentDataType" required="trueOrFalse" default="defaultValue" description="argumentDescription"/> <!--- Component implementation here ---> <cfreturn dataToReturn /> </cffunction> <cffunction name="anotherFunction"> <!---body of function ---> </cffunction> </cfcomponent>
Any code that is not contained in a function is executed whenever the component is called. Your CFC needs to be stored in a file with a .cfc extension, which must reside in or below your web root.
Table 5-3 describes the possible attributes of a <cfcomponent> tag.
When a component extends another component, it inherits the properties and functions from the component it is extending. When structuring your CFC, remember that any functions or properties a component inherits from its parent must be just as valid for the inheriting component as it is for the parent. Any component that extends another component should be a subclass, or a more specific version, of the component it is extending.
For example, let's say you've written an ecosystem simulation using a component called PrayingMantis.cfc with the functions getLegCount( ) and getPrimaryDiet( ). You decide that your simulation needs a grasshopper, so you create Grasshopper.cfc and add getLegCount( ) and getPrimaryDiet( ) functions to it. Although praying mantises and grasshoppers have different diets, they both have six legs. Rather than duplicating getLegCount( ) in two places, you create a third component called Insect.cfc that contains the function getLegCount( ). Any function or property that is common to all insects goes in Insect.cfc, while functionality specific to one type of insects should go in a subclass of Insect.cfc. Even if you have 20 different types of insects inheriting from Insect.cfc, you need to write the code only once. And if entomologists discover they have been miscounting all these years and insects actually have seven legs, you only have to change the getLegCount( ) function in a single location to automatically change your entire insect collection.
Table 5-4 describes the possible attributes of a <cffunction> tag.
[1] Use "void" when there is no return value and use "any" when returning an object of type ASObject.
The access attribute of the <cffunction> tag determines what type of clients can access your component functions. It has four possible values:
Function is available to all local pages or other CFCs
Function is available only to other functions in the same CFC
Function is available only to CFCs in the same directory, including the CFC in which the function is declared
Function can be invoked remotely (access must be "remote" to work with Flash Remoting)
Roles are discussed in detail later in this chapter in the section Section 5.6.3.
Table 5-5 describes the possible attributes of a <cfargument> tag.
[2] Use "any" when passing an object of type ASObject to the function.
An argument whose required attribute is "yes" must be passed in at the time the function is invoked (unless the default attribute is also specified); otherwise, the function invocation fails. In the case of Flash Remoting, the responder object's onStatus( ) method is called with an error object indicating which arguments were missing.
The directory structure you use to organize your CFCs is called a package structure. Packages are useful for grouping files in logical ways. For example, I might put the PrayingMantis.cfc and Grasshopper.cfc files discussed earlier in the directory webroot\com\oreilly\frdg\bugs. All ColdFusion Components in the bugs directory in the frdg project should relate to little critters with several legs, whether they are insects or arachnids. The actual package name is relative to the document root and uses dots rather than slashes; so, in the previous example, the package name would be "com.oreilly.frdg.bugs". As discussed in Chapter 2, using domain names as directory structures prevents namespace collisions and keeps your code better organized.
Invoking a CFC from Flash is very similar to invoking a ColdFusion page. To create an instance of a service in your Flash movie, you call getService( ) on your NetConnection instance, passing in the fully qualified component name (the entire name of the CFC, including the package name). Remember that package names are relative to the document root and use dots in place of slashes.
To invoke a CFC from a Flash application, follow these steps:
Set the gateway URL.
Create a connection object using NetServices.createGatewayConnection( ).
Create a service object by invoking getService( ) on the connection object obtained in Step 2. The service path includes the directory name and the name of the .cfc file, excluding the .cfc extension.
Invoke a function within the component as a method of the service object obtained in Step 3. Any functions defined within the component can be accessed as methods of the service object.
The following code creates an instance of the service, which points to our Grasshopper component and invokes the getLegCount( ) method:
var myURL = ""; var myService = "com.oreilly.frdg.bugs.Grasshopper"; NetServices.setDefaultGatewayUrl(myURL); var my_conn = NetServices.createGatewayConnection( ); var grasshopperSevice = my_conn.getService(myService, responderObject); grasshopperService.getLegCount( );
As with the earlier ColdFusion page example, we pass the service name to getService( ). However, note that when using a CFC the service path includes the name of the .cfc file (in this case Grasshopper.cfc) without the .cfc extension. Contrast this with the case of a ColdFusion page, in which the .cfm file is not part of the service path passed to getService( ) but is instead invoked as the method on the service object returned by getService( ).
Remember that a component can define multiple functions (one for each <cffunction> tag), and each one can be accessed as a method of the service object returned by getService( ). That is, as long as the responder object passed to getService( ) is capable of handling different types of results, you can reuse the same service instance to call other functions on the same component:
grasshopperService.getPrimaryDiet( );
You can pass arguments into a remote CFC function the same way you pass arguments to the ColdFusion page. For example, if the component defines a setSpecies( ) function, you can pass arguments to it as follows:
grasshopperService.setSpecies("Melanoplus Differentialis");
ColdFusion Components are extremely versatile, because the code inside of the <cffunction> tags can be written in CFML or CFScript and can be included from external files. Including code in your functions using the <cfinclude> tag is a good way to reuse code, and it allows you to add layers of abstraction to your application. CFC functions can also invoke each other (as long as the access attributes allow for it). They can even create instances of Java objects and use them.
The next three subsections demonstrate various techniques. The first section is a working example of a CFC that performs a service (sends an email) and doesn't return anything to the client. The next section demonstrates performing a database query and returning data to the client. It also shows the benefit of component functions calling other component functions. The last subsection shows how to wrap a Java class in a CFC so it is more easily accessible to Flash through Flash Remoting.
Sending email is a common task that nearly all web-based applications support. There is no way to send email directly from a Flash movie unless you use a mailto URL in conjunction with the getURL( ) function, which is far from ideal since it assumes the user is on his own computer and has a mail client and server properly configured. These are not necessarily things you can count on, since Flash runs on many different types of devices and in many different places. The solution is to use Flash Remoting to delegate the task to a server. Using ColdFusion, the entire procedure can be done in just a few lines of code.
Example 5-3 shows code for a CFC capable of sending email on behalf of a client. It is analogous to the ColdFusion page of Example 5-1 but is written as a component. Name the component Email.cfc and put it in the package com.oreilly.frdg.
<cfcomponent> <cffunction name="sendEmail" access="remote"> <cfargument name="to" required="true" /> <cfargument name="from" required="true" /> <cfargument name="subject" required="true" /> <cfargument name="body" required="true" /> <cftry> <cfmail to = to from = from subject = subject> #body# </cfmail> <cfcatch type="Any"> <cfthrow message = "There was an error"> </cfcatch> </cftry> </cffunction> </cfcomponent>
In order for Email.cfc to work, you must have an email server configured through the ColdFusion administrator.
The component in Example 5-3 is a simple, generic service that can be invoked remotely from a Flash application (since the access attribute is "remote") or locally from a ColdFusion page or even another CFC. The ActionScript code from Example 5-2, used with the ColdFusion page, must be modified slightly to invoke the sendEmail( ) method from the component. On the last line of the init( ) function in Example 5-2, change the getService( ) invocation to use the path to the new component (omit the .cfc extension):
Note that we've named our function sendEmail( ) in imitation of the ColdFusion page named sendEmail.cfm from Example 5-1. This allows us to invoke sendEmail( ) using the same client-side code from the original Example 5-2; however, in the earlier case sendMail was the name a .cfm page, and here it is the name of a function within a CFC:
Even though the sendEmail( ) method doesn't return anything to the client, the responder object's onResult( ) method is called when the function completes. No argument is passed back to the onResult( ) function. Notice that the ActionScript code in Example 5-2 created a default message to be displayed if there was no result from the server.
Let's see how to return a value from a CFC. The CFC listed in Example 5-4 defines two functions, which become methods of the component. The getAllStates( ) method performs a database query to retrieve information on all the U.S. states and returns the result. Notice how the getStatesByRegion( ) method first calls getAllStates( ) to avoid unnecessarily repeating code. The code in Example 5-4 can go in a file called StatesEnum.cfc in the package com.oreilly.frdg.
<cfcomponent> <cffunction name="getAllStates" access="remote" returnType="query"> <cfquery datasource="Northwind" name="allStates"> SELECT StateID, StateName, StateAbbr, StateRegion FROM USStates </cfquery> <cfreturn #allStates# /> </cffunction> <cffunction name="getStatesByRegion" access="remote" returnType="query"> <cfargument name="region" type="string" required="true" /> <cfset allStates=this.getAllStates( ) /> <cfquery dbtype="query" name="regionalStates"> SELECT * FROM allStates WHERE StateRegion = '#region#' </cfquery> <cfreturn #regionalStates# /> </cffunction> </cfcomponent>
The StatesEnum.cfc component takes advantage of a nice feature of ColdFusion called queries of queries. The getAllStates( ) method returns a Query object that was returned from the <cfquery> tag used to query the database. Rather than have the getStatesByRegion( ) method use its own <cfquery> tag to run the same query, we can extract the subset of interest by performing a more specific query on the allStates Query object returned by getAllStates( ). Using ColdFusion's query of query capability is efficient in terms of both performance and coding practice.
Example 5-5 shows the client-side ActionScript code for invoking the getAllStates( ) and getStatesByRegion( ) functions remotely.
NetServices.setDefaultGatewayUrl(""); var con = NetServices.createGatewayConnection( ); var statesSevice = con.getService("com.oreilly.frdg.StatesEnum", this); statesService.getAllStates( ); statesService.getStatesByRegion("south"); function onResult (states) { // states_cb is the instance name of a ComboBox UI component. states_cb.setDataProvider(states); }
The states parameter passed to the onResult( ) function is cast (or transformed) into an ActionScript RecordSet object by the Flash Remoting gateway. Notice how the function is not concerned with whether it is being passed a recordset of all the U.S. states or a subset based on region. Its only job is to populate a ComboBox with the data it receives. Because we can use the same responder object for calls to both getAllStates( ) and getStatesByRegion( ), we again are able to reuse code.
Although you can access arguments to a component function using the Flash.Params array or as named parameters, you should use <cfargument> tags instead.
There is also an additional variable scope you can use with components, called the arguments scope. The arguments scope can be used with dot notation (arguments.argumentName) or the Structure model (arguments["argumentName"]). If you are going to use a variable scope, you should use the arguments scope instead of the Flash scope for two reasons:
It keeps your components generic so that they can be invoked by clients other than remote Flash applications.
It is easier to keep track of the variables that are prepended with their scopes. For example, if the getStatesByRegion( ) function is very long, referring to the region argument as arguments.region makes your code clearer. Even if your function contains <cfargument> tags, it's a good idea to use the arguments scope for the sake of readability.
Although ColdFusion does not support creating or calling methods on Java objects directly, you can easily create CFCs or ColdFusion pages that can delegate to Java objects. That is, you can use a CFC as a thin layer between your Flash Remoting application and the Java object layer. Chapter 7 covers Java and Flash Remoting integration in detail, but here is a short, simple example in the context of a CFC. This example demonstrates the following process:
The Flash client invokes a remote CFC function.
The CFC function instantiates a Java object, passing it the argument that was sent from the Flash client.
The CFC calls a method on an instance of the Java class, which returns a string.
The string returned from the Java method is returned by the CFC to the Flash client.
Figure 5-2 illustrates the process of calling a Java object wrapped in a ColdFusion Component.
There are three parts to this example:
The client-side ActionScript code
The ColdFusion Component
The Java object
Let's work backward and start with the Java object.
The Java object, called StringReverser and shown in Example 5-6, has a constructor that accepts a String object. There is only one method on StringReverser, called getReversedString( ), which reverses the order of the characters in the string and returns it as a new string.
package com.oreilly.frdg; public class StringReverser { private String target; public StringReverser (String target) { this.target = target; } public String getReversedString ( ) { StringBuffer reversedString = new StringBuffer( ); char[] chars = target.toCharArray( ); for (int i = chars.length; i > 0; --i) { reversedString.append(chars[i-1]); } return reversedString.toString( ); } }
You can compile the StringReverser.java file with any Java IDE or with the command-line compiler javac.exe. Once the Java file is compiled, place the resulting .class file in any directory included in ColdFusion's classpath. In a typical installation of ColdFusion MX on a Windows machine, this would be at C:\CFusionMX\runtime\servers\lib.
If you choose to put your classes in a package, remember to add the package declaration to the top of the .java file, as we have done here, and to create the appropriate directory structure. The StringReverser.class should go into a directory structure of classpath\com\oreilly\frdg. You can also put your class files into a Java archive (.jar) file, as long as the .jar files are included in ColdFusion's classpath.
Let's look at the CFC that serves as a proxy between the Flash client and the StringReverser Java object. The code in Example 5-7 is contained in a file called JavaExamples.cfc, located in the package com.oreilly.frdg.
<cfcomponent> <cffunction name="reverseString" access="remote" returnType="string"> <cfargument name="target" type="string" required="true"> <cfobject type="Java" action="create" class="StringReverser" name="reverserClass" /> <cfset reverser = reverserClass.init(#target#) /> <cfset reversedString = reverser.getReversedString( ) /> <cfreturn #reversedString# /> </cffunction> </cfcomponent>
We use the <cfobject> tag to get a reference to the StringReverser class, not an instance of StringReverser. At this point, you have access to only static members of the class. The instance of StringReverser is actually created and returned from the init( ) method call on the line following the <cfobject> tag.
The StringReverser class does not have an explicit init( ) method. init( ) is a ColdFusion function that is required to be called whenever instantiating a Java object. Calling init( ) from ColdFusion invokes the corresponding class constructor. If you attempt to reference a nonstatic member of a Java object before calling an init( ) method, the object's default no-argument constructor is called if one exists. If the object does not have any constructors at all, allowing the default no-argument constructor to be called in this manner is fine. However, if your object has one or more constructors and does not explicitly define a no-argument constructor, attempting to access nonstatic members before initializing the Java object results in an exception being thrown and propagated up to the Flash client.
By passing the target variable into the init( ) method, StringReverser's constructor is called, returning an instance of StringReverser, which has the value of target set as a member variable. The instance is assigned to the ColdFusion variable reverser, which is the instance that you call methods on. Calling getReversedString( ) on reverser returns the value of target, except with its characters in reverse order, which is assigned to the ColdFusion variable reversedString. We then return reversedString, which, in this case, is returned to the Flash client. The instance of StringReverser goes out of scope and is ready for garbage collection at the moment the CFC function returns. In the case of ColdFusion pages and CFCs, instances of Java objects go out of scope when the entire page has finished executing.
Now let's take a look at the client-side ActionScript for this exercise, shown in Example 5-8.
NetServices.setDefaultGatewayURL(""); var my_conn = NetServices.createGatewayConnection( ); var javaExService = my_conn.getService("com.oreilly.frdg.JavaExamples", this); javaExService.reverseString("this is a top secret code"); function onResult (response) { trace(response); } function onStatus (error) { trace("error: " + error.description); }
The ActionScript code for this example is straightforward. Calling the remote reverseString( ) function on JavaExamples.cfc returns the reversed string to the onResult( ) callback function. The result should be the string "edoc terces pot a si siht" printed in your Output window.
Like Java class definitions, CFCs are self-documenting, which means that the components themselves?or, more precisely, the cfexplorer.cfc located in wwwroot/CFIDE/componentutils?can describe how each component is used. A component's ability to reflect upon itself is referred to as introspection or component metadata. There are two primary advantages to component metadata:
It exposes components' application programming interface (API) without exposing the implementation of the component.
It is always up-to-date.
Clients using your components should not care or rely upon how a function is implemented; they should be concerned only with how a function is used and what data it returns. Hiding the internal implementation of a component is a key requirement of abstraction and encapsulation. Component metadata is a great way for developers to access the information they need without looking through the component's code, which not only defeats the goal of encapsulation but also takes a great deal of time.
Component metadata also allows CFC developers to concentrate on their code, rather than manually keeping the documentation current. Component metadata is generated as HTML, so it can be viewed by simply referencing the .cfc file's URL directly from a web browser, like this:
When the ColdFusion Server receives a request for a CFC file rather than a request to invoke a particular function within it, the request is redirected to webroot\CFIDE\componentutils\cfexplorer.cfc. For security reasons, you are prompted to enter the ColdFusion Remote Development Security (RDS) password, after which you should see a document similar to Figure 5-3, which shows the documentation for StatesEnum.cfc from Example 5-4.
ColdFusion metadata includes the following information:
The inheritance tree of the component
The directory the component is in
The package the component is in
The display name of the component, if the displayName attribute is present
Each property that the component contains
All the functions the component contains
What arguments each function accepts, the arguments' datatypes, and whether they are required
The ColdFusion variable name used to reference each argument
Arguments' default values, if present
What type of access each function allows
The value of the function's hint attribute, if present
Whether output is enabled for each function
The datatype of the return value of each function
If you are not sure which component you are looking for, you can use the Component Browser to browse all the components on the ColdFusion Server. To access the Component Browser, enter a URL of the following form in your web browser:
For example:
Dreamweaver MX also allows you to introspect CFCs from its Components panel. In Dreamweaver MX, after you've defined a site and set your server model to ColdFusion, the Components panel displays all components that are available on the server, in a tree, as shown in Figure 5-4. Right-clicking (in Windows) or Ctrl-clicking (on the Macintosh) on a component, method, or argument within the panel gives details about that particular item.
You can also create your own interface for introspecting CFCs.
In addition to being able to browse CFC services from your browser and from Dreamweaver MX, you can also browse them directly from the Flash authoring environment through the Service Browser (Window Service Browser). You cannot discover unknown services, because you must enter the address of the service in order to find it; however, it is a convenient way to keep important component APIs available while you write ActionScript code against them. Chapter 2 describes the Service Browser, which is shown in Figures Figure 2-4 and Figure 5-5.
The Description field | http://etutorials.org/Macromedia/Fash+remoting.+the+definitive+guide/Part+II+The+Server-Side+Languages/Chapter+5.+Flash+Remoting+and+ColdFusion+MX/5.3+Service+Name+Mappings/ | crawl-001 | en | refinedweb |
Here’s another problem:
"<itunes:block>
This tag is used to block a podcast or an episode within a podcast from being posted to iTunes. Only use this tag when you want a podcast or an episode to appear within the iTunes podcast directory."
The first and second sentences are contradictory, aren’t they? Shouldn’t the second say "...when you DON’T want..."?
And what should the value of this element be? Should it just be empty (eg. <itunes:block /> or <itunes:block></itunes:block>)--ie., does its mere presence indicate that the item should be blocked (that's how I read it), or should it have a value?
And finally, I assume this can appear either at the channel or item level to block the entire feed or just one item, right? Being explicit about that wouldn’t hurt.
Posted by Antone Roundy at
What’s your guess about the itunes:block element? No example, and I doubt they mean what the spec text says “This tag is used to block a podcast or an episode within a podcast from being posted to iTunes. Only use this tag when you want a podcast or an episode to appear within the iTunes podcast directory.” but once you change that to “not to appear” do they mean it’s an empty element, or one or the other of their Yyes|Nno values?
Posted by Phil Ringnalda at
Note to self: learn to type faster :)
Posted by Phil Ringnalda at
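For what it’s worth, the two readings being debated would look something like this in a feed. Neither form appears in Apple’s examples, so both are guesses:

    <!-- reading 1: mere presence blocks the podcast or episode -->
    <itunes:block />

    <!-- reading 2: an explicit yes/no value -->
    <itunes:block>Yes</itunes:block>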
Sam Ruby: Podcast Specifications Questions [link]...
Excerpt from del.icio.us/tag/itunes at
Other questions worth asking (a side-by-side sketch follows the list):
- Why itunes:category? Why not reuse the category element in RSS, which includes a @domain attribute?
- Why itunes:summary? Why not reuse channel/description or item/description?
- Why itunes:author? Why not reuse channel/managingEditor or item/author?
- Why itunes:keywords? Why not reuse dc:subject?
- Why itunes:owner? Why not reuse channel/webMaster?
- Why itunes:image? Why not reuse channel/image?
- Why are they redefining the content model of copyright?
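Side by side, the duplication is hard to miss: under the iTunes spec, a channel ends up carrying much of the same information twice. A sketch, with invented values:

    <channel>
      <description>A weekly show about syndication formats.</description>
      <itunes:summary>A weekly show about syndication formats.</itunes:summary>
      <managingEditor>[email protected] (Jane Doe)</managingEditor>
      <itunes:author>Jane Doe</itunes:author>
      ...
    </channel>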
Here a Pod, There a Pod
iTunes 4.9 is out, with its previously-announced support for podcasts. The podcast directory at the iTunes Music Store is very...... [more]
Trackback from Musings at
Why itunes:category? Why not reuse the category element in RSS
I 'spose they could have specified their own taxonomy domain. And warned users that all other taxonomies/plain-text categories would be ignored.
Why itunes:summary?
Cuz, unlike the notoriously under-specified <description> element, they can say:
“Limited to 4000 characters or less, plain text, no HTML”
Why itunes:author?
Cuz, notoriously, the content of the item-level <author> element is an email address.
Why itunes:keywords? Why not reuse dc:subject?
Fair enough, though it is yet another namespace (funky, y’know...)
Why itunes:owner? Why not reuse channel/webMaster?
“Email address for person responsible for technical issues relating to channel.”
Not the same thing at all...
Why itunes:image? Why not reuse channel/image?
"This artwork can be larger than the maximum allowed by RSS. Details on the size recommendations are in the
section below."
Why are they redefining the content model of copyright?
?
Posted by Jacques Distler at
Podcasting is the New Napster
I love the smell of disintermediation in the morning. The new iTunes software update enables a little Podcasts folder. Suddenly you get a clean aggregation of a selected fat head of the long tail in an integrated experience....
Excerpt from Ross Mayfield's Weblog at
iTunes 4.9 (with podcasting support) AVAILABLE!Get it NOW! Oh and while you’re at it: don’t forget to pick up the new iPod updater as well. First quick impression: cool stuff. The interface still needs a little polishing, though - the menu navigation seems inconsistent. The tags are not...
Excerpt from - The GadgetGuy at
Countdown
Links and a countdown....
Excerpt from Anne’s Weblog about Markup & Style at
Word on the street is that <itunes:image> doesn’t work, and <itunes:link> does.
Most amusing (in the sense of "I laughed so hard I spit bitten-tongue blood across the room") find, while looking at their prominently featured partners' feeds for clues: Disney has replaced the core /channel/image/url with <itunes:link>(their image url)</itunes:link>
Well, back to the RSS 1.0 RSS 0.91 module, which defines <rss091:webmaster> for sideways compatibility with <webMaster>. If Mark ever does find that hobby, I sure hope he tells me what it is, and how to get started in it.Posted by Phil Ringnalda at
Sam Ruby identifies the IP address of every person who comments on his weblog, as you can see by hovering over a name. For message boards and weblog software I have developed, I won’t reveal user IP addresses out of privacy and security...
Excerpt from Workbench at
Sock Puppets
FWIW, my experience is that both trolling and spamming were greatly reduced once I implemented this. Related: Beware of Strangers, Users Who Share Locations... [more]
Trackback from Sam Ruby at
Phil - even more amusing, there is a very subtle mistake in Disney’s xmlns:itunes declaration, making all of their iTunes metadata effectively invisible to any parser that understands namespaces.
Posted by Sam Ruby at
Interesting. I wonder if it works anyway (for things other than the image), implying qname-matching regex parsing?
Posted by Phil Ringnalda at
I wonder if it works anyway (for things other than the image), implying qname-matching regex parsing?
Yes, the Disney namespace (PodCast instead of Podcast) works. But it’s not based on qname, and it’s not based on regular expressions. From my tests so far ( [link] ):
- If the itunes namespace matches exactly, it works.
- If the itunes namespace matches case-insensitively (lowercase, uppercase, or any mixed case other than the correct case), it works. (This is a generalization of the Disney case.)
- If the itunes namespace matches case-insensitively and is defined by a prefix other than “itunes”, it works.
- If the itunes namespace is something else (like substituting “example.com” for “itunes.com” in the namespace), it works but does not find any of the information in the iTunes namespace, and it falls back on information in the RSS feed (channel/description instead of channel/itunes:subtitle, etc). This “etc” probably bears further research, to determine the mapping between default RSS elements and iTunes elements.
- If the namespace declaration is missing, it does not work at all.
- If the feed is ill-formed (missing the final end tag on "rss"), it does not work at all.
By “works”, I mean it downloads and parses the feed, displays it in the local Podcasts subscriptions list, and downloads my sample MP3.
By “does not work at all”, I mean it displays nothing from the feed, only the URL and a “!” icon that, when selected, displays an alert “URL does not seem to be a valid Podcast URL.”
So it appears that iTunes uses a real, draconian, namespace-aware XML parser... except that namespaces are case-insensitive.
I pass along this information without further comment or judgement.
Posted by Mark at
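To make those findings concrete, all of the following declarations reportedly resolve to the iTunes namespace in iTunes 4.9. The URI here is the commonly cited one; per the tests above, only its spelling, not its case, matters:

    <!-- correct case: works -->
    <rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd">

    <!-- miscased URI (the Disney case): also works -->
    <rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/PodCast-1.0.dtd">

    <!-- different prefix, fully uppercased URI: also works -->
    <rss version="2.0" xmlns:applepod="HTTP://WWW.ITUNES.COM/DTDS/PODCAST-1.0.DTD">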
It appears that iTunes 4.9 doesn’t EVER respect the charset parameter in the HTTP Content-type header. Test cases:
- [link] displays subtitle with curly quotes (correct - HTTP declares “application/xml” with no encoding, XML says “windows-1252”, iTunes correctly treats it as windows-1252)
- [link] displays subtitle with blocks where quotes should be (incorrect - HTTP says windows-1252, XML has no encoding, iTunes incorrectly treats it as UTF-8)
- [link] displays subtitle with blocks where quotes should be (incorrect - HTTP says windows-1252, XML says iso-8859-1, iTunes incorrectly treats it as iso-8859-1)
Posted by Mark at
<?xml version='1.0' encoding='iso-8859-1'?>
If this were removed, and if iTunes 4.9 ignored the charset parameter, then the feed would not be considered well formed.
Posted by Sam Ruby at
If this were removed, and if iTunes 4.9 ignored the charset parameter, then the feed would not be considered well formed.
You are correct, that feed was not actually testing what I thought it was testing. (The other two appear to be accurate tests.) I’ve corrected the feed to match my prose description (HTTP declares “windows-1252”, XML has no encoding), and iTunes now exhibits the behavior you predicted: it does not look at the HTTP charset parameter at all, and since the XML body has no encoding information, it fails to parse the feed at all.
Posted by Mark at
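Laid out side by side, the charset combinations reported above come down to this (the media type itself is incidental; only the charset parameter is at issue):

    HTTP charset: (none)         XML encoding: windows-1252  ->  treated as windows-1252 (correct)
    HTTP charset: windows-1252   XML encoding: iso-8859-1    ->  treated as iso-8859-1 (HTTP ignored)
    HTTP charset: windows-1252   XML encoding: (none)        ->  feed rejected (HTTP ignored)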
MarkP: "it appears that iTunes uses a real, draconian, namespace-aware XML parser... "except that namespaces are case-insensitive. (just keeps getting better)...
Excerpt from LaughingMeme's MLPs at
MarkP: "it appears that iTunes uses a real, draconian, namespace-aware XML parser... "kellan : MarkP: “it appears that iTunes uses a real, draconian, namespace-aware XML parser... ” - except that namespaces are case-insensitive. (just keeps getting better)...
Excerpt from HotLinks - Level 1 at
Liberal RSS Parsing and Apple iTunes... [more]
Trackback from Dare Obasanjo aka Carnage4Life at
Podcasting Spec Iterations Ahead
Tantek Çelik: Excellent! Here’s a few more specific questions. In particular, don’t miss this one. I’d also suggest that you investigate the state of common practice at the moment, for example, Disney, ESPN, CNN and New... [more]
Trackback from Sam Ruby
The itunes:image tag has been fixed and now works.
The format is as follows:
<itunes:image href="..." />
There are also updated docs that they say should be coming out shortly.
Posted by Otto at
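If Otto’s report holds up (note Sam’s caveat just below about conflicting input), the corrected usage would presumably sit at the channel level, something like this, with an invented URL:

    <channel>
      <title>My Podcast</title>
      <itunes:image href="http://example.com/artwork.jpg" />
      ...
    </channel>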
Otto: I’m still hearing conflicting input on this. Once the spec is available, it should literally only be a matter of hours before I make the change and the code is online.
Posted by Sam Ruby at
You know people, I am a total Apple fan, have bought a Mac with my last money even though I am unemployed, got excited about the iTunes podcasting feature, and I am SO disappointed about them bending the rules to suit their firm instead of first learning all the RSS2 features properly.
I see no reason whatsoever for their tags which are merely doubles of the RSS2 tags.
Last but not least: Even worse is their “nice” “enhanced” AAC format, which is leading lots of podcasters to release their podcasts in a compression format that no other MP3 player can play.
I suggest to all of you to boycott this format by not producing it and not subscribing to such feeds if alternate MP3 feeds are available.
Don’t let Apple suck us all into their empire and make us dependent on them.
I am trying to develop another podcast player on CyMeP.org in order to keep podcasting in the community and not in Apple’s hands. You are welcome to suggest features you would like to see there to me - let’s do it together!
Posted by CyMeP.org at
HELP PLEASE!
I’m a newbie to podcasting and much of what you have posted is quite helpful- here’s my problem: I’ve got my tags correct, etc. for iTunes, but my initial post of the location of the .xml file to iTunes is WRONG! I want to edit that within iTunes, but it seems to be unchangeable- can I adjust that or DELETE that podcast? There does not seem to be any way to delete a published podcast- any help would be greatly appreciated- [email protected]
Posted by Jean Larroux at
Jean, you might have more luck on the Syndication-dev mailing list
Posted by Sam Ruby at
FeedValidator.rb?
This started out as a Random Thought (RT). background The Feed Validator is organized as a recursive descent parser for various feed formats. It is implemented in an object oriented fashion, where each element ‘knows’ what the possible chi... [more]
Trackback from Sam Ruby at
Can I host a podcast on my own Apache server? What are the requirements for a server to host podcasting?
Posted by Tyrse
I just spent 3 days trying to find one parser in this world that would show the iTunes channel image.
Seems I am SOL. <itunes:image href="..." /> seems unparsable?
Web Standards????
I ran into the same issues this morning. And now, when I try to submit my feed, I get “We are currently experiencing technical difficulties. Please try again later.”
Oh, well. Hopefully they get it all worked out. Still not sure how they want images included, though.
Posted by Christian Cantrell at
For example, this horrible creation seems to function in iTunes.
But wait! There’s more! I’ve devised some more tests ( [link] ). To sum up what we’ve discovered so far:
- Namespaces are case-insensitive, which is actually a good thing, since the namespace defined in the spec is wrong. (Holy wellformedness, Batman! Even my Ultraliberal^H^H^H^H^H^H^H^H^H^H^H^H Universal Feed Parser missed that one. Quick, to the UnitTestCave!)
- Tag names outside the iTunes namespace are case-insensitive. Been there, done that. Look for me at Apachecon this fall; I’ll be the guy wearing a nametag that reads “Hello my name is textinput.”
- iTunes completely ignores HTTP. Content-type is meaningless, charset doubly so. Real men use UTF-8.
- All attributes on the enclosure element are optional except url. On further reflection, this is probably not a bad thing.
- itunes:link is either an element with URL content, a subelement of image with URL content, or an element with rel, type, and href attributes. The spec says one thing, Apple’s discussion forum says another, and a high-profile feed says a third. Blog posts and tea leaves...
- channel/itunes:subtitle is displayed properly even when defined outside the iTunes namespace. Ditto item/subtitle.
- itunes:duration can also be defined without those pesky namespaces. However, for consistency, it is important to note that iTunes elements that are declared outside the iTunes namespace are case-sensitive.
- Dates can be declared without a weekday, time, or timezone. June 31st is a legal date, but for interoperability reasons, it is displayed as July 1st.
- If an element contains inline (not escaped) XHTML, only the text content of the last child element is displayed. This feed displays “now” as its channel description.
- iTunes completely ignores HTTP (part 2). It will continue to attempt to fetch feeds marked 410 Gone forever. This reminds me of one of my ex-girlfriends, but that’s a story for another day.
- iTunes completely ignores HTTP (part 3). It will follow permanent redirects, but it will continue to fetch the old URL forever. Embrace the impermanence of all things.
- iTunes sends only two HTTP headers: “Accept: */*”, and “User-Agent: iTunes/4.9 (Windows; N)” (presumably platform-specific). It does not support ETags, Last-Modified, gzip or zlib compression, or RFC 3229. No pithy tagline, that just sucks.
Posted by Mark at
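For anyone checking server logs, the entire request header set Mark describes amounts to just this (the request line and path are invented for illustration; the two headers are the ones he lists):

    GET /podcast.xml HTTP/1.1
    Accept: */*
    User-Agent: iTunes/4.9 (Windows; N)

No If-None-Match, no If-Modified-Since, no Accept-Encoding, which is why conditional GET and compression do nothing for iTunes subscribers.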
Y’know, that would be so much more amusing as an acerbic Mark Pilgrim blog post. Nothing against Sam, but a list like that is wasted when buried in the comment section of his blog.
Posted by Jacques Distler at
Mark, isn’t it time for an old-fashioned ‘short essay’ section on your homepage?
Posted by Thijs van der Vossen at
Shameless Pandering
OK, so I don’t normally beg for links. In fact, I never do. It’s very unbecoming, and — quite frankly — I’m happy with the amount of traffic I attract. Not too little, and not too much. All that being said, I want to increase... [more]
Trackback from Sam Ruby at
Please fix iTunes XML Parser
Details are here: [link]......
Trackback from Davanum Srinivas' weblog at
iTunes has a stupid parser. Read the comments....
Excerpt from del.icio.us/Aquarion at
How To Report Bugs to Apple
So Sam Ruby is pandering for links in a vain effort to tell Apple that iTunes’s RSS support is horribly non-standards compliant. Of course this effort will fail horribly because Apple doesn’t do anything unless there’s a Radar...
Excerpt from Symphonious at
Sam Ruby complains about iTunes XML parsing
And he asks for a link: here you go....
Excerpt from - The GadgetGuy at
blech: Insensitive iTunes
As well as crapping up the interface, it turns out that the podcast/RSS client in iTunes 4.9 is... lacking. The sadly typically quiet Mark Pilgrim chimes in to the comments of this Sam Ruby post, pointing out the multiple...
Excerpt from 2lmc spool at
Indeed, Mark. All the cool kids are starting online magazines these days.
Posted by Aaron Swartz at
Sam Ruby on iTunes and XML
Sam Ruby wants Apple to address this concern....... [more]
Trackback from Randy Holloway Unfiltered at
void iTunes(rss xml)
It seems Sam Ruby and friends have found holes in iTunes. I think it was originally a bunch of small holes, but then it escalated into “Does anybody at Apple have a copy of the XML spec?” [link]...
Excerpt from The RSS Blog at
Public service
Sam Ruby is calling out Apple for releasing iTunes with a screwy RSS parser and Disney for publishing feeds that fall into the traps set by the parser’s quirks....
Excerpt from rc3.org Daily at
Look This Way, Apple
Per Sam Ruby’s request, this is an appeal for someone who matters at Apple to please look here. Obviously, there are people at Apple who understand the Net, but for the ones who seem not to, the ones who built the iTunes RSS, here’s how it works:...
Excerpt from ongoing at
iTunes
Hey Apple, go here and read carefully...... [more]
Trackback from snellspace.com at
Sam Ruby’s Shameless Pandering prompted me to link to his prior post on the problems with the way iTunes handles XML feeds. The comments in that post explain everything in full detail. If this topic is of any interest to...... [more]
Trackback from Full Speed at
Y’know, that would be so much more amusing as an acerbic Mark Pilgrim blog post. Nothing against Sam, but a list like that is wasted when buried in the comment section of his blog.
While I miss Mark’s blog just as much as the next guy, I find it amusing to see him popping up here and there in the comments of blogs that are important to me.
Posted by Scott Johnson at
Small update to this comment. It appears that my missing namespace declaration test was invalid (it was missing the beginning rss tag altogether). Now corrected, iTunes downloads and parses the feed without error. It does not find any of the elements in the (undeclared) itunes: namespace, but it does accept the feed, falling back to core RSS elements for display, and merrily downloading the audio file linked in the enclosure/@url attribute.
Because of the undeclared namespace, this feed (despite the URL) is not well-formed XML (libxml2 correctly barfs, "namespace error : Namespace prefix itunes on subtitle is not defined"), but iTunes accepts it anyway. This, in turn, leads me to revise my previous statement on iTunes' draconianness. It is NOT a draconian XML parser; it DOES accept a certain class of non-wellformed feeds.
(In iTunes' defense, both Firefox and Universal Feed Parser also think this feed is well-formed. Firefox displays it as-is, and UFP returns bozo=0. I have added this as a test case.)
Posted by Mark at
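The class of non-wellformed feed in question looks roughly like this: an itunes: prefix used with no namespace declaration anywhere in the document (titles and URL invented):

    <rss version="2.0">
      <channel>
        <title>Test Feed</title>
        <itunes:subtitle>this prefix is never declared</itunes:subtitle>
        <item>
          <enclosure url="http://example.com/test.mp3" />
        </item>
      </channel>
    </rss>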
iTunes does not care if the required rss version attribute is missing.
Posted by Mark at
Firefox and Thunderbird 1.0.x are based on a Mozilla branch that’s quite old. On that branch, Expat is configured not to do XML+Namespaces processing. The faulty prefix example is well-formed XML 1.0, so it works. Firefox/Thunderbird 1.1 will have XML+Namespaces processing turned on. The feed failed to load in last night’s Firefox 1.1 nightly (fatal error).
Posted by Robert Sayre at
iTunes RSS Issues
Here’s a great post about iTunes issues with their RSS parser/namespace. (Note the best info is in the comments) Hard to believe that Apple would do this one: iTunes sends only two HTTP headers: “Accept: */*”, and “User-Agent: iTunes/4.9 (Windows;...... [more]
Trackback from Hodson Blog at
If the channel element is missing (i.e. channel elements are direct children of rss, item element is direct child of rss), iTunes does not display the title or description of either the channel or the items, BUT it does still download the MP3 file linked from /rss/item/enclosure/@url.
If the item element is a direct child of rss, iTunes displays the channel title and description, does not display any item information, but still downloads the MP3 file linked from /rss/item/enclosure/@url.
If item is the root element, iTunes displays no information and gives an error “There are no playable episodes.”
If the root element is atom instead of rss, which contains channel, which contains item... iTunes displays the channel title and description, does not display any item information, but still downloads the MP3 file linked from /atom/channel/item/enclosure/@url.
If the root element is atom, which contains rss, which contains channel, which contains item, iTunes does not display any channel or item information, but still downloads the MP3 file linked from /atom/rss/channel/item/enclosure/@url.
If the root element is a random name other than atom or rss, iTunes displays the channel information, does not display any item information, and gives an error “There are no playable episodes.”
If the root element is a random name and the channel element is a random name, iTunes displays the channel information, does not display any item information, and gives an error “There are no playable episodes.”
If the enclosure element is a child of channel (instead of a child of item), iTunes displays the channel information, does not display any item information, and gives an error “There are no playable episodes.”
If the rss root element is missing, iTunes displays the item information where the channel information should be, and gives an error “There are no playable episodes.”
If the item element is outside channel but inside a randomly-named element, iTunes displays both channel and item information in the correct places, and downloads the MP3 file linked from /rss/foo/item/enclosure/@url.
If the item element is outside channel but inside 2 randomly-named elements, iTunes displays the channel information, does NOT display the item information, BUT still downloads the MP3 file linked from /rss/foo/bar/item/enclosure/@url.
In summary, it appears that
- iTunes looks for channel information in elements named “title”, “description”, “summary”, “subtitle”, or “itunes:subtitle” (where the itunes: namespace is declared but case-insensitive) within the first child of the root element in the default namespace, regardless of the name of the root element or the name of the first child.
- iTunes looks for item information in elements named “title”, “description”, “subtitle”, or “itunes:subtitle” within elements named “item” that contain an element named “enclosure”, as long as the “item” element is a child of the first child of the root element (all in the default namespace, and regardless of the name of the first child or the root element).
- iTunes looks for audio files in the “url” attribute of an “enclosure” element within an “item” element. The “item” element can be anywhere in the feed, as long as the root element of the feed is not “item” or “channel”.
Corrections welcome.
Posted by Mark at
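Putting those three rules together, a feed as mangled as the following should, per the tests above, still display channel and item information and download its enclosure (all element names and the URL are invented):

    <notrss>
      <notchannel>
        <title>Still Works</title>
        <description>Neither of those container names is real.</description>
        <item>
          <title>Episode 1</title>
          <enclosure url="http://example.com/ep1.mp3" />
        </item>
      </notchannel>
    </notrss>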
The feed failed to load in last night’s Firefox 1.1 nightly (fatal error).
Ah, that explains it. Yes, the feed is well-formed to a non-namespace-aware XML parser. However, iTunes appears to be using a namespace-aware XML parser (since the name of the “itunes:” prefix does not matter, as long as the namespace matches case-insensitively to the iTunes namespace). So iTunes is definitely accepting non-wellformed feeds.
Posted by Mark at
I smell XQuery.
Posted by Robert Sayre at
Am I the only one who doesn’t think this is such a big deal? So they wrote a loose parser. So it accepts some things that most other parsers won’t. At least they didn’t add an <itunes:marqee> tag that shows a marqueed text when you play the visualizer...
Jason
Posted by Jason Lustig at
From a friend at Apple: apparently, the iTunes folks have seen Mark’s notes. My friend also points out that the service uses Wonder, an Open Source framework for WebObjects. See [link]. So the place to fix the problems is in there somewhere...
Posted by Greg Stein at
iTunes clangers
Lots of gaping holes in RSS/XML support in Apple’s iTunes. Classic Mark Pilgrim post - errm, in Sam Ruby’s comments....
Excerpt from Planet RDF at
Apple’s Refusal to talk to community causing problems!
We have been waiting for a week for some sort of word from Apple on a RSS tag clarification. Now...... [more]
Trackback from Geek News Central at
links for 2005-07-06
Scaling mod_perl (tags: apache mod_perl perl); Sam Ruby: Insensitive iTunes (tags: apple itunes rss); Wage Slaves from 1UP.COM (tags: games)...... [more]
Trackback from orbitalworks at
What...
Excerpt from fondantfancies.com at
Am I the only one who doesn’t think this is such a big deal?
Apple is an 800-lb. gorilla in this space (at least until Microsoft releases an RSS-enabled IE in Longhorn). iTunes is to podcasting as Internet Explorer is to HTML. RSS interoperability, at least as far as podcasting goes, now means “works with iTunes.” Thousands of people and companies will begin making podcasts that “work with iTunes,” but unintentionally rely on iTunes quirks (e.g. Disney’s incorrect namespace). This in turn will affect every developer who wants to consume RSS feeds, and who will be required to emulate all the quirks of iTunes to remain competitive.
Apple has effectively redefined the entire structure of an RSS feed, added multiple core RSS elements, made all RSS elements case-insensitive, made XML namespaces case-insensitive, created a new date format, made several previously required attributes optional, and created a morass of undocumented and poorly-documented extensions... to what was already a pretty messy format to begin with.
Case in point: my Universal Feed Parser, which already has 2751 test cases and is so incredibly liberal that it can parse an ill-formed EBCDIC-encoded RDF feed with regular expressions, will require hundreds of new test cases to cover all the schlock that iTunes accepts. And I’m one of the lucky ones.
The supreme irony of all this is that I remember Dave Hyatt (Apple Safari developer) bitching and moaning about all the work he had to do to make Safari emulate the buggy, undocumented behavior of Internet Explorer, and how the world would be so much better if only everything used XML and everyone implemented draconian error handling. Never mind the fact that the vast majority of problems that iTunes creates have nothing to do with XML well-formedness; iTunes doesn’t even require well-formed XML in the first place. Utopia, it seems, will have to wait another decade. Posted by Mark at
Attention, Apple!
If you work at Apple, especially if you work on iTunes, please read this post and its comments. If you’re a blogger, please consider linking to the above link....
Excerpt from Matthew Gifford at
Liberal RSS Parsing and Apple iTunes... [more]
Trackback from Dare Obasanjo aka Carnage4Life at
Journal - That which is seen
Why Jason Scott’s BBS documentary won’t be on TV Because they’ll take overbbs, tv, video Tweaking PHP 4 on Tiger Using EacceleratorEaccelerator, cite:wfm, osx, php George Bush singing Imagine All we are sayingfun, podcast, politics Stereotyping...
Excerpt from Aquarionics at
Apple’s standard
Microsoft recently suggested an extension to an important emerging standard, RSS. They used a grass roots industry conference to discuss their extension before shipping any product which used it and quickly received comprehensive, constructive feedback from a range of smart...... [more]
Trackback from Active Web at
Top 20 RSS Readers
RSS Readers: Narrowing Down Your Choices Interesting mix of server- and desktop-based setups. Not sure if iTunes is number 3 because it’s so popular or that’s skewed because it doesn’t support conditional GET (or compression, or...
Excerpt from Raw at
iTunes and lax parsing
It’s been reported that iTunes is lax in parsing the RSS extensions that they added to support iTunes....... [more]
Trackback from steve's blog at
Sam Ruby: Insensitive iTunes
Mark Pilgrim’s comment looks at ways iTunes isn’t playing nice with RSS specs....
Excerpt from del.icio.us/djwudi at
Linking to Sam Ruby
How can I not respond to Sam’s Shameless Pandering for a link to his Insensitive iTunes post? It really is important Apple gets this one right....
Excerpt from Loebrich.org at
Sam Ruby and others on iTunes RSS parsing
The iTunes RSS parser apparently has some interesting issues....
Excerpt from inessential.com at
insensitive itunes
Everyone loves a good pile-on when the behemoth hijacks a standard. Only this time, the behemoth is Apple who, if......
Excerpt from JayAllen - The Daily Journey at
Mark Pilgrim
Mark Pilgrim, commenting on iTunes' RSS handling, wrote: Dates can be declared without a weekday, time, or timezone. June 31st is a legal date, but for interoperability reasons, it is displayed as July 1st. ... iTunes completely ignores HTTP (part...
Excerpt from Sounds From The Dungeon at
Met with iTunes folks
tags: Apple iTunes podcast RSS XML
Last Thursday Kevin Marks and I met with some friends on the iTunes team at Apple and talked about tags, podcasts, RSS, XML, spec-writing, validators, and how best to give, watch, and respond to feedback on the...
Excerpt from Tantek's Thoughts at
Nonstandard
There are numerous reasons to be nonstandard. For example when they built the world trade center in New York the thing was so huge that the vendor suggested they could have custom light bulbs that screwed in counter clockwise rather than the usual...
Excerpt from Ascription is an Anathema to any Enthusiasm at
Excerpt from Northwest Noise at
Podcasting in iTunes and Odeo
Lots of activity in the podcasting space at the moment, which hasn’t gone un-noticed by the likes of The Economist, who describe the addition of podcasting support to iTunes thusly Any confusion about the term or the process has not......
Excerpt from cityofsound...
Excerpt from Apple Noise! at
Standards and Integration, a ramble
Personally, I usually think that the tech industry has gotten a little too “standards happy”. Moreover, I think that good standards need “extension points” that ensure extended capability without sacrificing compatibility. J2EE has a good and bad...
Excerpt from Enter The JBoss Matrix at
Slammin' the iTunes XML parser
Filed under: Audio, Windows, Macintosh, Blogging, Business, Developer, Internet, Podcasting, TextIt can’t be all sunshine and lollipops in iTunes podcasting heaven, can it? Lest you think I’m the ultimate Apple fanboy, here is a good example of...
Excerpt from Download Squad at
Support for iTunes is Live
Traci: As promised, we [FeedBurner dudes and dudettes] are pleased to announce the immediate availability of iTunes metadata for your podcast. [cut] After a[n] audio post has been created, it must be properly transformed into an RSS2.0 feed...
Excerpt from The RSS Blog at
Podcasting is social media
In 1993, America Online jumped into the Internet with both feet, and the waves took an awful long time to subside. A huge influx of new users discovered Usenet, and thanks to AOL’s poor stewardship of the situation, the newsgroups...... [more]
Trackback from Podcasting at
iTunes' case-insensitive XML parser
About the iTunes podcasting I covered earlier: it seems to be extremely unpopular in the XML community. Apparently the parser that interprets RSS inside iTunes doesn't distinguish upper and lower case. Surely Apple isn't implementing an XML parser from scratch, so you have to wonder why; and at the same time, what kind of usage would even make these people notice something like that? Well, the world is full of amazing people....
Excerpt from Hamazy Webspace at
iTunes stares blankly at Atom podcasts
I was disappointed to discover that while it has no problem interpreting podcasts distributed as RSS 2.0 with <enclosure> tags, iTunes completely ignores Atom feeds with <link rel="enclosure"> elements. In fact, it’s pr...... [more]
Trackback from dsandler.org
Sam Ruby’s Shameless Pandering prompted me to link to his prior post on the problems with the way iTunes handles XML feeds. The comments in that post explain everything in full detail. If this topic is of any interest to......
Excerpt from Full Speed at
Boring blogs
After the blogging overdose I had with Blogo, which I will talk about another time, I am realizing how boring, repetitive and often sloppy a good part of blogs are. My tolerance level has reached its limit for gossip and...
Excerpt from Ludo|Blog
Discussion of Apple’s RSS extensions
Sam Ruby asks for linking to the discussion of Apple’s RSS extensions in his blog. It’s a worthwhile read on how to (and especially how not to) extend existing XML formats. The topic is quite interesting. I’d be interested in a more...
Excerpt from Martins Notepad at
iTunes and lax parsing
It’s been reported that iTunes is lax in parsing the RSS extensions that they added to support iTunes. In a nutshell, the way iTunes finds the nodes it needs in an RSS document is apparently by scanning through the document doing a case-insensitive...
Excerpt from steve's blog at
Does anyone have a working RSS file that iTunes will accept and parse that I can look at? I’m having trouble working thru all the non-standard bugs and know that if I could look at a fully working RSS file I could then get mine working. [email protected] - Thanks!
Posted by James Fabin at
James, the Sample in this document is a good start, with a few exceptions:
- The namespace is incorrect, use.
- The line containing <category text="Food"> should actually read <category text="Food"/> (note the addition of a slash).
- Similarly, the line containing <category text="Politics"> should actually read <category text="Politics"/>.
Once you have a feed, you can use the online Feed Validator to check conformance to the published specifications as we best understand them. It might make sense to check back periodically as we aggressively update the validator as we find out more information from Apple. Posted by Sam Ruby at
Links for 2005-07-31 [del.icio.us]
Telegraph | News | Soldiers forced to shout ‘bang’ as the Army runs out of ammunition "Why should soldiers who are being sent to Iraq, where their lives will be endangered, be forced to shout ‘bang’ in training because someone in the Ministry...
Excerpt from mmeiser blog at
links for 2005-08-01
Fresno Famous - Flowing with Famous Podcast A very interesting and well done little podcast for Fresno California (I assume) :) (tags: fresno california podcast) RSS: Dave winer points out some issues with Apple’s RSS extensions...
Excerpt from the backchannel on mmeiser blog at
Apple has created a syndication-dev mailing list specifically to have an open discussion about the use of syndication related technologies in Apple products. Posted by bbum at
The Hand of FuManChu PUTs POST in perspective
Referrer inspired communication is nifty. [link].org/ brought me this gem regarding semantics of HTTP’s PUT vs. POST. My recent encounter with this while digging about REST is the XML.com article “How to create a REST protocol”...
Excerpt from Square Rutabaga at
Converge Trip Report #3: Saturday
I spent Friday and Saturday in Greensboro, NC at the ConvergeSouth 2005 conference. I had a great time meeting fellow NC bloggers and attending talks on community building, blogging and journalism, collaboration, blog tools and podcasting. The next...
Excerpt from Blogging Roller at
iTunes 6.0 now supports Atom 1.0 feeds.
Things that work:
- feed/title displayed as podcast channel name
- feed/subtitle displayed as podcast channel description
- entry/title displayed as podcast item name
- entry/summary displayed as podcast item description
- entry/published displayed as release date
- entry/link[@rel='enclosure']/@href downloaded as enclosure
Thanks to the Apple dev team for supporting open standards! Posted by Mark at
reef metaphor
In a posting in response to my comments on iTunes' botched one-click subscribe, Peter van Dijk brings in an ecological metaphor: I think most developers feel they have better things to do than to get Apple to clean up their act. Me included. We’re...
Excerpt from the weblog of Lucas Gonze at
why itunes bites
so I just downloaded itunes, something i’ve never bothered with before because a) I never buy downloaded music and b) i don’t have an ipod. even when i reinstalled quicktime, i installed the previous version, as the newest comes packaged...
Excerpt from Jen's Den of Iniquity at
Apple's podcasting technical specifications
After a fairly disastrous start last summer, Apple’s podcasting specifications are improving. Let’s hope the same happens with photocasting....
Excerpt from Penmachine.com at
When the bough breaks
......
a list of detailed changes please Stop doing this. Stop doing this. Stop doing this. Stop doing this. Stop doing this. Stop doing this. Fix this. Fix this. (At least the part towards the end about not shitting all over the Net.) Fix this. Let
Excerpt from Comments on: When the bough breaks at
Understands The Web?
The Wall Street Journal piece on Apple’s iPhone arrangement with Cingular is a worthy read. One section of this article shows a disconnect from reality though: Mr. Jobs once referred to telecom operators as “orifices” that other...
Excerpt from db79 at
Anne van Kesteren : Insensitive iTunes - Mark Pilgrim is back. kayodeok : Sam Ruby: Insensitive iTunes - Mark Pilgrim: it appears that iTunes uses a real, draconian, namespace-aware XML parser... except that namespaces are case-insensitive...
Excerpt from HotLinks at
iTunes uses a real, draconian XML parser. iTunes the application looks like it contains some case-insensitive processing. For example, this horrible creation seems to function in iTunes. iTunes seems to ignore elements in the iTunes namespace with iNcorrEct cases, but liberally munges RSS elements. But, the parser is still draconian. A document that begins <RsS> and ends with </rss> won’t work.
Oh, and there’s this: Posted by Robert Sayre at | http://www.intertwingly.net/blog/2005/07/05/Insensitive-iTunes | crawl-001 | en | refinedweb |
This avoids having the behaviour of your program depend on these errors; that is what is done for the filtered robust predicates (see Section ).
You can find more theoretical information on this topic in
[BBP01].
Interval arithmetic is a large concept and we will only consider here a simple arithmetic based on intervals whose bounds are doubles. So each variable is an interval representing any value inside the interval. All arithmetic operations (+, -, *, /, sqrt(), square(), min(), max() and abs()) on intervals preserve the inclusion. This property can be expressed by the following formula (x and y are reals, X and Y are intervals, OP is an arithmetic operation): if x is in X and y is in Y, then (x OP y) is in (X OP Y).
For example, if the final result of a sequence of arithmetic operations is an interval that does not contain zero, then you can safely determine its sign.
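As a concrete illustration of the inclusion property and the guarded sign test, here is a minimal toy sketch in C++; note that this is not CGAL's Interval_nt (in particular it ignores the rounding-mode issues discussed below):

#include <iostream>
#include <stdexcept>

// Toy interval type; bounds are doubles, as in Interval_nt.
struct Interval {
    double inf, sup;
};

// [a.inf;a.sup] + [b.inf;b.sup] = [a.inf+b.inf ; a.sup+b.sup]
Interval operator+(Interval a, Interval b) {
    return Interval{a.inf + b.inf, a.sup + b.sup};
}

// Safe only when the interval does not straddle zero.
int sign(Interval x) {
    if (x.inf > 0) return +1;
    if (x.sup < 0) return -1;
    throw std::runtime_error("unsafe comparison: interval contains 0");
}

int main() {
    Interval a{1.0, 1.5}, b{2.0, 2.5};
    std::cout << sign(a + b) << '\n';  // prints 1: [3.0;4.0] is entirely positive
}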
#include <CGAL/Interval_arithmetic.h>
All functions required by a class to be considered as a CGAL number type (see Section ) are present, as well as the utility functions, sometimes with a particular semantic which is described below. There are also a few additional functions.
The two following operators can be used for interval analysis; they are not directly useful for Interval_nt as a number type.
The comparison operators (<, >, <=, >=, ==, !=, sign() and compare()) have the following semantic: it is the intuitive one when for all couples of values in both intervals, the comparison is identical (case of non-overlapping intervals). This can be expressed by the following formula (x and y are reals, X and Y are intervals, CMP is a comparison operator): if (x CMP y) holds for all x in X and all y in Y, then (X CMP Y) is true; and if (x CMP y) fails for all x in X and all y in Y, then (X CMP Y) is false.
Otherwise, the comparison is not safe, and we first increment the counter Interval_nt_advanced::number_of_failures, and then throw the exception Interval_nt_advanced::unsafe_comparison.
Interval_nt derives from Interval_nt_advanced. The operations on Interval_nt are automatically protected against rounding modes, and are thus slower than those on Interval_nt_advanced, but easier to use.
Users that need performance are encouraged to use Interval_nt_advanced instead (see Section ).
Put #if 0 before the deleted code and #endif after it. This works even if the code being turned off contains conditionals, but they must be entire conditionals (balanced `#if' and `#endif').
Some people use #ifdef notdef instead. This is risky, because notdef might be accidentally defined as a macro, and then the conditional would succeed. #if 0 can be counted on to fail.
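For example, the following fragment (illustrative, not from the manual) is skipped entirely, including its nested conditional:

#if 0   /* old implementation, kept for reference */
int old_scale(int x)
{
#ifdef FAST_PATH
    return x << 1;
#else
    return x * 2;
#endif
}
#endif  /* 0 */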
Do not use #if 0 for comments which are not C code. Use a real comment, instead. The interior of #if 0 must consist of complete tokens; in particular, single-quote characters must balance. Comments often contain unbalanced single-quote characters (known in English as apostrophes). These confuse #if 0. They don't confuse `/*'. | http://gcc.gnu.org/onlinedocs/gcc-4.1.2/cpp/Deleted-Code.html | crawl-001 | en | refinedweb |
#include <CommPort.h>
List of all members.
Key extensions provided beyond the std::basic_streambuf are a mutual exclusion lock, explicitly separate read/write buffers (which, depending on implementation, could refer to the same stream), recursive open/close, plist-based configuration parameters, and integration with an InstanceTracker registry and factory for dynamic reconfigurability.
Usually you can get by using one of the standard stream buffers (streambuf, filebuf, stringbuf) but if you need to implement a custom stream, these links may help get you started:
See also our own network stream class, ionetstream (Wireless/netstream.h/.cc). Although intended for networking, you can pass it any file descriptor, which makes it handy for pipes as well.
Clients should be careful to use the locking mechanism if there is a possibility of confusing query-responses or competing command/query-polls!
Example usage: look up an instance named "Foo", and send it a query.
CommPort* comm = CommPort::getRegistry().getInstance("Foo");
std::ostream is(&comm->getReadStreambuf());
std::ostream os(&comm->getWriteStreambuf());
is.tie(&os); // fancy failsafe -- make sure 'os' is flushed anytime we read from 'is'
// Locking a CommPort across query-response pairs:
int value;
{
MarkScope autolock(*comm); // marks the comm port as "in use" until the end of its scope
os << "query-value-command" << endl;
is >> value;
// because we have the lock, we know 'value' is in response
// to the 'query-value-command', and not a response to any other thread
}
Advanced locking: try to get a lock, then transfer it to a MarkScope to ensure exception-safety.
ThreadNS::Lock& l = comm->getLock();
if(l.trylock()) {
MarkScope autolock(l); l.unlock(); // transfer lock to MarkScope
// use comm ...
}
Definition at line 62 of file CommPort.h.
the streambuf which does the actual work should inherit from basic_streambuf, using the system's default character type
Definition at line 68 of file CommPort.h.
short hand for the instance tracker, which allows dynamic reconfiguration of CommPort instances
Definition at line 146 of file CommPort.h.
[inline, virtual]
destructor, removes from registry in case we're deleting it from some other source than registry's own destroy()
Definition at line 65 of file CommPort.h.
[inline, protected]
constructor, pass the name of the class's type so we can use it in error messages, and a name for the instance so we can register it for MotionHook's to lookup
Definition at line 152 of file CommPort.h.
[private]
[pure virtual]
Returns the name of the class (aka its type).
Suggested implementation is to declare a static string member, set it to the result of calling the registry's registerType, and then return that member here
Implemented in ExecutableCommPort, FileSystemCommPort, NetworkCommPort, RedirectionCommPort, and SerialCommPort.
Referenced by registerInstance().
Provides serialized access to the comm port.
Multiple drivers might be using the same comm port, callers should get the lock when doing operations on the comm port, particularly across sending a command and waiting for the reply. See MarkScope for usage.
Definition at line 79 of file CommPort.h.
Referenced by NetworkCommPort::close(), CreateDriver::connect(), SSC32Driver::getData(), CreateDriver::getData(), SSC32Driver::motionCheck(), DynamixelDriver::motionCheck(), CreateDriver::motionCheck(), and NetworkCommPort::plistValueChanged().
Called when communication is about to begin, should handle recursive open/close calls.
The subclass is expected to have its own configuration settings which define the parameters of what is to be "opened". Hence, no arguments are passed.
You should be able to handle recursive levels of open/close in case multiple drivers are using the same CommPort.
Referenced by SSC32Driver::motionStarting(), DynamixelDriver::motionStarting(), and CreateDriver::motionStarting().
Called when communication is complete, should handle recursive open/close calls.
Referenced by SSC32Driver::motionStopping(), DynamixelDriver::motionStopping(), and CreateDriver::motionStopping().
Allows you to check whether the reference from getReadStreambuf() is currently functional (if checking is supported!).
For streambufs which don't have a way to check this, always returns true.
Reimplemented in ExecutableCommPort, FileSystemCommPort, NetworkCommPort, and RedirectionCommPort.
Definition at line 98 of file CommPort.h.
Referenced by SSC32Driver::getData(), ImageStreamDriver::getData(), CreateDriver::getData(), RedirectionCommPort::isReadable(), SSC32Driver::nextTimestamp(), ImageStreamDriver::nextTimestamp(), and CreateDriver::nextTimestamp().
Allows you to check whether the reference from getWriteStreambuf() is currently functional (if checking is supported!).
Definition at line 102 of file CommPort.h.
Referenced by SSC32Driver::getData(), CreateDriver::getData(), RedirectionCommPort::isWriteable(), SSC32Driver::motionCheck(), DynamixelDriver::motionCheck(), CreateDriver::motionCheck(), and DynamixelDriver::motionStarting().
Returns a std::basic_streambuf, which is expected to implement the actual work.
You can pass this to an istream to read from the CommPort. This may be the same streambuf instance as getWriteStreambuf(); if they are the same instance, then you could use an iostream instead of separate istream and ostream.
Implemented in ExecutableCommPort, FileSystemCommPort, NetworkCommPort, and RedirectionCommPort.
Referenced by SSC32Driver::getData(), ImageStreamDriver::getData(), CreateDriver::getData(), RedirectionCommPort::getReadStreambuf(), and read().
You can pass this to an ostream to write to the CommPort. This may be the same streambuf instance as getReadStreambuf(); if they are the same instance, then you could use an iostream instead of separate istream and ostream.
Referenced by CreateDriver::connect(), SSC32Driver::getData(), CreateDriver::getData(), RedirectionCommPort::getWriteStreambuf(), SSC32Driver::motionCheck(), DynamixelDriver::motionCheck(), CreateDriver::motionCheck(), DynamixelDriver::motionStarting(), and write().
returns up to n bytes from the streambuf, returns the number read
Definition at line 125 of file CommPort.h.
Referenced by read().
writes up to n bytes from the streambuf, returns the number written
Definition at line 127 of file CommPort.h.
Referenced by write().
reads all available data from getReadStreambuf()
Definition at line 130 of file CommPort.h.
writes the string into getWriteStreambuf()
Definition at line 143 of file CommPort.h.
[inline, static]
registry from which current instances can be discovered and new instances allocated based on their class names
Definition at line 148 of file CommPort.h.
Referenced by Simulator::cmdDelete(), CreateDriver::connect(), SSC32Driver::getData(), ImageStreamDriver::getData(), CreateDriver::getData(), RedirectionCommPort::getInputCP(), RedirectionCommPort::getOutputCP(), SSC32Driver::motionCheck(), DynamixelDriver::motionCheck(), CreateDriver::motionCheck(), SSC32Driver::motionStarting(), DynamixelDriver::motionStarting(), CreateDriver::motionStarting(), SSC32Driver::motionStopping(), DynamixelDriver::motionStopping(), CreateDriver::motionStopping(), SSC32Driver::nextTimestamp(), ImageStreamDriver::nextTimestamp(), CreateDriver::nextTimestamp(), SSC32Driver::plistValueChanged(), ImageStreamDriver::plistValueChanged(), DynamixelDriver::plistValueChanged(), CreateDriver::plistValueChanged(), registerInstance(), SSC32Driver::setDataSourceThread(), ImageStreamDriver::setDataSourceThread(), CreateDriver::setDataSourceThread(), Simulator::Simulator(), ~CommPort(), and Simulator::~Simulator().
[inline, protected, virtual]
To be called be "deepest" subclass constructor at the end of construction.
Don't want to register until completed construction! plist::Collection listeners would be triggered and might start performing operations on instance while partially constructed
Definition at line 161 of file CommPort.h.
Provides a Resource interface, allowing you to use MarkScope directly on the CommPort instead of calling through getLock().
Users don't need to call this directly... either pass the CommPort to a MarkScope, or call getLock().
Implements Resource.
Definition at line 173 of file CommPort.h.
provides a Resource interface, allowing you to use MarkScope directly on the CommPort instead of calling through getLock().
Definition at line 176 of file CommPort.h.
[protected]
holds the name of this instance of CommPort (mainly for error message reporting by the class itself)
ensures that serialized access is maintained (assuming clients use the lock...)
Definition at line 178 of file CommPort.h.
Referenced by NetworkCommPort::doOpen(), registerInstance(), and ~CommPort().
Often devices have either half-duplex communication, or may give responses to command strings. It is important to get a lock across a query-response pair so that there is no risk of a second thread attempting a competing command or query.
Definition at line 184 of file CommPort.h.
Referenced by getLock(), releaseResource(), and useResource(). | http://www.tekkotsu.org/dox/hal/classCommPort.html | crawl-001 | en | refinedweb |
#include <InterleavedYUVGenerator.h>
List of all members..
constructor
Definition at line 9 of file InterleavedYUVGenerator.cc.
constructor, you can pass which channels to interleave
Definition at line 19 of file InterleavedYUVGenerator.cc.
[inline, virtual]
destructor
Definition at line 31 of file InterleavedYUVGenerator.h.
[virtual]
should receive FilterBankEvents from any standard format FilterBankGenerator (like RawCameraGenerator)
Reimplemented from FilterBankGenerator.
Definition at line 30 of file InterleavedYUVGenerator.cc.
Calculates space needed to save - if you can't precisely add up the size, just make sure to overestimate and things will still work.
getBinSize is used for reserving buffers during serialization, but does not necessarily determine the actual size of what is written -- the return value of saveBuffer() specifies that after the data actually has been written. If getBinSize overestimates, the extra memory allocation is only temporary, no extra filler bytes are actually stored.
Definition at line 43 of file InterleavedYUVGenerator.cc.
The loadBuffer() functions of the included subclasses aren't tested, so don't assume they'll work without a little debugging...
Definition at line 51 of file InterleavedYUVGenerator.cc.
Save to a given buffer in memory.
Definition at line 76 of file InterleavedYUVGenerator.cc.

Definition at line 131 of file InterleavedYUVGenerator.cc.
Referenced by ~InterleavedYUVGenerator().
marks all of the cached images as invalid (but doesn't free their memory)
You probably want to call this right before you send the FilterBankEvent
Definition at line 139 of file InterleavedYUVGenerator.cc.
[protected, virtual]

Definition at line 117 of file InterleavedYUVGenerator.cc.
Referenced by InterleavedYUVGenerator().
resets stride parameter (to correspond to width*3 from FilterBankGenerator::setDimensions())
Definition at line 101 of file InterleavedYUVGenerator.cc.
deletes the arrays
Definition at line 108 of file InterleavedYUVGenerator.cc.
Implements FilterBankGenerator.
Definition at line 149 of file InterleavedYUVGenerator.cc.

Definition at line 162 of file InterleavedYUVGenerator.cc.
[static]
so you can refer to the YUV channel symbolically. (as opposed to others that might be added?)
Definition at line 36 of file InterleavedYUVGenerator.h.
[protected]
the channel of the source's Y channel
Definition at line 59 of file InterleavedYUVGenerator.h.
Referenced by calcImage(), and createImageCache().
the channel of the source's U channel
Definition at line 60 of file InterleavedYUVGenerator.h.
Referenced by calcImage().
the channel of the source's V channel
Definition at line 61 of file InterleavedYUVGenerator.h.

Referenced by calcImage().
Contributed and maintained by
Vladimir Kvashin
and Vladimir Voskresensky
December 2007 [Revision number: V6.0-2]
If your project has a question mark in the Projects window, or a #include directive is underlined in red, then your project has unresolved include directives. The IDE uses an internal parser to drive the Code Assistance features (Code Completion, the Classes window, the Navigator window, etc.). A red underline means that this parser cannot resolve some #include directives, which in turn means that the IDE project has a wrong configuration.
Here are some possible reasons (arranged on probability, from most to least
probable):
Try launching the Configure Code Assistance wizard by right-clicking the
project and choosing Code
Assistance > Configure Code Assistance. It helps to resolve the problem.
If you know exactly where the files that correspond to the failed include directive are located, then you can set up the project, logical folder, and file properties manually.
If you are developing a multi-platform project from existing code, you can use the same IDE project for different platforms.
The Configure Code Assistance wizard is most efficient if you built your code with debugging information (the best options are -g3 -gdwarf-2 for GNU compilers and just -g for Sun compilers). But in the case that your project is not built or does not contain debugging information, the wizard falls back to a less accurate way of gathering configuration information.
A hyperlink from function usage tries to find the function definition in
opened projects. If the function definition is not found in opened projects,
then the hyperlink jumps to the function declaration.
A hyperlink from a function declaration tries to find the function
definition in opened projects. If it succeeds, then it opens the
definition.
A hyperlink from a function definition tries to find the function declaration in opened projects. If it succeeds, then it opens the declaration.
A namespace can be defined in different files of the project. To navigate
between different namespace definitions, use the Classes window (Ctrl-9).
Right-click the namespace you are interested in and
choose All Declarations. You will see a list of all definitions sorted by file
names.
Sometimes macros are used to declare functions, namespaces, and variables.
To see how a macro was expanded in the source code to introduce a declaration, use the Navigator window (Ctrl-7) and put the cursor on the macro-based declaration. The Navigator will select the corresponding language declaration in its view.
| http://www.netbeans.org/kb/60/cnd/HowTos.html | crawl-001 | en | refinedweb |
#include <SegCamBehavior.h>
List of all members.
The format used for serialization is basically defined by the subclass of FilterBankGenerator being used. I suggest looking at that classes's documentation to determine the format used. (Generally either SegmentedColorGenerator or RLEGenerator)
However, SegCamBehavior may send a "Close Connection" packet when the server is shutting down. This is to help UDP connections, which otherwise wouldn't realize that they need to start trying to reconnect.
<string:"CloseConnection">
This is exactly the same protocol that is followed by the RawCamBehavior as well - the same code can parse either stream.
However, one odd bit - since the RLEGenerator doesn't save the color information itself, SegCamBehavior will do it instead. So, if SegCamBehavior is using RLE compression, it will tack a footer at the end of the packet: (from SegmentedColorGenerator::encodeColors())
char:
You can tell whether to expect the color footer by the creator string that follows the SegCamBehavior header. (The compression field listed is considering segmented color itself a type of compression, whether or not it's RLE encoded, so you can't use that to tell whether the data is RLE encoded until you get to the data section.)
This is a binary protocol -- the fields listed indicate binary values in the AIBO's byte order (little endian). Strings are encoded using the LoadSave::encode(char*,unsigned int, unsigned int) method.
Definition at line 58 of file SegCamBehavior.h.
constructor
Definition at line 12 of file SegCamBehavior.cc.
[inline]
destructor
Definition at line 64 of file SegCamBehavior.h.
[private]
don't call
[virtual]
By default, merely adds to the reference counter (through AddReference()); Note you should still call this from your overriding methods.
Reimplemented from BehaviorBase.
Definition at line 20 of file SegCamBehavior.cc.

Definition at line 28 of file SegCamBehavior.cc.
Referenced by processEvent().
By defining here, allows you to get away with not supplying a processEvent() function for the EventListener interface. By default, does nothing.
Reimplemented from CameraStreamBehavior.
Definition at line 35 of file SegCamBehavior.cc.
[inline, static]
Gives a short description of what this class of behaviors does... you should override this (but don't have to).
If you do override this, also consider overriding getDescription() to return it
Definition at line 78 of file SegCamBehavior.h.
83 of file SegCamBehavior.h.
[inline, static, protected]
function for network data to be sent to -- forwards to theOne's receiveData()
Definition at line 88 of file SegCamBehavior.h.
Referenced by setupServer().
[protected]
tear down the server socket (visRLE)
Definition at line 85 of file SegCamBehavior.cc.
Referenced by DoStop(), and processEvent().
setup the server socket (visRLE )
Definition at line 97 of file SegCamBehavior.cc.
Referenced by DoStart(), and processEvent().
opens a new packet, writes header info; returns true if open, false if otherwise open (check cur==NULL for error)
see the class documentation for SegCamBehavior for the protocol documentation
Definition at line 123 of file SegCamBehavior.cc.
Referenced by writeRLE(), and writeSeg().
writes a color image
Definition at line 149 of file SegCamBehavior.cc.
Definition at line 174 of file SegCamBehavior.cc.
closes and sends a packet, does nothing if no packet open
Definition at line 191 of file SegCamBehavior.cc.
Referenced by processEvent(), writeRLE(), and writeSeg().
sends a packet signaling the server is closing the connection (good for UDP connections)
Definition at line 201 of file SegCamBehavior.cc.
Referenced by closeServer().
[static]
85000 bytes for use up to 416x320 pixels / 8 min expected runs * 5 bytes per run + some padding
Definition at line 67 of file SegCamBehavior.h.
64KB is the max udp packet size
Definition at line 70 of file SegCamBehavior.h.
[static, protected]
global instance of SegCamBehavior acting as server
Definition at line 86 of file SegCamBehavior.h.
Referenced by networkCallback(), SegCamBehavior(), and ~SegCamBehavior().
socket to send image stream over
Definition at line 103 of file SegCamBehavior.h.
Referenced by closePacket(), closeServer(), openPacket(), processEvent(), sendCloseConnectionPacket(), and setupServer().
buffer being filled out to be sent
Definition at line 104 of file SegCamBehavior.h.
Referenced by closePacket(), openPacket(), and processEvent().
current location in packet
Definition at line 105 of file SegCamBehavior.h.
Referenced by closePacket(), openPacket(), processEvent(), writeRLE(), and writeSeg().
number of bytes remaining in packet
Definition at line 106 of file SegCamBehavior.h.
Referenced by closePacket(), openPacket(), writeRLE(), and writeSeg().
the buffer size requested from Wireless when the socket was allocated
Definition at line 107 of file SegCamBehavior.h.
Referenced by openPacket(), and setupServer().
the time that the last event was processed
Definition at line 108 of file SegCamBehavior.h.
Referenced by closePacket(), and processEvent(). | http://www.tekkotsu.org/dox/classSegCamBehavior.html | crawl-001 | en | refinedweb |
The advanced class allows you to make faster computations with interval arithmetic, but you need to set the rounding mode of the FPU to 'round to infinity' (see below for how to do that) before doing any computation with this number type, and each function (arithmetic operators and conversion functions) leaves the rounding mode in this state if it needs to modify it internally.
Changing the rounding mode affects all floating point computations, and might cause problems with parts of your code, or external libraries (even CGAL), that expect the rounding mode to be the default (round to the nearest).
#include <CGAL/Interval_arithmetic.h>
We provide the following interface to change the rounding mode:
The macros CGAL_FE_TONEAREST, CGAL_FE_TOWARDZERO, CGAL_FE_UPWARD and CGAL_FE_DOWNWARD are the values corresponding to the rounding modes.
The correct way to protect an area of code that uses operations on the class Interval_nt_advanced is the following:
FPU_CW_t backup = FPU_get_and_set_cw(CGAL_FE_UPWARD); ... // The code to be protected. FPU_set_cw(backup);
The basic idea is to use the directed rounding modes specified by the IEEE 754 standard, which are implemented by almost all processors nowadays. It states that you have the possibility, concerning the basic floating point operations (+, -, *, /, sqrt), to specify the rounding mode of each operation instead of using the default, which is set to 'round to the nearest'. This feature allows us to compute easily on intervals. For example, to add the two intervals [a.i;a.s] and [b.i;b.s], it suffices to compute a.i+b.i rounded towards minus infinity and a.s+b.s rounded towards plus infinity, which yields an interval guaranteed to contain the exact sum. The class Interval_nt takes care of this, but is a bit slower.
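The same directed-rounding trick can be sketched with the standard <cfenv> facilities instead of CGAL's FPU_* wrappers; this assumes the platform honours std::fesetround(), which is not guaranteed everywhere:

#include <cfenv>
#include <cstdio>
// Note: compilers may need "#pragma STDC FENV_ACCESS ON" or -frounding-math
// to keep these additions from being constant-folded at compile time.

int main() {
    double a_i = 0.1, a_s = 0.1, b_i = 0.2, b_s = 0.2;
    int old = std::fegetround();
    std::fesetround(FE_DOWNWARD);
    double lo = a_i + b_i;   // lower bound, rounded towards minus infinity
    std::fesetround(FE_UPWARD);
    double hi = a_s + b_s;   // upper bound, rounded towards plus infinity
    std::fesetround(old);    // restore the caller's rounding mode
    std::printf("[%.17g ; %.17g]\n", lo, hi);
}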
#include <RawCamBehavior.h>
List of all members.
The format used for serialization is basically defined by the subclass of FilterBankGenerator being used. I suggest looking at that classes's documentation to determine the format used. (Generally either RawCameraGenerator or JPEGGenerator)
However, RawCamBehavior may send a "Close Connection" packet when the server is shutting down. This is to help UDP connections, which otherwise wouldn't realize that they need to start trying to reconnect.
<string:"CloseConnection">
This is exactly the same protocol that is followed by the SegCamBehavior as well - the same code can parse either stream.
This is a binary protocol -- the fields listed indicate binary values in the AIBO's byte order (little endian). Strings are encoded using the LoadSave::encode(char*,unsigned int, unsigned int) method.
Definition at line 41 of file RawCamBehavior.h.
constructor
Definition at line 12 of file RawCamBehavior.cc.
[inline]
destructor
Definition at line 47 of file RawCamBehavior.h.
[private]
don't call
[virtual]
By default, merely adds to the reference counter (through AddReference()); Note you should still call this from your overriding methods.
Reimplemented from BehaviorBase.
Definition at line 20 of file RawCamBehavior.cc.

Definition at line 29 of file RawCamBehavior.cc.
Referenced by processEvent().
By defining here, allows you to get away with not supplying a processEvent() function for the EventListener interface. By default, does nothing.
Reimplemented from CameraStreamBehavior.
Definition at line 36 of file RawCamBehavior.cc.
[inline, static]
Gives a short description of what this class of behaviors does... you should override this (but don't have to).
If you do override this, also consider overriding getDescription() to return it
Definition at line 62 of file RawCamBehavior.h.
Referenced by getDescription().
Definition at line 67 of file RawCamBehavior.h.
[static]
returns the layer which will be used out of the source, based on current config settings (i.e. compression, skip, etc)
Definition at line 93 of file RawCamBehavior.cc.
Referenced by getSourceULayer(), getSourceVLayer(), and getSourceYLayer().
Definition at line 118 of file RawCamBehavior.cc.
Definition at line 121 of file RawCamBehavior.cc.
Definition at line 124 of file RawCamBehavior.cc.
[inline, static, protected]
function for network data to be sent to -- forwards to theOne's receiveData()
Definition at line 77 of file RawCamBehavior.h.
Referenced by setupServer().
[protected]
tear down the server socket (visRaw)
Definition at line 129 of file RawCamBehavior.cc.
Referenced by DoStop(), and processEvent().
setup the server socket (visRaw)
Definition at line 141 of file RawCamBehavior.cc.
Referenced by DoStart(), and processEvent().
opens a new packet, writes header info; returns true if open, false if otherwise open (check cur==NULL for error)
see the class documentation for RawCamBehavior for the protocol documentation
Definition at line 167 of file RawCamBehavior.cc.
Referenced by writeColor(), and writeSingleChannel().
writes a color image
Definition at line 193 of file RawCamBehavior.cc.
writes a single channel
Definition at line 292 of file RawCamBehavior.cc.
closes and sends a packet, does nothing if no packet open
Definition at line 317 of file RawCamBehavior.cc.
Referenced by processEvent(), writeColor(), and writeSingleChannel().
sends a packet signaling the server is closing the connection (good for UDP connections)
Definition at line 333 of file RawCamBehavior.cc.
Referenced by closeServer().
900KB for max of full-color 640x480 + 1KB for header
Definition at line 52 of file RawCamBehavior.h.
64KB is the max udp packet size
Definition at line 54 of file RawCamBehavior.h.
[static, protected]
global instance of RawCamBehavior acting as server
Definition at line 75 of file RawCamBehavior.h.
Referenced by networkCallback(), RawCamBehavior(), and ~RawCamBehavior().
socket for sending the image stream
Definition at line 92 of file RawCamBehavior.h.
Referenced by closePacket(), closeServer(), openPacket(), processEvent(), sendCloseConnectionPacket(), and setupServer().
point to the current buffer being prepared to be sent
Definition at line 93 of file RawCamBehavior.h.
Referenced by closePacket(), openPacket(), and processEvent().
current location within that buffer
Definition at line 94 of file RawCamBehavior.h.
Referenced by closePacket(), openPacket(), processEvent(), writeColor(), and writeSingleChannel().
the number of bytes remaining in the buffer
Definition at line 95 of file RawCamBehavior.h.
Referenced by closePacket(), openPacket(), writeColor(), and writeSingleChannel().
the buffer size requested from Wireless when the socket was allocated
Definition at line 96 of file RawCamBehavior.h.
Referenced by openPacket(), and setupServer().
the time that the last event was processed
Definition at line 97 of file RawCamBehavior.h.
Referenced by closePacket(), and processEvent(). | http://www.tekkotsu.org/dox/classRawCamBehavior.html | crawl-001 | en | refinedweb |
#include <MotionHook.h>
List of all members.
You can expect to be called every FrameTime*NumFrames milliseconds in terms of simulator time. However, keep in mind this is relative to SharedGlobals::timeScale (Config.Speed) in terms of wall-clock time, and is also subject to the simulator being paused, set to full-speed mode, or hitting a breakpoint in the debugger. See enteringRealtime() and leavingRealtime() if you want updates when the user switches simulation modes, although there's still no way to get notification if a debugger breakpoint is hit
Definition at line 19 of file MotionHook.h.
[inline]
constructor
Definition at line 22 of file MotionHook.h.
[inline, virtual]
no-op destructor
Definition at line 25 of file MotionHook.h.
Called when motion process is starting.
Reimplemented in CreateDriver, DynamixelDriver, SSC32Driver, and TeRKDriver.
Definition at line 28 of file MotionHook.h.
Referenced by SSC32Driver::motionStarting(), DynamixelDriver::motionStarting(), and CreateDriver::motionStarting().
Reimplemented in CreateDriver, DynamixelDriver, SSC32Driver, and IPCMotionHook.
Definition at line 45 of file MotionHook.h.
Referenced by SSC32Driver::motionCheck(), DynamixelDriver::motionCheck(), and CreateDriver::motionCheck().
Reimplemented in TeRKDriver.
Definition at line 69 of file MotionHook.h.
Referenced by motionCheck().
Called when motion process is stopping.
Definition at line 72 of file MotionHook.h.
Referenced by SSC32Driver::motionStopping(), DynamixelDriver::motionStopping(), and CreateDriver::motionStopping().
Called when the controller is going to be running in realtime mode, which is probably the normal mode you'd expect.
No guarantees though! You might be in realtime mode, but a debugger breakpoint will still pause things.
Definition at line 76 of file MotionHook.h.
Called when leaving realtime mode, which means you have no idea when motionCheck is going to be called in terms of wall-clock time.
Argument set to true if entering full speed mode, which may mean motionCheck will be called at a high(er) frequency, or slower the computation is overwhelming the host hardware. However, if false, almost certainly indicates updates will be sparse.
A non-realtime mode might be triggered if the user wants to pause the simulator/controller to debug something... No guarantees though! The debugger might catch a breakpoint and stop things, and this won't be called!
Definition at line 85 of file MotionHook.h.
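As a rough illustration, a subclass might look like the sketch below; the motionCheck() signature and the NumOutputs constant are assumptions based on the lastOutputs member and the surrounding docs, so check MotionHook.h for the real prototypes:

#include "MotionHook.h"

// Hypothetical subclass; signatures below are assumptions, not copied
// from MotionHook.h.
class LoggingMotionHook : public MotionHook {
public:
    virtual void motionStarting() {
        // open the device/connection here
    }
    virtual void motionCheck(const float outputs[][NumOutputs]) {
        // called every FrameTime*NumFrames ms of *simulator* time;
        // diff against lastOutputs and forward only the changed values
    }
    virtual void motionStopping() {
        // close the device/connection here
    }
    virtual void leavingRealtime(bool fullspeed) {
        // expect sparse (or very rapid, if fullspeed) updates from here on
    }
};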
Called by simulator thread to indicate level of verbosity for diagnostics and reporting errors.
Definition at line 88 of file MotionHook.h.
[protected]
stores current verbosity
Definition at line 92 of file MotionHook.h.
Referenced by CreateDriver::connect(), SSC32Driver::motionCheck(), DynamixelDriver::motionCheck(), and setMotionHookVerbose().
set to false following the first motionCheck, reset to true by motionStopping
Definition at line 94 of file MotionHook.h.
Referenced by SSC32Driver::getData(), SSC32Driver::motionCheck(), motionCheck(), DynamixelDriver::motionCheck(), and motionStopping().
stores the last frame of the outputs, updated by motionCheck()
Definition at line 96 of file MotionHook.h.
Referenced by SSC32Driver::motionCheck(), motionCheck(), and DynamixelDriver::motionCheck(). | http://www.tekkotsu.org/dox/hal/classMotionHook.html | crawl-001 | en | refinedweb |
#include <BallDetectionGenerator.h>
List of all members.
This expects its events to come from a RegionGenerator (or a compatable subclass)
Sends a VisionObjectEvent only for the largest ball found (if one is found)
You can set the index of the color of the ball to look for in the constructor, so you can have several of these running looking for balls of different colors.
This is one of our oldest code segments, and has been hacked on a lot, so apologies for a bit of a mess...
Definition at line 25 of file BallDetectionGenerator.h.
[protected]
shorthand
Definition at line 36 of file BallDetectionGenerator.h.
constructor
Definition at line 14 of file BallDetectionGenerator.cc.
[private]
don't call
[inline, static]
Gives a short description of what this class of behaviors does... you should override this (but don't have to).
If you do override this, also consider overriding getDescription() to return it
Reimplemented from BehaviorBase.
Definition at line 30 of file BallDetectionGenerator.h.
[virtual]
see class notes above for what data this can handle
Reimplemented from EventGeneratorBase.
Definition at line 19 of file BallDetectionGenerator.cc.
decides wether to actually send the event based on confidence threshold.
Definition at line 248 of file BallDetectionGenerator.cc.
Referenced by processEvent().
does the actual event sending
Definition at line 295 of file BallDetectionGenerator.cc.
Referenced by processEvent(), and testSendEvent().
[static, protected]
returns a bit mask corresponding to edges touched by the coordinates passed
Definition at line 302 of file BallDetectionGenerator.cc.
[inline, static, protected]
returns
Definition at line 63 of file BallDetectionGenerator.h.
bitmask for calcEdgeMask results
Definition at line 39 of file BallDetectionGenerator.h.
Referenced by calcEdgeMask().
Definition at line 40 of file BallDetectionGenerator.h.
Definition at line 41 of file BallDetectionGenerator.h.
Definition at line 42 of file BallDetectionGenerator.h.
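The mask computation presumably just ORs together one flag per touched edge; the sketch below is a hypothetical reconstruction (the flag names, values, and signature are made up; the real ones live around lines 39-42 of BallDetectionGenerator.h):

// Hypothetical reconstruction, not the actual Tekkotsu code.
enum { OFF_EDGE_LEFT = 1, OFF_EDGE_RIGHT = 2, OFF_EDGE_TOP = 4, OFF_EDGE_BOTTOM = 8 };

unsigned int calcEdgeMask(int x1, int x2, int y1, int y2, int width, int height) {
    unsigned int mask = 0;
    if (x1 <= 0)          mask |= OFF_EDGE_LEFT;
    if (x2 >= width - 1)  mask |= OFF_EDGE_RIGHT;
    if (y1 <= 0)          mask |= OFF_EDGE_TOP;
    if (y2 >= height - 1) mask |= OFF_EDGE_BOTTOM;
    return mask;
}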
the number of regions to check (from largest to smallest)
Definition at line 45 of file BallDetectionGenerator.h.
the index of the color of the ball we're looking for
Definition at line 68 of file BallDetectionGenerator.h.
the index of the theshold map (channel) of the FilterBankEvent
Definition at line 69 of file BallDetectionGenerator.h.
information about the best ball found
Definition at line 70 of file BallDetectionGenerator.h.
if true, we think we have a ball in front of us
Definition at line 71 of file BallDetectionGenerator.h.
for each frame where we don't agree with present's value, this is incremented and compared against noiseFilter.
Definition at line 72 of file BallDetectionGenerator.h.
the number of frames to wait to make sure an object has dissappeared/reappeared
Definition at line 73 of file BallDetectionGenerator.h.
how sure we should be it's a ball before declaring it as such.
Definition at line 74 of file BallDetectionGenerator.h.
Referenced by testSendEvent(). | http://www.tekkotsu.org/dox/classBallDetectionGenerator.html | crawl-001 | en | refinedweb |
The FCL includes a set of classes that access data sources and manage complex data sets. Known as ADO.NET, these classes are the managed replacement for ADO under Win32. ADO.NET supports both connected and disconnected operations, multiple data providers (including nonrelational data sources), and serialization to and from XML.
For more information, see the following namespaces:
System.Data
System.Data.Common
System.Data.Odbc (.NET 1.1)
System.Data.OleDb
System.Data.OracleClient (.NET 1.1)
System.Data.SqlClient
System.Data.SqlTypes
I want to write a mixin that can extend an eventual parent class attribute instead of replacing it. Example:

class Base(object):
    my_attr = ['foo', 'bar']

class MyMixin(object):
    my_attr = ['baz']

class MyClass(MyMixin, Base):
    pass

print(MyClass.my_attr)

This prints ['baz'], because the attribute defined in MyMixin shadows the one from Base. What I want instead is for MyClass.my_attr to end up as ['foo', 'bar', 'baz'], without having to merge the lists by hand inside MyClass.
If you really want to do this on class attributes with a mixin, you'll have to use a custom metaclass AND put the mixin after
Base in the mro (which means it won't override anything in the base class, just add to it):
class Base(object):
    my_attr = ['foo', 'bar']

class MyMixinBase(type):
    def __init__(cls, name, parents, attribs):
        my_attr = getattr(cls, "my_attr", None)
        if my_attr is None:
            cls.my_attr = ["baz"]
        elif "baz" not in my_attr:
            my_attr.append("baz")

class MyMixin(object):
    __metaclass__ = MyMixinBase

class MyClass(Base, MyMixin):
    pass

print(MyClass.my_attr)
which makes for quite convoluted code... At this point, just using MyMixinBase as the metaclass for MyClass would yield the same result:
class MyOtherClass(Base):
    __metaclass__ = MyMixinBase

print(MyOtherClass.my_attr)
and for such a use case, a class decorator works just fine:
def with_baz(cls):
    my_attr = getattr(cls, "my_attr", None)
    if my_attr is None:
        cls.my_attr = ["baz"]
    elif "baz" not in my_attr:
        my_attr.append("baz")
    return cls

@with_baz
class AnotherClass(Base):
    pass

print(AnotherClass.my_attr)
Foreword: I'm a REAL Rails newbie. I'm developing my first web application with it and therefore even basic concepts are hard to understand for me.

Having said that, my problem: I'm planning to use Paperclip (and Paperclip only) to store PDFs in my application (since I'm expecting the PDFs to be around 0.5 MB). The tutorial on Paperclip's GitHub didn't make one thing clear for me:
<div class="field">
<%= f.label :pdf %>
<%= f.file_field :pdf %>
</div>
<td><%= (link_to 'Related file', task.pdf.url, :target => "_blank") if task.pdf.exists? %></td>
Your migration to add the pdf attachment to a table (In this example I am adding the attachment PDF to a
Documents table) should look like this:
class AddAttachmentPdfToDocuments < ActiveRecord::Migration
  def self.up
    change_table :documents do |t|
      t.attachment :pdf
    end
  end

  def self.down
    remove_attachment :documents, :pdf
  end
end
You need to run
rake db:migrate after you create this migration so that the pdf column actually gets added to the table.
Your Model code for paperclip (Again, I'm using A
documents model for example) should look something like this:
has_attached_file :pdf, :use_timestamp => false
validates_attachment_content_type :pdf, :content_type => ['application/pdf', 'text/plain']
To add a pdf, you need to add an input to your form, for example this is the loop to use to ask for a pdf attachment:
<%= form_for @document do |f| %>
  <%= f.input :title, label: "Title" %>
  <%= f.input :pdf, label: "Upload document:" %>
  <%= f.button :submit %>
<% end %>
In your controller (again using documents as the controller) you need to permit the params:
def documents_params
  params.require(:document).permit(:title, :pdf)
end
Note, I am using
document as my controller, model, form etc, so if you are using a different name you need to change that.
If you have problems leave a comment and I will try to help | https://codedump.io/share/VTmiJYwq8k3P/1/ask-for-attachment-on-form-paperclip | CC-MAIN-2017-39 | en | refinedweb |
In Python I can easily read a file line by line into a set, just by using:

file = open("filename.txt", 'r')
content = set(file)

Each element of the resulting set keeps its trailing newline. Now I want to build the same kind of set from a string instead of a file. Splitting with

import re
content = re.split("(\n)", string)

keeps the delimiters, but as separate list elements rather than attached to the lines. How can I split the string but keep the delimiter on each line?
Here's a simple generator that does the job:
content = set(e + "\n" for e in s.split("\n"))
This solution adds an additional newline at the end though. | https://codedump.io/share/ffOV7dvKRZiO/1/python-split-string-but-keep-delimiter | CC-MAIN-2017-39 | en | refinedweb |
Hi,
Is there any problem with the current source, conda, and the docker? Asking because I have been trying to build a new docker using the latest source but after building (i.e. import torch), I get the following error: (I didn’t have this problem before)
from torch._C import *
ImportError: /opt/conda/envs/pytorch-py35/lib/python3.5/site-packages/torch/_C.cpython-35m-x86_64-linux-gnu.so: undefined symbol: _ZN3MPI8Datatype4FreeEv
and here is my Dockerfile:
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
cmake \
git \
curl \
vim \
ca-certificates \
libjpeg-dev \
libpng-dev &&\
rm -rf /var/lib/apt/lists/*
RUN curl -o ~/miniconda.sh -O && \
chmod +x ~/miniconda.sh && \
~/miniconda.sh -b -p /opt/conda && \
rm ~/miniconda.sh && \
/opt/conda/bin/conda install conda-build && \
/opt/conda/bin/conda create -y --name pytorch-py35 python=3.5.2 numpy pyyaml scipy ipython mkl&& \
/opt/conda/bin/conda clean -ya
ENV PATH /opt/conda/envs/pytorch-py35/bin:$PATH
RUN conda install --name pytorch-py35 -c soumith magma-cuda80
RUN git clone --recursive /opt/pytorch
WORKDIR /opt/pytorch
RUN git submodule update --init
RUN TORCH_CUDA_ARCH_LIST="3.5 5.2 6.0 6.1+PTX" TORCH_NVCC_FLAGS="-Xfatbin -compress-all" \
CMAKE_PREFIX_PATH="$(dirname $(which conda))/../" \
pip install -v .
RUN git clone && cd vision && pip install -v .
WORKDIR /workspace
RUN chmod -R a+w /workspace
If you are willing to go with python 2.7, my docker has everything you need:
I’m fixing this problem today. You can track my progress with the issue:
I’ve just checked, and Dockerfile upstream builds and I can ‘import torch’ without issues. The base image does not have mpi, neither mpi is installed later, which means that THD is compiled without support for MPI backend, but also means that you don’t have import problems.
My Docker image contains OpenMPI 1.10.3, but I hadn't had this problem before (even without building the Docker image and just installing from source, I get this error …).
The error is same as the link posted above by @smth | https://discuss.pytorch.org/t/docker-problem-with-the-latest-source-and-conda/7292 | CC-MAIN-2017-39 | en | refinedweb |
stop_sample man page
stop_sample — Stops a sample from playing. Allegro game programming library.
Synopsis
#include <allegro.h>
void stop_sample(const SAMPLE *spl);
Description
Stop a sample from playing, which is required if you have set a sample going in looped mode. If there are several copies of the sample playing, it will stop them all. You must still destroy the sample using destroy_sample().
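A typical looped-playback pattern looks like the fragment below (illustrative only; it assumes Allegro and its sound system are already initialized, and omits error handling; the file name is made up):

#include <allegro.h>

/* inside an initialized Allegro program */
SAMPLE *spl = load_sample("engine.wav");
play_sample(spl, 255, 128, 1000, TRUE);   /* vol, pan, freq, loop=TRUE */
/* ... later, when the loop should end ... */
stop_sample(spl);
destroy_sample(spl);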
See Also
play_sample(3), destroy_sample(3)
Referenced By
play_sample(3).
version 4.4.2 Allegro manual | https://www.mankier.com/3/stop_sample | CC-MAIN-2017-39 | en | refinedweb |
posix_mem_offset man page
Prolog
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
posix_mem_offset — find offset and length of a mapped typed memory block (ADVANCED REALTIME)
Synopsis
#include <sys/mman.h>

int posix_mem_offset(const void *restrict addr, size_t len, off_t *restrict off, size_t *restrict contig_len, int *restrict fildes);
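Since the description text is missing from this copy, here is an illustrative call sequence; the typed memory object name is made up and error handling is omitted:

#include <sys/mman.h>
#include <fcntl.h>
#include <stdio.h>

int main(void) {
    /* "/typed_mem" is a hypothetical typed memory object name */
    int fd = posix_typed_mem_open("/typed_mem", O_RDWR, POSIX_TYPED_MEM_ALLOCATE);
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    off_t off;
    size_t contig;
    int mfd;
    if (posix_mem_offset(p, 4096, &off, &contig, &mfd) == 0)
        printf("offset %lld, contiguous %zu bytes\n", (long long)off, contig);
    return 0;
}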
The following sections are informative.
Examples
None.
Application Usage
None.
Rationale
None.
Future Directions
None.
See Also
mmap(), posix_typed_mem_open()
The Base Definitions volume of POSIX.1-2008, <sys/mman.h>

Referenced By

posix_typed_mem_open(3p), sys_mman.h(0p).
ActiveState's Tcl Dev Kit is a powerful set of development tools and an extended Tcl platform for professional Tcl developers.
Tcl Dev Kit is a component of ActiveTcl Pro Studio, which also includes ActiveState's Komodo IDE enhanced with integrated Tcl debugging and syntax checking. For more information on ActiveTcl Pro Studio and its benefits, see ActiveTcl Pro Studio.
Tcl Dev Kit includes the following Tcl productivity tools:
Binaries for Windows, Mac OS X, Linux, Solaris, HP-UX and AIX are available on the ActiveState web site.
When -merge is specified: -startup is not required, so do not complain when it is missing, and output is written to -output, not -prefix.
The -prefix file is opened read-only, preventing a bogus change of the modified timestamp for a file which is not written to.
Fixed handling of dict for when -onepass is active, and of the case where -onepass is active and checked code is read from stdin.
Handle the -psnXXX argument for OS X bundles.
A -metadata option has been added to support metadata key/value pairs used by TEA packages ('name', 'version', 'platform', etc.). An -infoplist option has also been added for specifying Info.plist metadata for OS X applications.
Tcl Dev Kit version 3.0 includes two new tools and numerous enhancements to existing tools. The following sections provide a brief overview of the new tools and enhancements to existing tools. Refer to the relevant chapter in the User Guide for a complete description.
Project files from previous versions of the Tcl Dev Kit are compatible with this version.
The Virtual Filesystem Explorer is used to mount and navigate system drives and volumes, to view the contents of archive files with .zip extensions, and to view the contents of starkits and starpacks. Refer to the Virtual Filesystem Explorer section in the User Guide for complete information.
Running the Virtual Filesystem Explorer:
Windows
tclvfse.tcl
Unix
tclvfse
The Cross Reference Tool is a new application that builds a database of program components, including packages, namespaces, variables, etc. Program component information can be extracted from programs and packages contained in TclApp (or Prowrap) projects, Tcl Dev Kit Package definitions (".tap files"), and from Komodo project files. The database can be explored via any of the program components. For example, the Cross Reference Tool can be used to view commands and variables within namespaces, or the source of variable definitions. Refer to the Cross Reference Tool section of the User Guide for complete information.
Running the Cross Reference Tool:
Windows
tclxref.tcl
Unix
tclxref
On Windows, run tcldebugger.tcl. From the Unix command line, enter tcldebugger.
When started with the coverage switch, coverage functionality is accessible via the View | Code Coverage menu option, or via the Code Coverage button on the toolbar.
On Windows, run tclchecker.tcl. From the Unix command line, enter tclchecker.
The -use option allows a version specification, and overrides package require statements. For example, -use tk4.1 will scan against version 4.1 of Tk, even if the package require statement specified a different Tk version.
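For instance, a hypothetical invocation (the script name is illustrative):

tclchecker -use tk4.1 myscript.tcl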
On Windows, run tclcompiler.tcl. From the Unix command line, enter tclcompiler.
On Windows, run tclapp.tcl. From the Unix command line, enter tclapp. To use the command-line version, append options to the command.
An -icon path command-line switch and a -log switch have been added.
When using a -prefix file, the prefix file's extension will be used as the default extension for the generated application (if no explicit output file name is specified). If the prefix has no extension, the name of the generated application will have no extension.
An exact package version is required unless the -pkg-accept switch is used (which will use the highest available version of the package). When using the graphical TclApp, a message will be displayed when opening the project file, and the highest available version of the specified package will be used.
On Windows, run tclpe.tcl. From the Unix command line, enter tclpe.
On Windows, run tclsvc.tcl.
base-tclsvc-win32-ix86.exe") that can be used to build "portable" services. When using TclApp to generate the application, specify
base-tclsvc-win32-ix86.exeas the prefix file. This service can then be installed on any system (using the Tcl Dev Kit's Service Manager) that supports Windows NT-based services.
On Windows, run tclinspector.tcl. From the Unix command line, enter tclinspector.
The tbcload package is included in the output application.
The bigtclsh and bigwish interpreters are now the default. The non-lite versions are no longer supported. | http://docs.activestate.com/tdk/5.3/Release.html | CC-MAIN-2017-39 | en | refinedweb
Instructions to Wiki Editors
These instructions are written mainly with the editing of the User Manual in mind. Some of the points might also be valid for other parts of the wiki, though.
Note: There seems to be a small bug in the template for the moment with the detection of the page. If the bar does not have the correct list of languages, you can specify the page explicitly in the template:
{{Languages|User Manual/Introduction}}
The page name given to the template should always be the english page.
#REDIRECT [[User Manual/ScummVM Interface]].
There are several wiki extensions installed to help you in the editing task.
This extension can be used to present source code with syntax highlighting. As you can guess for us it is mainly useful for C++ code, but it can also be used with other languages.
Link:
Syntax:
<source lang="cpp">
#include <foo.h>
class MyClass {
public:
MyClass();
~MyClass();
};
</source>
Which gives the following result:
#include <foo.h>
class MyClass {
public:
MyClass();
~MyClass();
};
The extension also now supports using the syntaxhighlight tag instead of source, which can help if the code itself contains the source tag:
<syntaxhighlight lang="xml">
<header path="include/foo.h" />
<source path="src/foo.cpp" />
</syntaxhighlight>
MathJax produces nice and scalable mathematics, see their website () for a demonstration.
Link:
<math>
Skewness(X) = \frac{N}{(N-1)*(N-2)*\sigma(X)^3} * \sum_{i=1}^{N}{(X_i - E(X))^3}
</math>
[math]
Skewness(X) = \frac{N}{(N-1)*(N-2)*\sigma(X)^3} * \sum_{i=1}^{N}{(X_i - E(X))^3}
[/math]
This extension can be used to embed a Google spreadsheet into a wiki page.
Link:
This extension can be used to create footnotes on a wiki page.
Link:
You need to use the <ref> tag to define a reference:
This is an example of use of the Cite extension<ref>Criezy, ScummVM wiki, 2009</ref>.
And then to use the <references /> tag as a placeholder (e.g. at the bottom of the page for a footnote):
--- Notes ---
<references />
This example gives:
This is an example of use of the Cite extension[1].
--- Notes ---
This extension adds logical functions and functions that operate on strings to the wiki parser.
Link:
Syntax: See the extension's documentation for a list of functions and their syntax.
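As a hedged illustration, one of these functions is {{#if:}}, which tests whether a value is non-empty (the parameter name here is invented):

{{#if: {{{param|}}} | param was set | param was empty }}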
This extension is quite complex. Basically it can be used to display content from other pages into a wiki page.
Link:
Syntax: This extension is invoked with the parser function {{#dpl: .... }} or parser tag <DPL>. See the link above for more details and examples. | http://wiki.scummvm.org/index.php?title=Instructions_to_Wiki_Editors&printable=yes | CC-MAIN-2017-39 | en | refinedweb |
package org.jboss.tm.integrity;

import java.util.Set;

import org.jboss.tm.TransactionImpl;

/**
 * A transaction integrity that rolls back the transaction
 * if there are other threads associated with it.
 *
 * @author Adrian Brock
 * @version $Revision: 37459 $
 */
public class FailIncompleteTransactionIntegrity extends AbstractTransactionIntegrity
{
   public void checkTransactionIntegrity(TransactionImpl transaction)
   {
      // Assert the only thread is ourselves
      Set threads = transaction.getAssociatedThreads();
      String rollbackError = null;
      synchronized (threads)
      {
         if (threads.size() > 1)
            rollbackError = "Too many threads " + threads + " associated with transaction " + transaction;
         else if (threads.size() != 0)
         {
            Thread other = (Thread) threads.iterator().next();
            Thread current = Thread.currentThread();
            if (current.equals(other) == false)
               rollbackError = "Attempt to commit transaction " + transaction + " on thread " + current +
                  " with other threads still associated with the transaction " + other;
         }
      }
      if (rollbackError != null)
      {
         log.error(rollbackError, new IllegalStateException("STACKTRACE"));
         markRollback(transaction);
      }
   }
}
| http://kickjava.com/src/org/jboss/tm/integrity/FailIncompleteTransactionIntegrity.java.htm | CC-MAIN-2017-39 | en | refinedweb
A .java file must be compiled into a .class file with the javac compiler. Once you have the .class files, you can run those.
Are you using the Windows "run program" dialog box? If so, that's wrong - or at least, I've never heard of anyone doing it that way. Usually, once you have compiled file.java into file.class, from a command line, you do this:
java file
Can you explain more clearly what you are doing?
There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors
public class hello
{
    public static void main(String[] args)
    {
        System.out.println("Hello,World");
    }
}
After typing this, I went to Tools and then Compile Java. I clicked on this and it did compile, but when I tried to run the program using Run Java Application, I got the response "the process cannot access the file because it is being used by another process".
Also, when I do compile the file hello, it does not create a .class file.
I hope this makes sense, sorry if there is confusion.
Thank you for the help!
Originally posted by matt van:
How do you run the javac compiler? Also, how do I run a program from the command line? ...
See this Hello World tutorial (for Windows), which provides a step-by-step process for using the javac and java commands.
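In short, the tutorial's command-line steps look like this (the directory is illustrative; it assumes the JDK's bin directory is on your PATH):

cd C:\java
javac hello.java
java hello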
java.lang.NoClassDefFoundError: HelloWorld
Exception in thread "main"
Tool completed with exit code 1
To run the application you can use Tools --> Run Java Application (or Ctrl-2). I have never run into the error you describe in all the years I have used TextPad. In fact, I can't even make it produce that error when I open the .java and/or .class file in another application. [Tools --> Run ... is for something else entirely.]
So despite the appearance that you are using TextPad Tools properly, I am unable to help you resolve your issue.
Whatever editor the students are using (BlueJ, IntelliJ, Eclipse, TextPad, NotePad, DOS edit) they should send you their .java file in plain ascii text. You can compile their .java file with whatever method you choose (TextPad, command line or whatever) and likewise run the application after it is compiled using TextPad, command line or whatever. No problem. We do that all the time in the CattleDrive course here at JavaRanch. Make sure they are not sending you the .class file.
[ January 04, 2007: Message edited by: Marilyn de Queiroz ]
JavaBeginnersFaq
"Yesterday is history, tomorrow is a mystery, and today is a gift; that's why they call it the present." Eleanor Roosevelt
"Tool completed successfully"
in the result window, but TextPad will return to the .java file that you compiled.
After you successfully compile the .java file into a .class file (which you can see in the same directory as the .java file by using Windows Explorer), you should be able to "Run Java Application" and see some results in the result window (if it prints something using System.out.println).
The .java file should not be in the same directory that TextPad is installed in. I usually keep my .java files in a directory named "java" (i.e. C:\java\)
Double check that Configure --> Preferences --> Tools --> Run Java Application --> "Capture output" is checked.
JavaBeginnersFaq
"Yesterday is history, tomorrow is a mystery, and today is a gift; that's why they call it the present." Eleanor Roosevelt
[ January 05, 2007: Message edited by: marc weber ]
Originally posted by matt van:
Sorry about the last post [removed by mw]...
You can edit/delete your own posts by clicking on the paper/pencil icon. (Note to the curious: This was nothing "bad." It just looks like it got posted in mid-composition.)
So are you able to compile and run from the command line? If so, then I think we should move this thread to the IDE forum -- but only after we've verified that Java is correctly installed to work from the command line.
On the second message: what would I be looking for to kill the non-essential processes in Task Manager?
Originally posted by matt van:
... I tried to run from the command line and it does not seem to work...
Tell us exactly what steps you followed (starting from where you saved the .java file, and exactly what commands you entered), and where the problem occurred, including any error messages.
If you can copy and paste your command prompt session, that would be helpful.
[ January 11, 2007: Message edited by: matt van ]
Originally posted by Fred Rosenberger:
Can you explain more clearly what you are doing?
He is using Textpad. I started using it while reading Core Java by Cay S. Horstmann. Dr. Horstmann holds:
Textpad is.
"The differential equations that describe dynamic interactions of power generators are similar to that of the gravitational interplay among celestial bodies, which is chaotic in nature."
Originally posted by marc weber:
Close TextPad and follow the Hello World tutorial (for Windows). Tell us how these steps work for you.
Textpad will compile Java from a menu item within the application, compiling whatever source.java code file is open in the window.
"The differential equations that describe dynamic interactions of power generators are similar to that of the gravitational interplay among celestial bodies, which is chaotic in nature."
It should, but in this case it seems to be hanging on something. I think Textpad basically just uses a .bat file to issue the commands. If we can test the process by manually typing the commands, maybe we'll see what the problem is.
Originally posted by marc weber:
It should, but in this case it seems to be hanging on something. I think Textpad basically just uses a .bat file to issue the commands. If we can test the process by manually typing the commands, maybe we'll see what the problem is.
If so, I had this problem or something similar and fixed it by removing the Compile Java command using the delete command, then used the add Java compile command to replace it. The command began working again.
Textpad does seem to use batch files; it clutters up the directory with these.
Something along this line of thought is noted in the help files.
[ January 12, 2007: Message edited by: Nicholas Jordan ]
"The differential equations that describe dynamic interactions of power generators are similar to that of the gravitational interplay among celestial bodies, which is chaotic in nature."
Originally posted by matt van:
...I ended up saving my documents and using the recovery disk. I downloaded textpad and now the program works fine. Sorry for all of the confusion...
Wow, I'm glad you got it worked out!
| https://coderanch.com/t/405766/java/problems-running-programs-Textpad | CC-MAIN-2017-39 | en | refinedweb |
Inside the project.json file for an ASP.NET 5 application, you can find a commands section:
"commands": { "gen": "Microsoft.Framework.CodeGeneration", "kestrel": "Microsoft.AspNet.Hosting --server Kestrel --server.urls", "web": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.WebListener --server.urls" },
You can execute these commands from a command prompt with the .NET Execution Environment (dnx, formerly named k), which will hunt for a standard entry point in the assembly referenced by the command value, and also pass along any parameters in the command value.
In other words, “dnx web” will look in the Microsoft.AspNet.Hosting assembly for a main entry point, and then pass the –server and –server.urls parameters into the entry point.
Typical commands you’ll see in new projects include commands to spin up the application in a web server, or execute database migrations.
You can also create your own command hosts using the “ASP.NET 5 Console Application” template, or, you can add a Program class with a Main method inside an existing web application.
public class Program
{
    public Program(ILibraryManager libraryManager)
    {
        foreach (var lib in libraryManager.GetLibraries())
        {
            Console.WriteLine(lib.Name);
        }
    }

    public void Main(string[] args)
    {
        var config = new Configuration()
                        .AddJsonFile("config.json")
                        .AddCommandLine(args);

        Console.WriteLine(config.Get("message"));

        foreach (var arg in args)
        {
            Console.WriteLine(arg);
        }

        Console.ReadLine();
    }
}
Notice how the Main method doesn’t require the static keyword, in fact the dnx will instantiate the Program class itself if possible, and even inject dependencies. You can ask for IServiceManifest to see all the services available at startup. In the above code, I’ve asked for an ILibraryManager, and I can use the manager to list out all the dependencies for the application.
The ASP.NET 5 configuration system works for this scenario, too. The above code sets up config.json as the first configuration source. If we set up the configuration file with a message property:
{ "message": "Hello!" }
… then running the application will display the message “Hello!”.
We can override the message by specifying a command line value in the project.json file.
"commands": { "me" : "TestApp --message=Goodbye" }
In this case, “me” is the name of the command and “TestApp” is the name of the web application. Running dnx me from the command line will display the string “Goodbye”. You can also override the command line parameters specified in the project commands by passing parameters to dnx. In other words, “dnx me --message=Allo!” will print the string “Allo!”
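Put together, a hypothetical console session from the project folder (output formatting assumed):

> dnx me
Goodbye

> dnx me --message=Allo!
Allo!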
Although not the most exciting feature of ASP.NET 5, commands do open up some interesting possibilities for hosting and executing utility functions, data migrations, and scheduled tasks. | https://odetocode.com/blogs/scott/archive/2015/03/30/entry-points-for-asp-net-5-commands.aspx | CC-MAIN-2019-18 | en | refinedweb |
Hi there,
I've just watched the video of the "Bring pen and touch input to your Metro style apps with ink" session from the Build conference.
Can you tell me if there is any support for the TabletPC InkAnalyzer api in WinRT?
If not, it would be a great addition to the inker's toolkit for Metro.
Thank you...
Robert
Hi Robert,
Ink Analysis is not available on Windows 8 (nor, I believe, on Windows 7).
Basic recognition (as described in Jay and Annie's talk) is available via the Windows::UI::Input::Inking namespace and could be used to build more advanced analysis, but you'd have to implement that yourself.
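For example, a hedged C# sketch of that basic recognition (it assumes an InkManager that has already captured strokes via ProcessPointerDown/Update/Up; error handling omitted):

var inkManager = new Windows.UI.Input.Inking.InkManager();
// ... feed pointer input to inkManager while the user writes ...

// Ask the built-in recognizer for text candidates
var results = await inkManager.RecognizeAsync(
    Windows.UI.Input.Inking.InkRecognitionTarget.All);
foreach (var result in results)
{
    var candidates = result.GetTextCandidates(); // best guesses first
}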
--Rob | https://social.msdn.microsoft.com/Forums/en-US/e2ae0b05-15b4-4253-9d0e-c5e9b45ce01b/build-session-192-inkanalyzer-question?forum=tailoringappsfordevices | CC-MAIN-2019-18 | en | refinedweb |
I’ve published ShareCoffee today on GitHub. ShareCoffee is a small library containing cool stuff for SharePoint App developers. ShareCoffee is entirely written in CoffeeScript using the test-driven-design approach with (Mocha, ChaiJS, and SinonJS). Since the SharePoint App model is available, I thought about writing this library, but within the last four days I sat down and created it! 🙂
ShareCoffee is offering all the stuff I was faced with since I started App development for SharePoint. The goal was for me to create a library which provides a single programming interface for any kind of SharePoint App. Referencing the sources manually is quite dirty; it's a better approach to install ShareCoffee from NuGet or bower.io using the following commands
NuGet
Install-Package ShareCoffee
Bower.IO
bower install ShareCoffee
After the sources have been included in your project, you have to add a script reference to your SharePoint App WebSite, and you're ready to go. When writing JavaScript, you can enable basic IntelliSense in Visual Studio by adding a reference line to your script.

ShareCoffee.Commons offers getAppWebUrl and getHostWebUrl; getAppWebUrl looks, for example, within the query string for the SPAppWebUrl parameter. For both methods, the custom load functions have the highest priority, which means that if you pass a custom load function, getAppWebUrl and getHostWebUrl will only call these. You can set the custom load function for getAppWebUrl as shown here:
var appWebUrl = ShareCoffee.Commons.getAppWebUrl();
// looks within _spPageContext if present
// looks within the QueryString for SPAppWebUrl

ShareCoffee.Commons.loadAppWebUrlFrom = function () {
  return "";
};

appWebUrl = ShareCoffee.Commons.getAppWebUrl();
// will only invoke ShareCoffee.Commons.loadAppWebUrlFrom()
ShareCoffee.UI
ShareCoffee.UI is offering various functions for interacting with SharePoint’s UI. The most powerful method is of course
ShareCoffee.UI.loadAppChrome(chrome settings); which does the whole SharePoint-App-Chrome loading stuff for you! The following sample shows how
loadAppChrome is configured.
var chromeSettings = new ShareCoffee.ChromeSettings("", "My AutoHosted SharePoint App",
  new ShareCoffee.SettingsLink("foo.html", "Foo", true),
  new ShareCoffee.SettingsLink("bar.html", "Bar", true)
);

var onAppChromeLoaded = function () {
  console.log("chrome should be loaded now!");
};

ShareCoffee.UI.loadAppChrome("chrome-placeholder-id", chromeSettings, onAppChromeLoaded);
Notifications can be shown and removed like this:

// showNotification(message, isSticky = false)
// isSticky defaults to false
ShareCoffee.UI.showNotification("My notification");

var stickyNotificationId = ShareCoffee.UI.showNotification("My sticky notification", true);
ShareCoffee.UI.removeNotification(stickyNotificationId);
Moreover, the SharePoint status bar is supported; see the status samples here:
var lastStatusId = null;

// showStatus(title, contentAsHtml, showOnTop = false, color = 'blue');

// use defaults for showOnTop (false) and color (blue)
lastStatusId = ShareCoffee.UI.showStatus("Status Title", "ShareCoffee <b>Status</b>");

// show status on top
lastStatusId = ShareCoffee.UI.showStatus("Status Title", "ShareCoffee <b>Status</b> displayed on top", true);

ShareCoffee.UI.setStatusColor(lastStatusId, 'red');
ShareCoffee.UI.setStatusColor(lastStatusId, 'yellow');
ShareCoffee.UI.setStatusColor(lastStatusId, 'blue');

// removeStatus will not be forwarded to SharePoint if statusId is null or undefined
ShareCoffee.UI.removeStatus(lastStatusId);
ShareCoffee.UI.removeAllStatus();
What’s next
Within the next post on ShareCoffee, I’ll explain how to work with the
ShareCoffee.REST namespace!
Happy Scripting. | https://thorsten-hans.com/sharecoffee-is-available | CC-MAIN-2019-18 | en | refinedweb |
For server activated objects in remoting, we commonly tend to expose the server remote class to the client in order to build the proxy object in the client. I would say the main goal in remoting is to ensure that the code on the server does not have to be shipped to the client.
My article will explain how to hide the implementation of the server code using broker pattern.
The application contains the sample code for remoting using server activated objects and helps in understanding the broker pattern more. The application uses the TcpChannel with binary serialization.
My target audience is those who are aware of the basic remoting concepts :-)
"Hide the implementation details of the remote service invocation by encapsulating them into a layer other than the business component itself."
The client application (referred to as ClientAPP) will invoke the methods from the BrokerInterface as though it is invoking any local interface. However, the methods inside the client interface trigger services to be performed on the remote application (referred to as ServerAPP). This is transparent to the client because the remote service object implements the same interface.
The BrokerInterface is a necessary abstraction that makes distribution possible by providing the contract about the service to the client, which the server provides without exposing the implementation details on the client side. Isolation, simplicity and flexibility are the major benefits while using this pattern.
My application uses .NET remoting to retrieve the user details as a DataSet from the server. The following mechanism is used to implement this functionality:
- The remote-enabled class derives from MarshalByRefObject and is deployed in the RemoteComponents assembly.
- The user details are returned to the client as a DataSet.
public class UserManager : MarshalByRefObject, IUserManager
{
    public DataSet listUserDetails()
    {
        // logic for retrieving user details
    }
}
The UserManager is a remote enabled class providing a method named listUserDetails(), which returns the list of user details. The logic for retrieving the user details is written here.
RemotingConfiguration.RegisterWellKnownServiceType(
typeof(UserManager),
"UserDetails",
WellKnownObjectMode.Singleton);
* The activation mode for UserManager is a singleton, so you will have only one instance of UserManager running on the server. If the user details is supposed to change based on any group, then the activation mode can be changed to SingleCall.
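A hedged sketch of the server-side channel setup that would accompany this registration (port 4820 matches the client code below; it assumes the System.Runtime.Remoting.Channels and System.Runtime.Remoting.Channels.Tcp namespaces are imported):

// Hypothetical server bootstrap, not shown in the article excerpt
TcpChannel channel = new TcpChannel(4820);
ChannelServices.RegisterChannel(channel);
RemotingConfiguration.RegisterWellKnownServiceType(
    typeof(UserManager), "UserDetails", WellKnownObjectMode.Singleton);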
namespace BrokerInterface
{
    public interface IUserManager
    {
        DataSet listUserDetails();
    }
}
This is the code for the extracted IUserManager interface. The implementation for the method ‘listUserDetails’ will be written in the UserManager class deployed in the server.
location="tcp://localhost:4820/";
IUserManager mgr= (IUserManager)Activator.GetObject(typeof(IUserManager),
location + "UserDetails");
The client program calls the remoting framework method Activator.GetObject() to retrieve a proxy for the UserManager object on the server. The method specifies the location where the object is located along with the type that should be returned.
In this case, you should expect an IUserManager object at the following location: tcp://localhost:4820/UserDetails. After you have the instance, you can call methods on it as if it were in the same application domain.
userDS = mgr.listUserDetails();
When using .NET remoting, you must pay careful attention to the deployment of the application into different assemblies. I repeat, the main goal is to ensure that the code on the server does not have to be shipped to the client.
Assemblies shipped to the client in this application:
Assemblies required in the server: | https://www.codeproject.com/Articles/9679/NET-Remoting-Using-Broker-Pattern | CC-MAIN-2019-18 | en | refinedweb
Introduction
The Magic Hand allows people with disabilities and motor skill impairments to enjoy the creativity of drawing and writing in a simulated environment. The Magic Hand is a wearable glove that senses the motion of your index finger and translates that into the drawing of lines on a computer screen.
Materials Needed
LSM9DOF Breakout Board --- $24.95 ---
Adafruit Feather with Wifi --- $18.95 ---
Female/Female Wires --- $1.95 ---
Tape/Velcro strips --- $3
Two magnets of equal strength --- Prices vary
How it works
By using an accelerometer, we can gather acceleration data for the y-axis, which will help us determine when the user's finger is moving up and down. Because our accelerometer measures acceleration with respect to the center of the earth, we can't determine the acceleration of the x-axis (left or right). Luckily, the LSM9DOF breakout board also contains a magnetometer, which allows us to gather data on magnetic fields. We place two magnets 30 cm apart and have the glove in between. If the magnetic data reads positive, then we know the glove is moving right, and vice versa. After all the data is collected by the accelerometer/magnetometer, it is sent via wire to the feather, which is connected to a computer over wifi; the feather then forwards the data to the computer, where we can use it in our code.
Step 1: Physical Prototype 1
This prototype is meant to be a glove sewn together loosely on the hand in order for it to slip over the electronic devices. The electronic devices will then be attached by velcro to the under armor sleeve base, combined with a basic glove on the hand. Then the green glove will slip over the base and the electronic devices.
Steps in making the prototype glove:
- Get two pieces of fabric large enough to trace hand
- Trace hand onto both pieces of fabric and cut them out
- Put the two hand cutouts together so they are perfectly aligned
- Next, to prepare the sewing machine, run the thread through the indicated spots on the machine
- When the sewing machine is set up, lift the needle and place the two put-together pieces of fabric under the needle
- Make sure the needle is lined up on the very edge of the fabric, start the machine, and sew along the edges of the fabric, while leaving the two pieces unsewn at the wrist so a hand can fit in.
Step 2: Physical Prototype 2
Our final prototype is a regular glove combined with Velcro strap that is adjustable to any wrist. The glove and strap are sewn together, and the electronic devices are attached to the glove via Velcro.
Steps in making the 2nd prototype of the glove:
- Purchase a glove, the material of the glove does not matter.
- Purchase a velcro wrist strap
- Purchase a portable battery
- Purchase Sticky Velcro
- With a sewing needle, attach the velcro wrist strap to the base of the glove
- The wrist strap should be able to adjust to different wrist sizes.
- Attach the sticky tape to the base of the accelerometer and attach it to the index finger of the glove
- Attach sticky tape to the feather and attach it to the top of the glove.
- Using wires connect the 3V3 pin in the feather to the VIN pin in the accelerometer
- Using wires connect the GND pin in the feather to the GND pin on the accelerometer.
- Using wires connect the SCL pin in the feather to the SCL pin on the accelerometer.
- Using wires connect the SDA pin in the feather to the SDA pin on the accelerometer.
- Connect at least a 5 volt battery through USB to the feather to provide power.
Step 3: Magnets
Step 1: Put the two magnets of equal strength across from each other.
Step 2: Measure out a 30 cm gap between the two magnets.
Step 3: Place the magnetometer exactly in the middle of the two magnets. You should receive data around 0 while it's in the middle. If you receive a reading of zero, skip to step 5.
Step 4: If the reading is not zero or close to zero, then you must adjust the distance of the magnets. If the reading is negative, move the left magnet a cm or 2 to the left, or until the reading is zero. If it is positive, do the same thing with the right magnet.
Step 5: Write code that accepts the data from the magnetometer and reads whether it is positive or negative. If positive, have the code draw a line to the right; if negative, draw a line left.
Step 4: Code...
Introduction:
In order to process data from the accelerometer, a client/server relationship must be established between the Adafruit feather and the server that processes the data (running on a laptop/desktop). Two code files will need to be created: one for the client (the Adafruit feather), and the other for the server (in this case, Jarod’s laptop). The client is written in C++, and the server is written in python. The language used for the client matters as Arduino is mainly a C++ language, and changing it to use a different language is difficult. The server can be written in any language, as long as it has network features.
Setting up the Client:
First, we will set up the client code. Most of the WiFi connection code is readily available through the Adafruit libraries. We begin by including relevant classes.
#include <ESP8266WiFi.h>
#include <SPI.h>
#include <Wire.h>
#include <Adafruit_Sensor.h>
#include <Adafruit_LSM9DS0.h>
Set some variables that will be used throughout the code.
// Connect to a network
const char* ssid = "MMServer";
const char* password = "MMServer-Password";

// IP and port of the server which will receive data
const char* host = "149.160.251.3";
const int port = 12347;

bool connected = false;

// Initialize motion detector
Adafruit_LSM9DS0 lsm = Adafruit_LSM9DS0(1000);
WiFiClient client;
Create a setup() function which will be run as soon as the feather starts.
// Setup WiFi connection, and connect to the server
void setup() {
  Serial.begin(9600);
  delay(100);

  Serial.println();
  Serial.println();
  Serial.print("Connecting to ");
  Serial.println(ssid);

  // Start WiFi
  WiFi.begin(ssid, password);

  // Connecting...
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }

  // Successfully connected to WiFi
  Serial.println("");
  Serial.println("WiFi connected");
  Serial.println("IP address: ");
  Serial.println(WiFi.localIP());

#ifndef ESP8266
  while (!Serial);
#endif
  Serial.begin(9600);
  Serial.println("Sensor Test");

  // Initialize the sensor
  if (!lsm.begin()) {
    // There was a problem detecting the LSM9DS0
    Serial.print(F("Ooops, no LSM9DS0 detected ... Check your wiring or I2C ADDR!"));
    while (1);
  }
  Serial.println(F("Found LSM9DS0 9DOF"));

  // Begin connecting to server
  Serial.print("Connecting to ");
  Serial.println(host);

  // Check for successful connection. If failed then abort
  if (!client.connect(host, port)) {
    Serial.println("connection failed");
    connected = false;
    return;
  } else {
    connected = true;
  }

  // Setup the sensor gain and integration time
  configureSensor();
}
We then need a loop function that will repeatedly loop. In this case, it is used to repeatedly send data from the accelerometer to the server in the form of “[z_accel]:[y_mag]:[z_mag]”. The client.print(numbers); function is what sends data to the server.
void loop() {
  delay(250);

  if (connected) {
    // This will send data to the server
    sensors_event_t accel, mag, gyro, temp;
    lsm.getEvent(&accel, &mag, &gyro, &temp);

    String numbers;
    numbers += accel.acceleration.z;
    numbers += ":";
    numbers += mag.magnetic.y;
    numbers += ":";
    numbers += mag.magnetic.z;

    Serial.print(numbers);
    client.print(numbers);
    Serial.println();
  } else {
    establishConnection();
  }
}
For some utility functions, we need one to establish the connection between the feather and the server.
void establishConnection(){
  if (!client.connect(host, port)) {
    Serial.println("connection failed");
    connected = false;
    return;
  } else {
    connected = true;
  }
}
We also need to configure the sensor and give it the range of values it will read. For example, acceleration has 5 options for the range: 2g, 4g, 6g, 8g, and 16g.
void configureSensor(void)
{
  // Set the accelerometer range
  //lsm.setupAccel(lsm.LSM9DS0_ACCELRANGE_2G);
  lsm.setupAccel(lsm.LSM9DS0_ACCELRANGE_4G);
  //lsm.setupAccel(lsm.LSM9DS0_ACCELRANGE_6G);
  //lsm.setupAccel(lsm.LSM9DS0_ACCELRANGE_8G);
  //lsm.setupAccel(lsm.LSM9DS0_ACCELRANGE_16G);

  // Set the magnetometer sensitivity
  //lsm.setupMag(lsm.LSM9DS0_MAGGAIN_2GAUSS);
  //lsm.setupMag(lsm.LSM9DS0_MAGGAIN_4GAUSS);
  //lsm.setupMag(lsm.LSM9DS0_MAGGAIN_8GAUSS);
  lsm.setupMag(lsm.LSM9DS0_MAGGAIN_12GAUSS);

  // Setup the gyroscope
  lsm.setupGyro(lsm.LSM9DS0_GYROSCALE_245DPS);
  //lsm.setupGyro(lsm.LSM9DS0_GYROSCALE_500DPS);
  //lsm.setupGyro(lsm.LSM9DS0_GYROSCALE_2000DPS);
}
Setting up the Server:
The server will be a python file that will run on the command line of a computer. To start, import the required classes.
import socket
import re
import pyautogui
socket is used for networking. re is used for regex, or string manipulations. pyautogui is a python library which will allow the drawing to happen (discussed later).
Next, we should define some variables. These will be global variables, so they will be accessed in multiple functions. They will be used later in the code.
i = 0
n = 0
line = 1

data_list = []

mag_data = []
mag_calib_y = 0
mag_offset_y = 0

z_calib = 0
z_offset = 0
z_moving_offset = 0
z_diff = 0
z_real = 0
z_velo = 0
z_pos = 0

keep_offset = False
first_data = True
We now need a function to create a server and open it for incoming connections.
def startServer():
    global i
    global first_data

    # initialize server socket
    serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    serversocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

    # Server IP address and port
    host = "149.160.251.3"
    port = 12347
    server_address = (host, port)

    # Open the server and listen for incoming connections
    print('Starting server on %s port %s' % server_address)
    serversocket.bind(server_address)
    serversocket.listen(5)

    # Wait for connections...
    while True:
        print('Waiting for connection...')

        # Accept an incoming connection
        (clientsocket, address) = serversocket.accept()

        # Try to parse data received
        try:
            print('Connection established from ', address)
            while True:
                # Receive the data and send it for processing
                data = clientsocket.recv(25)
                accel_data = re.split('[:]', str(data))
                accel_data[0] = accel_data[0][2:]
                accel_data[1] = accel_data[1]
                accel_data[2] = accel_data[2][1:-1]
                print(accel_data)
                i += 1
                if (i < 51):
                    calibData(accel_data)
                else:
                    movingAccel(accel_data[0])
                    processData(accel_data)
                first_data = False
        finally:
            # Close the socket to prevent unnecessary data leak
            clientsocket.close()
We now require the functions that will process all of the data. The first step to take, and the first function called, is the calibration of the sensor for the calculation purposes.
def calibData(list):
    global z_calib
    global z_offset
    global mag_data
    global mag_calib_y
    global mag_offset_y

    z_calib += float(list[0])
    mag_calib_y += float(list[1])
    if (i == 50):
        z_offset = z_calib / 50
        mag_offset_y = mag_calib_y / 50
        z_calib = 0
        mag_calib_y = 0
        mag_data.append(mag_offset_y)
Next, we create a moving acceleration offset. This makes it so the program recognizes when someone stops moving their finger because all of the values for acceleration that are sent to the server should be the same at that time.
def movingAccel(num):
    global z_calib
    global z_diff
    global z_moving_offset
    global z_offset
    global data_list
    global n
    global keep_offset

    if (n < 10):
        n += 1
        z_calib += float(num)
        data_list.append(float(num))
    else:
        z_moving_offset = z_calib / 10
        for entry in data_list:
            z_diff = float(entry) - z_moving_offset
            if (z_diff > 0.2 or z_diff < -0.2):
                # motion detected within data, restart
                keep_offset = True
                n = 0
                z_calib = 0
                z_moving_offset = 0
                z_diff = 0
                data_list = []
                break
        if not keep_offset:
            # stationary in data, set new z_offset
            z_offset = z_moving_offset
            print("New z_offset: ")
            print(z_offset)
            n = 0
            z_calib = 0
            z_moving_offset = 0
            z_diff = 0
            data_list = []
            keep_offset = False
        keep_offset = False
Next, we do the brunt of the math. This involves translating the acceleration data into a position data which will allow us to tell the direction that the user moves their finger.
def processData(list): #[accel.z, mag.y]
    global z_offset
    global z_real
    global z_velo
    global z_pos
    global first_data
    global mag_data

    z_real = float(list[0]) - z_offset
    mag_y = list[1]
    mag_z = list[2]
    left = False
    right = False

    # Don't process acceleration until absolutely sure it has accelerated
    # Prevents mechanical noise from contributing to position
    if (z_real < 0.20 and z_real > -0.20):
        z_real = 0

    # Begin integrations to find position
    if (first_data):
        mag_data.append(mag_y)
        z_pos = (0.5 * z_real * 0.25 * 0.25) + (z_velo * 0.25) + z_pos
        z_velo = z_real * 0.25
        pyautogui.moveTo(1500, 1000)
    else:
        z_pos = (0.5 * z_real * 0.25 * 0.25) + (z_velo * 0.25) + z_pos
        z_velo = (z_real * 0.25) + z_velo
        del mag_data[0]
        mag_data.append(mag_y)
        if (float(mag_data[1]) - float(mag_data[0]) > 0.03):
            right = True
        elif (float(mag_data[1]) - float(mag_data[0]) < -0.03):
            left = True
        if (right):
            movement(50, int(z_pos * 1000))
        elif (left):
            movement(-50, int(z_pos * 1000))
        z_velo = 0
        z_pos = 0
Now, finally, we move the cursor! To do this, we opened a paint window and made it full screen. The pyautogui library contains a function called pyautogui.dragRel(x,y); which we use to drag the mouse cursor from one point to the next. It uses relative position data so the movement is relative to the last position of the cursor.
def movement(x, y):
print("moving to", x, -y) pyautogui.dragRel(x,-y)
Lastly, we need to call the main function to even allow all of this code to run.
# Calls the function to begin the server
startServer()
3 Discussions
9 months ago
very clever!!!
1 year ago
That's a cool idea! I hope you get an A on your project :)
Reply 1 year ago
We are very happy with the results of our project. Nothing feels better than getting something to work. Thanks for the good luck. | https://www.instructables.com/id/Accel-Writing-Magic-Hand/ | CC-MAIN-2019-18 | en | refinedweb
Secure Git credential storage for Windows with support for Visual Studio Team Services, GitHub, and Bitbucket multi-factor authentication.
Git Credential Manager and Git Askpass work out of the box for most users. Configuration options are available to customize or tweak behavior(s).
The Git Credential Manager for Windows [GCM] can be configured using Git’s configuration files, and follows all of the same rules Git does when consuming the files.
Global configuration settings override system configuration settings, and local configuration settings override global settings; and because the configuration details exist within Git's configuration files you can use Git's git config utility to set, unset, and alter the setting values.
The GCM honors several levels of settings, in addition to the standard local > global > system tiering Git uses.
Since the GCM is HTTPS based, it’ll also honor URL specific settings.
Regardless, all of the GCM's configuration settings begin with the term credential.
Additionally, the GCM respects GCM specific environment variables as well.
Regardless, the GCM will only be used by Git if the GCM is installed and the key/value pair credential.helper = manager is present in Git's configuration.
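In practice, this is the standard command to enable the GCM globally:

git config --global credential.helper manager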
As an example of specificity: credential.microsoft.visualstudio.com.namespace is more specific than credential.visualstudio.com.namespace, which is more specific than credential.namespace.
In the examples above, the credential.namespace setting would affect any remote repository; the credential.visualstudio.com.namespace would affect any remote repository in the domain, and/or any subdomain (including www.), of 'visualstudio.com'; whereas the credential.microsoft.visualstudio.com.namespace setting would only be applied to remote repositories hosted at 'microsoft.visualstudio.com'.
For the complete list of settings the GCM understands, see the list below.
credential.authority
Defines the type of authentication to be used.
Supports Auto, Basic, AAD, MSA, GitHub, Bitbucket, Integrated, and NTLM.
Use AAD or MSA if the host is 'visualstudio.com' Azure Domain or Live Account authentication, respectively.
Use GitHub if the host is 'github.com'.
Use BitBucket or Atlassian if the host is 'bitbucket.org'.
Use Integrated or NTLM if the host is a Team Foundation, or other NTLM authentication based, server.
Defaults to Auto.
git config --global credential.microsoft.visualstudio.com.authority AAD
See GCM_AUTHORITY
credential.httpProxy
Causes the proxy value to be considered when evaluating credential target information. A proxy setting should be established if use of a proxy is required to interact with Git remotes.
The value should be the URL of the proxy server.
Defaults to not using a proxy server.
git config --global credential.github.com.httpProxy
See HTTP_PROXY
credential.interactive
Specifies if the user can be prompted for credentials or not.
Supports Auto, Always, or Never. Defaults to Auto.
git config --global credential.microsoft.visualstudio.com.interactive never
See GCM_INTERACTIVE
credential.modalPrompt
Forces authentication to use a modal dialog instead of asking for credentials at the command prompt.
Supports true or false. Defaults to true.
git config --global credential.modalPrompt true
See GCM_MODAL_PROMPT
credential.namespace
Sets the namespace for stored credentials.
By default the GCM uses the ‘git’ namespace for all stored credentials, setting this configuration value allows for control of the namespace used globally, or per host.
Supports any ASCII, alpha-numeric only value. Defaults to git.
git config --global credential.namespace name
See GCM_NAMESPACE
credential.preserve
Prevents the deletion of credentials even when they are reported as invalid by Git. Can lead to lockout situations once credentials expire and until those credentials are manually removed.
Supports true or false. Defaults to false.
git config --global credential.visualstudio.com.preserve true
See GCM_PRESERVE
credential.httpTimeout
Sets the maximum time, in milliseconds, for a network request to wait before timing out. This allows changing the default for slow connections.
Supports an integer value. Defaults to 90,000 milliseconds.
git config --global credential.visualstudio.com.httpTimeout 100000
See GCM_HTTP_TIMEOUT
credential.tokenDuration
Sets a limit, in hours, on the validity of Personal Access Tokens requested from Azure DevOps.
If the value is greater than the maximum duration set for the account, the account value supersedes. The value cannot be less than one hour (1).
Defaults to the account token duration. Honored when authority is set to AAD or MSA.
git config --global credential.visualstudio.com.tokenDuration 24
credential.useHttpPath
Instructs Git to supply the path portion of the remote URL to credential helpers. When path is supplied, the GCM will use the host-name + path as the key when reading and/or writing credentials.
Note: This option changes the behavior of Git.
Supports true or false. Defaults to false.
git config --global credential.bitbucket.com.useHttpPath true
credential.username
Instructs Git to provide user-info to credential helpers. When user-info is supplied, the GCM will use the user-info + host-name as the key when reading and/or writing credentials. See RFC: URI Syntax, User Information for more details.
Note: This option changes the behavior of Git.
Supports any URI legal user-info. Defaults to not providing user-info.
git config --global credential.microsoft.visualstudio.com.username johndoe
credential.validate
Causes validation of credentials before supplying them to Git. Invalid credentials get a refresh attempt before failing. Incurs minor network operation overhead.
Supports true or false. Defaults to true. Ignored when authority is set to Basic.
git config --global credential.microsoft.visualstudio.com.validate false
See GCM_VALIDATE
credential.vstsScope
Overrides the GCM default scope request when generating a Personal Access Token from Azure DevOps.
The supported format is one or more scope values separated by whitespace, commas, semi-colons, or pipe '|' characters.
Defaults to vso.code_write|vso.packaging. Honored when host is 'dev.azure.com'.
git config --global credential.microsoft.visualstudio.com.vstsScope vso.code_write|vso.packaging_write|vso.test_write
See GCM_VSTS_SCOPE
credential.writeLog
Enables trace logging of all activities.
Logs are written to the local .git/ folder at the root of the repository.
Note: This setting will not override the GCM_TRACE environment variable.
Supports true or false. Defaults to false.
git config --global credential.writeLog true
See GCM_WRITELOG
[credential "microsoft.visualstudio.com"]
    authority = AAD
    interactive = never
    preserve = true
    tokenDuration = 24
    validate = false
[credential "visualstudio.com"]
    authority = MSA
[credential]
    helper = manager
    writelog = true | http://microsoft.github.io/Git-Credential-Manager-for-Windows/Docs/Configuration.html | CC-MAIN-2019-18 | en | refinedweb
Sorry it's taking me so long to get the posts out. The series turned out to be a little longer than I anticipated 🙂 I got a lot of good feedback on the Data Model stuff.
First off, I want to mention layering. The DataModel typically is a layer on top of some other lower level data model that's not optimized to WPF. This might be something specific to the database technology you are using. Or, it could wrap some native object that's accessed via interop (I've run into a couple of examples of people doing this recently).
Also, I made some simplifications in the first post that made them much less interesting. The big simplification was that the models only fetched their data once and weren't live. Things get a lot more interesting when the models keep their data up to date.
Once you make them live, you run into a lifetime issue. If you have a large set of items, you only want to keep the visible items live. We'll do this by giving models Activate and Deactivate functions that control when it is live. Let's start with the DataModel changes. I'm not going to list the full class here, just the modifications. I'll post the entire sample soon when I finish up the series. If you have any questions about how to apply these changes, let me know.
First, an IsActive property, which is implemented much like the State property. We could make it settable to activate and deactivate the model, but I like to think of those as methods rather than a property change:
public bool IsActive
{
    get
    {
        VerifyCalledOnUIThread();
        return _isActive;
    }
    private set
    {
        VerifyCalledOnUIThread();
        if (value != _isActive)
        {
            _isActive = value;
            SendPropertyChanged("IsActive");
        }
    }
}
And, the Activate/Deactivate methods:
public void Activate()
{
    VerifyCalledOnUIThread();
    if (!_isActive)
    {
        this.IsActive = true;
        OnActivated();
    }
}

public void Deactivate()
{
    VerifyCalledOnUIThread();
    if (_isActive)
    {
        this.IsActive = false;
        OnDeactivated();
    }
}
And, some simple overridable stubs:
protected virtual void OnActivated() { }
protected virtual void OnDeactivated() { }
This is all pretty simple, we can just activate it and deactivate it. Subclasses can override the behavior when activated and deactivated.
Now, let's modify the StockModel to be live while activated. We'll use a DispatcherTimer to update on an interval. We'll start the timer and do the first update when activated:
protected override void OnActivated()
{
    VerifyCalledOnUIThread();
    base.OnActivated();
    _timer = new DispatcherTimer(DispatcherPriority.Background);
    _timer.Interval = TimeSpan.FromMinutes(5);
    _timer.Tick += delegate { ScheduleUpdate(); };
    _timer.Start();
    ScheduleUpdate();
}
And, we'll stop the timer when deactivated:
protected override void OnDeactivated()
{
    VerifyCalledOnUIThread();
    base.OnDeactivated();
    _timer.Stop();
    _timer = null;
}
When we're ready to do an update, we'll use a background thread as before:
private void ScheduleUpdate()
{
    VerifyCalledOnUIThread();
    // Queue a work item to fetch the quote
    if (ThreadPool.QueueUserWorkItem(new WaitCallback(FetchQuoteCallback)))
    {
        this.State = ModelState.Fetching;
    }
}
Note: We could have used a System.Threading.Timer to do the updates where we'd be called on a background thread directly, but then we couldn't set the model state to fetching. We'd have to send that back to the UI thread.
Ok, now we've made it so you can activate it and deactivate it, but when do we do so? Let's say we've got thousands of our models in a ListBox. It's only going to only show a few of them on the screen at a time and we only want the ones on screen to be active. We'll use the attached property trick to do this without having to write custom code each time we want to activate and deactivate models. The basic idea is that we're going to display our model in a DataTemplate and we want to activate the model when FrameworkElements in the UI are loaded, and deactivate the model when they are unloaded. With the attached property trick, our DataTemplate Xaml just has to be:
<DataTemplate DataType="{x:Type local:StockModel}">
<StackPanel Orientation="Horizontal" local:ActivateModel.Model="{Binding}">
<TextBlock Text="{Binding Symbol}" Width="100"/>
<TextBlock Text="{Binding Quote}" />
</StackPanel>
</DataTemplate>
Note the local:ActivateModel.Model={Binding}. Now, here's how we implement that magic property! First, we need to define the DependencyProperty and the accessor functions:
public static class ActivateModel
{
    public static readonly DependencyProperty ModelProperty =
        DependencyProperty.RegisterAttached("Model", typeof(DataModel), typeof(ActivateModel),
            new PropertyMetadata(new PropertyChangedCallback(OnModelInvalidated)));

    public static DataModel GetModel(DependencyObject sender)
    {
        return (DataModel)sender.GetValue(ModelProperty);
    }

    public static void SetModel(DependencyObject sender, DataModel model)
    {
        sender.SetValue(ModelProperty, model);
    }
We've registered a PropertyChangedCallback on the property, so any time it is changed (including when it's initially set), OnModelInvalidated is going to be called. In OnModelInvalidated, we're going to register for Loaded and Unloaded events on the the FrameworkElement we are attached to. We also have to do a bit of bookkeeping to clean up if we were previously pointing to a different model.
private static void OnModelInvalidated(DependencyObject dependencyObject, DependencyPropertyChangedEventArgs e)
{
    FrameworkElement element = (FrameworkElement)dependencyObject;

    // Add handlers if necessary
    if (e.OldValue == null && e.NewValue != null)
    {
        element.Loaded += OnElementLoaded;
        element.Unloaded += OnElementUnloaded;
    }

    // Or, remove if necessary
    if (e.OldValue != null && e.NewValue == null)
    {
        element.Loaded -= OnElementLoaded;
        element.Unloaded -= OnElementUnloaded;
    }

    // If loaded, deactivate old model and activate new one
    if (element.IsLoaded)
    {
        if (e.OldValue != null)
        {
            ((DataModel)e.OldValue).Deactivate();
        }
        if (e.NewValue != null)
        {
            ((DataModel)e.NewValue).Activate();
        }
    }
}
And, here are the Loaded/Unloaded handlers.
static void OnElementLoaded(object sender, RoutedEventArgs e)
{
    FrameworkElement element = (FrameworkElement)sender;
    DataModel model = GetModel(element);
    model.Activate();
}

static void OnElementUnloaded(object sender, RoutedEventArgs e)
{
    FrameworkElement element = (FrameworkElement)sender;
    DataModel model = GetModel(element);
    model.Deactivate();
}
Pretty neat trick, isn't it? This means it's really easy for us to activate models when they are visible in the UI by simply adding the ActivateModel.Model property to the UI element. Since we know all FrameworkElements will get unloaded when they go away, we know we won't have to worry about leaking anything. It doesn't require any custom activate/deactivate code per view!
We get a form of data virtualization out of this trick. If the data is very expensive, the models can act as a relatively cheap shell. When you go to view a large collection of items, you just need to provide a collection of the data models instead of the full items. The expensive data can then be accessed only when the UI for the data is visible on the screen and then thrown out when the data goes offscreen.
I'll confess, I've made another simplification from what we've done in Max. The lifetime management of a lot of our models is slightly more complex. And, as much as we hated to do so, we ended up adding a reference counting to our models. So we keep track of multiple levels of activation and a model is live as long as that count is greater than zero. To get something a bit like smart pointers, we have Activate return an IDisposable which we call an "activation cookie". Disposing of the activation cookie decrements the activation count. The cookie is the only way to decrement the count, there's no public method on the model. It's smart enough to not let you decrement multiple times. And, in debug builds, we have a finalizer on the cookie that asserts if Dispose wasn't called, to help catch leaks. We were quite happy to leaving reference counting when moving to managed code, so it hurt a bit to bring it back 🙂
Ok, I think I'm just a couple of posts from wrapping this up!
Hi Dan!
Very great series! I'm really impressed, that's really tricky!
I'm going to try implementing this pattern for 3DObjects (MV3D). I know 3DObjects are not FrameworkElements or UIElements, but I think not all, but some, of the concepts of this pattern could be very useful to implement binding between a 3DObject and a DataModel (the position in 3D space, for example, could be possible data from the DataModel). Thanks a lot for great work! Waiting for the next/last couple of posts...
PS: your session at deep dive was very cool! Thanks.
How about posting those last couple of posts to wrap this up? 🙂
Oh, Yes, I’m waiting for that too. I hope it comes soon :). Let’s wrap it up…..
Sorry, I’ll get to it this weekend at the latest!
In part 5, I talked about commands and how they are used for behavior. Now, I want to talk about a better…
Hi Dan,
Thanks a lot! I’m looking forward for other posts.
One question came to my mind while reading DM-M-VM series:
Why don’t you use async bindings for fetching data in background?
Thanks,
paul
There are a few things that steer me away from asynchronous data binding. The first is that the property getters will now be callable by multiple threads, and that makes the threading model for the object much more complicated. Second, I think it would be surprising that some properties could hang indefinitely, but others wouldn’t. As a consumer of the class, how would you know what’s safe to call? And, finally, you get more control if you do it yourself. With asynchronous binding, WPF creates the thread for you, but with this mechanism you have more control over scheduling updates and such.
Great series! Thanks for the efforts!
In reviewing all of the parts of this series tonight, the statement "we ended up adding a reference counting to our models" got me curious. You obviously weren’t happy about having to do it, so what exactly motivated you to implement reference counting in Max?
Thanks for the reply, Dan!
I see your point.
Actually, I had guessed the reason behind your solution; it just seemed curious that you hadn't mentioned async bindings at all in your articles.
Now it’s clear,
Thanks
paul
The activation model I presented here works fine if there is only one "owner" that ever activates the model. But, as soon as multiple things want to activate the same model, you need the reference counting. As a simple example, what if you wanted to show the same model in two places in the UI?
If you're doing WPF development, you really need to check out Dan Crevier's series on DataModel-View-ViewModel.
I thought I should add a post with the full list of posts in the D-V-VM pattern. They are: DataModel-View-ViewModel
This is just a little thing, but when you call Activate() you immediately call your DataModel's VerifyCalledOnUIThread(); then you set the property using this.IsActive = true, which immediately calls VerifyCalledOnUIThread() again. Is this an oversight, or am I not getting something? It seems like these VerifyCalledOnUIThread calls should only be made when you are changing properties, unless your method directly manipulates the member data itself.
I agree it's overkill since the IsActive setter also checks, but I think it's nice to have it explicitly there. | https://blogs.msdn.microsoft.com/dancre/2006/08/25/dm-v-vm-part-6-revisiting-the-data-model/ | CC-MAIN-2019-18 | en | refinedweb
@UML(identifier="NameSpace", specification=ISO_19103) public interface Namespace extends Set<Name>
A namespace contains Name objects. Each name usually corresponds to the name of a type. The namespace URI of each name (getURI()) is the same as the URI of the Namespace object containing it (getURI()).
//create namespace for gml
Namespace namespace = new NamespaceImpl( "" );

//add some names
namespace.add( new NameImpl( "", "PointType" ) );
namespace.add( new NameImpl( "", "LineStringType" ) );
namespace.add( new NameImpl( "", "PolygonType" ) );
namespace.add( new NameImpl( "", "AbstractFeatureType" ) );
One allowance ISO 19103 makes is for a Namespace to be located inside another namespace. You may certainly do this by constructing a facility similar to Schema, in which namespaces may be looked up via a Name with the same URI as the one used here.
We are simply not dictating the lookup mechanism, or a backpointer to a containing namespace (note that the two solutions are in conflict, and we would like to offer applications the freedom to back this interface onto a facility such as JNDI used in their own application).
Methods inherited from interface java.util.Set: add, addAll, clear, contains, containsAll, equals, hashCode, isEmpty, iterator, remove, removeAll, retainAll, size, spliterator, toArray, toArray
Methods inherited from interface java.util.Collection: parallelStream, removeIf, stream
Methods inherited from interface java.lang.Iterable: forEach
String getURI()
This value can never be null.
Name lookup(String name)
Since all Name objects in the namespace share the same uri as the namespace itself, only the local part of the name is specified.
This method returns null if no such name exists.
Parameters: name - The local part of the name to look up.
Returns: The matching Name, or null. | http://docs.geotools.org/stable/javadocs/org/opengis/feature/type/Namespace.html | CC-MAIN-2019-18 | en | refinedweb
Hello all,
I am using sqlite to develop apps using Xamarin.Forms
I have a problem. Sometimes I get a "Busy" or "Database is locked" exception.
Here is my code:
public class Repository<T> where T : class
{
    SQLiteAsyncConnection database;

    public Repository()
    {
        database = DependencyService.Get<ISqlite>().GetConnection();
    }

    /// <summary>
    /// Get an entity by primary key
    /// </summary>
    /// <param name="id"></param>
    /// <returns></returns>
    public async Task<T> FindByKey(int id)
    {
        return await database.FindAsync<T>(id);
    }

    /// <summary>
    /// Finds the by key.
    /// </summary>
    /// <returns>The by key.</returns>
    /// <param name="id">Identifier.</param>
    public async Task<T> FindByKey(string id)
    {
        return await database.FindAsync<T>(id);
    }

    /// <summary>
    /// Select data
    /// </summary>
    /// <typeparam name="Tvalue"></typeparam>
    /// <param name="where">Expression for the where statement</param>
    /// <param name="orderBy">Expression for the order-by statement</param>
    /// <returns></returns>
    public async Task<List<T>> GetItem<Tvalue>(Expression<Func<T, bool>> where = null, Expression<Func<T, Tvalue>> orderBy = null)
    {
        var query = database.Table<T>();
        if (where != null)
        {
            query = query.Where(where);
        }
        if (orderBy != null)
        {
            query = query.OrderBy<Tvalue>(orderBy);
        }
        return await query.ToListAsync();
    }

    /// <summary>
    /// Insert an entity
    /// </summary>
    public async Task<int> Insert(T entity)
    {
        return await database.InsertAsync(entity);
    }

    /// <summary>
    /// Update an entity
    /// </summary>
    public async Task<int> Update(T entity)
    {
        return await database.UpdateAsync(entity);
    }

    /// <summary>
    /// Delete an entity
    /// </summary>
    /// <param name="id">Primary key</param>
    public async Task<int> Delete(object id)
    {
        var entity = await database.FindAsync<T>(id);
        if (entity != null)
        {
            return await database.ExecuteAsync("delete from " + typeof(T).Name + " where id = ?", id);
        }
        return -1;
    }
}
Can anyone help me? I will appreciate it.
Answers
at SQLite.Net.PreparedSqlLiteInsertCommand.ExecuteNonQuery (System.Object[] source) [0x0016b] in <filename unknown>:0
at SQLite.Net.SQLiteConnection.Insert (System.Object obj, System.String extra, System.Type objType) [0x000bc] in <filename unknown>:0
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw () [0x0000c] in /Users/builder/data/lanes/2689/962a0506/source/maccore/_build/Library/Frameworks/Xamarin.iOS.framework/Versions/git/src/mono/external/referencesource/mscorlib/system/runtime/compilerservices/TaskAwaiter.cs:201
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification (System.Threading.Tasks.Task task) [0x0002e] in /Users/builder/data/lanes/2689/962a0506/source/maccore/_build/Library/Frameworks/Xamarin.iOS.framework/Versions/git/src/mono/external/referencesource/mscorlib/system/runtime/compilerservices/TaskAwaiter.cs:170
at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd (System.Threading.Tasks.Task task) [0x0000b] in /Users/builder/data/lanes/2689/962a0506/source/maccore/_build/Library/Frameworks/Xamarin.iOS.framework/Versions/git/src/mono/external/referencesource/mscorlib/system/runtime/compilerservices/TaskAwaiter.cs:142
at System.Runtime.CompilerServices.TaskAwaiter1[TResult].GetResult () [0x00000] in
......
It's possible that your application is using a few async connections at the same time. The solution for that is to use a lock.
Thanks @RobertBosko. I think so too. I tried to use a lock to serialize the SQLite queries, but I haven't yet found a "lock" that can lock asynchronous SQLite calls.
Could you use something like this for the lock?
How are you initializing your connection? We had the same issue and eventually solved it by doing:
The often-described way of initializing the connection creates multiple connections.
I was already using the initialization code @Networkapp provided and also using the async lock that @AshleyJackson showed, and I just received this error. The weird thing is that it has worked great for months and months in production and dev. I just got this error for the first time, on a simulator.
I had the same issue on UWP , solved with the UWP version of this.
Make the connection static and lock it; otherwise each connection needs to open the database and refresh the connection.
Just use the below; it will help you. | https://forums.xamarin.com/discussion/comment/367502 | CC-MAIN-2019-18 | en | refinedweb
One item not mentioned anywhere is how to invoke an anonymous function "in-line".
To explain what I mean, let me use an actual example. Instead of creating code like this:
<?php
$wording = ($from == 0 && $to == 0
? '< 1 Year'
: ($to == 9999
? ($from - 1) . '+ Years'
: "$from - $to Years"));
?>
You might, instead, want to use an anonymous function right in the assignment line. However, doing:
<?php
$wording = function() use ($from, $to) { ... }
?>
would actually assign the Closure instance to $wording, not actually execute the function.
The best way to do this that I've found is:
<?php
$wording = call_user_func(function() use ($from, $to) {
if ($from == 0 && $to == 0) return '< 1 Year';
if ($to == 9999) return ($from - 1) . '+ Years';
return "$from - $to Years";
});
?>
It's a little longer than the first code sample, but I think it's more readable, and using this methodology opens up the ability to use more complex means of calculating a value without filling the variable scope with a bunch of temporary variables.
| https://grokbase.com/t/php/php-notes/135f8dmmkv/note-112192-added-to-functions-anonymous | CC-MAIN-2019-18 | en | refinedweb
In the previous post we had a look at the proposal of introducing resumable functions into the C++ standard to support writing asynchronous code modeled on the C# async/await pattern.
We saw that it is already possible to experiment with the future resumable and await keywords. In C# a generator (or iterator) is a method that contains at least one yield statement and that returns an IEnumerable<T>.
For example, the following C# code produces the sequence of Fibonacci numbers (1, 1, 2, 3, 5, 8, 13, 21, …):
IEnumerable<int> Fibonacci()
{
    int a = 0;
    int b = 1;
    while (true)
    {
        yield return b;
        int tmp = a + b;
        a = b;
        b = tmp;
    }
}
A generator acts in two phases. When it is called, it just sets up a resumable function, preparing for its execution, and returns some enumerator (in the case of C#, an IEnumerable<T>). But the actual execution is deferred to the moment when the values are actually enumerated and pulled from the sequence, for example with a foreach statement:
foreach (var num in Fibonacci())
{
    Console.WriteLine("{0}", num);
}
Note that the returned sequence is potentially infinite; its enumeration could go on indefinitely (if we ignore the integer overflows).
Of course there is nothing particularly special about doing the same thing in C++. While STL collections are usually eagerly evaluated (all their values are produced upfront) it is not difficult to write a collection that provides iterators that calculate their current value on the spot, on the base of some state or heuristic.
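For example, even without language support, a lazily evaluated Fibonacci sequence can be hand-written as a small iterator-like class that computes its current value on the spot. A minimal sketch (my own illustration, not code from the proposal):

#include <iostream>

// A hand-rolled lazy generator: each increment computes the next value on demand.
class fib_iterator
{
    long long a_ = 0;
    long long b_ = 1;

public:
    long long operator * () const { return b_; }

    fib_iterator& operator ++ ()
    {
        long long tmp = a_ + b_;
        a_ = b_;
        b_ = tmp;
        return *this;
    }
};

int main()
{
    fib_iterator it;
    for (int i = 0; i < 10; ++i, ++it)
    {
        std::cout << *it << ' ';   // prints: 1 1 2 3 5 8 13 21 34 55
    }
}

The difference is purely one of convenience: the state (a_, b_) must be managed by hand across calls, which is exactly the bookkeeping that a yield statement would remove.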
What gives a particular expressive power to generators is the ability to pause the execution each time a new value is generated, yielding control back to the caller, and then to resume the execution exactly from the point where it had suspended. A generator is therefore a special form of coroutine, limited in the sense that it may only yield back to its caller.
The yield statement hides all the complexity inherent in the suspension and resumption of the function; the developer can express the logic of the sequence plainly, without having to setup callbacks or continuations.
From resumable functions to generators (and beyond)
It would be nice to bring the expressive power of generators to our good old C++, and naturally there is already some work going on for this. In this proposal Gustaffson et al. explain how generator functions could be supported by the language as an extension of resumable functions, making it possible to write code like:
sequence<int> fibonacci() resumable
{
    int a = 0;
    int b = 1;
    while (true)
    {
        yield b;
        int tmp = a + b;
        a = b;
        b = tmp;
    }
}
Here, the proposal introduces two new concepts, the type sequence<T> and the yield keyword.
– sequence<T> is a (STL-like) collection that only supports iteration and only provides an input iterator.
– The yield statement suspends the execution of the function and returns one item from the sequence to the caller.
In C# terms, sequence<T> and its iterator are respectively the equivalent of an IEnumerable<T> and IEnumerator<T>. But while the C# generators are implemented with a state machine, in C++ the suspension and resumption would be implemented, as we’ll see, with stackful coroutines.
Once we had a lazily evaluated sequence<T>, we could write client code to pull a sequence of values, which would be generated one at a time, and only when requested:
sequence<int> fibs = fibonacci();
for (auto it = fibs.begin(); it != fibs.end(); ++it)
{
    std::cout << *it << std::endl;
}
In C++11 we could also simplify the iteration with a range-based for loop:
sequence<int> fibs = fibonacci();
for (auto n : fibs)
{
    std::cout << n << std::endl;
}
More interestingly, we could define other resumable functions that manipulate the elements of a sequence, lazily producing another sequence. This example, taken from Gustaffson’s proposal, shows a lazy version of std::transform():
template<typename Iter>
sequence<int> lazy_tform(Iter beg, Iter end, std::function<int(int)> func) resumable
{
    for (auto iter = beg; iter != end; ++iter)
    {
        yield func(*iter);
    }
}
Moving further with this idea, we could pull another page out of the C# playbook and enrich the sequence class with a whole set of composable, deferred query operators, a la LINQ:
template <typename T>
class sequence
{
public:
    template <typename Predicate>
    bool all(Predicate predicate);
    [...]
    static sequence<int> range(int from, int to);

    template <typename TResult>
    sequence<TResult> select(std::function<TResult(T)> selector);

    sequence<T> take(int count);
    sequence<T> where(std::function<bool(T)> predicate);
};
Lazy sequences
Certainly, resumable generators would be a very interesting addition to the standard. But how would they work? We saw that the Visual Studio CTP comes with a first implementation of resumable functions built over the PPL task library, but in this case the CTP is of little help, since it does not support generator functions yet. Maybe they will be part of a future release… but why wait? We can implement them ourselves! 🙂
In the rest of this post I’ll describe a possible simple implementation of C++ lazy generators.
Let’s begin with the lazy sequence<T> class. This is a STL-like collection which only needs to support input iterators, with a begin() and an end() method.
Every instance of this class must somehow be initialized with a functor that represents the generator function that will generate the values of the sequence. We’ll see later what can be a good prototype for it.
As we said, the evaluation of this function must be deferred to the moment when the values are retrieved, one by one, via the iterator. All the logic for executing, suspending and resuming the generator will actually be implemented by the iterator class, which therefore needs to have a reference to the same functor.
So, our first cut at the sequence class could be something like this:
template<typename T>
class sequence_iterator
{
    // TO DO
};

template<typename T>
class sequence
{
public:
    typedef sequence_iterator<T> iterator;
    typedef ??? functor;

    sequence(functor func) : _func(func) { }

    iterator begin() { return iterator(_func); }
    iterator end() { return iterator(); }

private:
    functor _func;
};
Step by step
The sequence<T> class should not do much more than create iterators. The interesting code is all in the sequence iterator, which is the object that has the ability to actually generate the values.
Let’s go back to our Fibonacci generator and write some code that iterates through it:
sequence<int> fibonacci() resumable
{
    int a = 0;
    int b = 1;
    while (true)
    {
        yield b;
        int tmp = a + b;
        a = b;
        b = tmp;
    }
}

auto fibs = fibonacci();
for (auto n : fibs)
{
    std::cout << n << std::endl;
}
How should this code really work? Let’s follow its execution step by step.
- First, we call the function fibonacci(), which returns an object of type sequence<int>. Note that at this point the execution of the function has not even started yet. We just need to return a sequence object somehow associated to the body of the generator, which will be executed later.
- The returned sequence is copied into the variable fibs. We need to define what does it mean to copy a sequence: should we allow copy operations? Should we enforce move semantic?
- Given the sequence fibs, we call the begin() method, which returns an iterator "pointing" to the first element of the sequence. The resumable function should start running the moment the iterator is created and execute until a first value is yielded (or until it completes, in the case of empty sequences).
- When the end() method is called, the sequence returns an iterator that represents the fact that the generator has completed and there are no more values to enumerate.
- The operator == () should behave as expected, returning true if both iterators are at the same position of the same sequence, or both pointing at the end of the sequence.
- The operator *() will return the value generated by the last yield statement (i.e., the current value of the sequence).
- At each step of the iteration, when operator ++() is called, the execution of the generator function will be resumed, and will continue until either the next yield statement updates the current value or until the function returns.
Putting all together, we can begin to write some code for the sequence_iterator class:
template<typename T>
class sequence_iterator
{
public:
    typedef ??? functor;

    sequence_iterator(functor func)
    {
        // initializes the iterator from the generator functor, and executes the functor
        // until it terminates or yields.
    }

    sequence_iterator()
    {
        // must represent the end of the sequence
    }

    bool operator == (const sequence_iterator& rhs)
    {
        // true if the iterators are at the same position.
    }

    bool operator != (const sequence_iterator& rhs)
    {
        return !(*this == rhs);
    }

    const T& operator * () const
    {
        return _currentVal;
    }

    sequence_iterator operator ++ ()
    {
        // resume execution
        return *this;
    }

private:
    T _currentVal;
};
The behavior of the iterator is fairly straightforward, but there are a few interesting things to note. The first is that evidently a generator function does not do what it says: looking at the code of the fibonacci() function, there is no statement that actually returns a sequence<T>; what the code does is simply to yield the sequence elements, one at a time.
So who creates the sequence<T> object? Clearly, the implementation of generators cannot be purely library-based. We can put in a library the code for the sequence<T> and for its iterators, we can also put in a library the platform-dependent code that manages the suspension and resumptions of generators. But it will be up to the compiler to generate the appropriate code that creates a sequence<T> object for a generator function. More on this later.
Also, we should note that there is no asynchrony or concurrency involved in this process. The function could resume in the same thread where it suspended.
Generators as coroutines
The next step is to implement the logic to seamlessly pause and resume a generator. A generator can be seen as an asymmetric coroutine, where the asymmetry lies in the fact that the control can be only yielded back to the caller, contrary to the case of symmetric coroutines that can yield control to any other coroutine at any time.
Unfortunately coroutines cannot be implemented in a platform-independent way. In Windows we can use Win32 Fibers (as I described in this very old post) while on POSIX, you can use the makecontext()/swapcontext() API. There is also a very nice Boost library that we could leverage for this purpose.
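To make the idea concrete, here is a minimal, self-contained sketch of this kind of control transfer using the POSIX makecontext()/swapcontext() API (my own example, separate from the generator classes developed below; on Windows the same role is played by fibers, as shown in the appendix):

#include <ucontext.h>
#include <iostream>

static ucontext_t main_ctx, gen_ctx;

// Runs on its own stack and yields control back to main once before finishing.
static void generator_body()
{
    std::cout << "generator: first value" << std::endl;
    swapcontext(&gen_ctx, &main_ctx);   // "yield": suspend and return to the caller
    std::cout << "generator: second value" << std::endl;
    // falling off the end resumes uc_link (main_ctx)
}

int main()
{
    static char stack[64 * 1024];       // the coroutine's side-stack

    getcontext(&gen_ctx);
    gen_ctx.uc_stack.ss_sp = stack;
    gen_ctx.uc_stack.ss_size = sizeof(stack);
    gen_ctx.uc_link = &main_ctx;        // where to go when the function returns
    makecontext(&gen_ctx, generator_body, 0);

    swapcontext(&main_ctx, &gen_ctx);   // "resume": run until the first yield
    std::cout << "main: got first value" << std::endl;
    swapcontext(&main_ctx, &gen_ctx);   // resume again, until completion
    std::cout << "main: generator finished" << std::endl;
}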
But let’s ignore the problems of portability, for the moment, and assume that we have a reliable way to implement coroutines. How should we use them in an iterator? We can encapsulate the non-portable code in a class __resumable_func that exposes this interface:
template <typename TRet>
class __resumable_func
{
    typedef std::function<void(__resumable_func&)> TFunc;

public:
    __resumable_func(TFunc func);

    void yieldReturn(const TRet& value);
    void yieldBreak();
    void resume();

    const TRet& getCurrent() const;
    bool isEos() const;
};
The class is templatized on the type of the values produced by the generator and provides methods to yield one value (yieldReturn()), to retrieve the current value (i.e., the latest value yielded) and to resume the execution and move to the next value.
It should also provide methods to terminate the enumeration (yieldBreak()) and to tell if we have arrived at the end of the sequence (isEos()).
The function object passed to the constructor represents the generator function itself that we want to run. More precisely, it is the function that will be executed as a coroutine, and its prototype tells us that this function, in order to be able to suspend execution, needs a reference to the __resumable_func object that is running the coroutine itself.
In fact the compiler should transform the code of a generator into the (almost identical) code of a lambda that uses the __resumable_func object to yield control and emit a new value.
For example, going back again to our fibonacci() generator, we could expect the C++ compiler to transform the code we wrote:
sequence<int> fibonacci() resumable
{
    int a = 0;
    int b = 1;
    while (true)
    {
        yield b;
        int tmp = a + b;
        a = b;
        b = tmp;
    }
}
into this lambda expression:
auto __fibonacci_func([](__resumable_func<int>& resFn)
{
    int a = 0;
    int b = 1;
    while (true)
    {
        resFn.yieldReturn(b);
        int tmp = a + b;
        a = b;
        b = tmp;
    }
});
where the yield statement has been transformed into a call to __resumable_func::yieldReturn().
Likewise, client code that invokes this function, like:
sequence<int> fibs = fibonacci();
should be transformed by the compiler into a call to the sequence constructor, passing this lambda as argument:
sequence<int> fibs(__fibonacci_func);
Sequence iterators
We can ignore the details of the implementation of __resumable_func<T> coroutines for the moment and, assuming that we have them working, we can now complete the implementation of the sequence_iterator class:
template <typename T>
class sequence_iterator
{
    std::unique_ptr<__resumable_func<T>> _resumableFunc;

    sequence_iterator(const sequence_iterator& rhs) = delete;
    sequence_iterator& operator = (const sequence_iterator& rhs) = delete;
    sequence_iterator& operator = (sequence_iterator&& rhs) = delete;

public:
    sequence_iterator() : _resumableFunc(nullptr)
    {
    }

    sequence_iterator(const std::function<void(__resumable_func<T>&)> func) :
        _resumableFunc(new __resumable_func<T>(func))
    {
    }

    sequence_iterator(sequence_iterator&& rhs) :
        _resumableFunc(std::move(rhs._resumableFunc))
    {
    }

    sequence_iterator& operator++()
    {
        _ASSERT(_resumableFunc != nullptr);
        _resumableFunc->resume();
        return *this;
    }

    bool operator==(const sequence_iterator& _Right) const
    {
        if (_resumableFunc == _Right._resumableFunc)
        {
            return true;
        }
        if (_resumableFunc == nullptr)
        {
            return _Right._resumableFunc->isEos();
        }
        if (_Right._resumableFunc == nullptr)
        {
            return _resumableFunc->isEos();
        }
        return (_resumableFunc->isEos() == _Right._resumableFunc->isEos());
    }

    bool operator!=(const sequence_iterator& _Right) const
    {
        return (!(*this == _Right));
    }

    const T& operator*() const
    {
        _ASSERT(_resumableFunc != nullptr);
        return (_resumableFunc->getCurrent());
    }
};
The logic here is very simple. Internally, a sequence_iterator contains a __resumable_func object, to run the generator as a coroutine. The default constructor creates an iterator that points at the end of the sequence. Another constructor accepts as argument the generator function that we want to run and starts executing it in a coroutine; the function runs until it either yields a value or terminates, giving control back to the constructor. In this way we create an iterator that points at the beginning of the sequence.
If a value was yielded, we can call the dereference-operator to retrieve it from the __resumable_func object. If the function terminated, instead, the iterator will already point at the end of the sequence. The equality operator takes care of equating an iterator whose function has terminated to the end()-iterators created with the default constructor. Incrementing the iterator means resuming the execution of the coroutine, from the point it had suspended, giving it the opportunity to produce another value.
Note that, since the class owns the coroutine object, we disable copy constructors and assignment operators and only declare the move constructor, to pass the ownership of the coroutine.
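In practice this means that an iterator can be handed off, but never duplicated. A short usage sketch, assuming the classes above:

auto fibs = fibonacci();
auto it = fibs.begin();
// auto it2 = it;            // error: the copy constructor is deleted
auto it2 = std::move(it);    // OK: ownership of the coroutine moves to it2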
Composable sequence operators
Almost there! We have completed our design, but there are still a few details to work out. The most interesting are related to the lifetime and copyability of sequence objects. What should happen with code like this?
sequence<int> fibs1 = fibonacci();
sequence<int> fibs2 = fibs1;

for (auto it1 : fibs1)
{
    for (auto it2 : fibs2)
    {
        ...
    }
}
If we look at how we defined class sequence<T>, apparently there is no reason why we should prevent the copy of sequence objects. In fact, sequence<T> is an immutable class. Its only data member is the std::function object that wraps the functor we want to run.
However, even though we don’t modify this functor object, we do execute it. This object could have been constructed from a lambda expression that captured some variables, either by value or by reference. Since one of the captured variables could be a reference to the same sequence<T> object that created that iterator, we need to ensure that the sequence object will always outlive its functors, and allowing copy-semantics suddenly becomes complicated.
This brings us to LINQ and to the composability of sequences. Anyone who has worked with C# knows that what makes enumerable types truly powerful and elegant is the ability to apply chains of simple operators that transform the elements of a sequence into another sequence. LINQ to Objects is built on the concept of a data pipeline: we start with a data source which implements IEnumerable<T>, and we can compose together a number of query operators, defined as extension methods to the Enumerable class.
For example, this very, very useless query in C# generates the sequence of all square roots of odd integers between 0 and 10:
var result = Enumerable.Range(0, 10)
    .Where(n => n % 2 == 1)
    .Select(n => Math.Sqrt(n));
Similarly, to make the C++ sequence<T> type really powerful we should make it composable and enrich it with a good range of LINQ-like operators to generate, filter, aggregate, group, sort and generally transform sequences.
These are just a few of the operators that we could define in the sequence<T> class:
template <typename T>
class sequence
{
public:
    [...]
    static sequence<int> range(int from, int to);

    template <typename TResult>
    sequence<TResult> select(std::function<TResult(T)> selector);

    sequence<T> where(std::function<bool(T)> predicate);
};
to finally be able to write the same (useless) query:
sequence<double> result = sequence<int>::range(0, 10)
    .where([](int n) { return n % 2 == 1; })
    .select([](int n) { return sqrt(n); });
Let’s try to implement select(), as an experiment. It is conceptually identical to the lazy_tform() method we saw before, but now defined in the sequence class. A very naïve implementation could be as follows:
// Projects each element of a sequence into a new form. (NOT WORKING!)
template <typename TResult>
sequence<TResult> select(std::function<TResult(T)> selector)
{
    auto func = [this, selector](__resumable_func<TResult>& rf)
    {
        for (T t : *this)
        {
            auto val = selector(t);
            rf.yieldReturn(val);
        }
    };
    return sequence<TResult>(func);
}
It should be now clear how it works: first we create a generator functor, in this case with a lambda expression, and then we return a new sequence constructed on this functor. The point is that the lambda needs to capture the “parent” sequence object to be able to iterate through the values of its sequence.
Unfortunately this code is very brittle. What happens when we compose more operators, using the result of one as the input of the next one in the chain? When we write:
sequence<double> result = sequence<int>::range(0, 10)
    .where([](int n) { return n % 2 == 1; })
    .select([](int n) { return sqrt(n); });
there are (at least) three temporary objects created here, of type sequence<T>, and their lifetime is tied to that of the expression, so they are deleted before the whole statement completes.
The situation is as follows: the functor of each sequence in the chain is a lambda that has captured a pointer to the previous sequence object. The problem is in the deferred execution: nothing really happens until we enumerate the resulting sequence through its iterator, but as soon as we do so each sequence starts pulling values from its predecessor, which has already been deleted.
Temporary objects and deferred execution really do not get along nicely at all. On one hand in order to compose sequences we have to deal with temporaries that can be captured in a closure and then deleted long before being used. On the other hand, the sequence iterators, and their underlying coroutines, should not be copied and can outlive the instance of the sequence that generated them.
We can enforce move semantics on the sequence<T> class, but then what do we capture in a generator like select() that acts on a sequence?
As often happens, a possible solution requires adding another level of indirection. We introduce a new class, sequence_impl<T>, which represents a particular application of a generator function closure:
template <typename T>
class sequence_impl
{
public:
    typedef std::function<void(__resumable_func<T>&)> functor;
    typedef sequence_iterator<T> iterator;

private:
    const functor _func;

    sequence_impl(const sequence_impl& rhs) = delete;
    sequence_impl(sequence_impl&& rhs) = delete;
    sequence_impl& operator = (const sequence_impl& rhs) = delete;
    sequence_impl& operator = (sequence_impl&& rhs) = delete;

public:
    sequence_impl(const functor func) : _func(std::move(func)) {}

    iterator begin() const
    {
        // return iterator for beginning of sequence
        return iterator(_func);
    }

    iterator end() const
    {
        // return iterator for end of sequence
        return iterator();
    }
};
A sequence_impl<T> is neither copyable nor movable and only provides methods to iterate through it.
The sequence<T> class now keeps only a shared pointer to the unique instance of a sequence_impl<T> that represents that particular application of the generator function. Now we can support chained sequences by allowing move semantics on the sequence<T> class.
template <typename T>
class sequence
{
    std::shared_ptr<sequence_impl<T>> _impl;

    sequence(const sequence& rhs) = delete;
    sequence& operator = (const sequence& rhs) = delete;

public:
    typedef typename sequence_impl<T>::iterator iterator;
    typedef typename sequence_impl<T>::functor functor;

    sequence(functor func) :
        _impl(std::make_shared<sequence_impl<T>>(func))
    {
    }

    sequence(sequence&& rhs)
    {
        _impl = std::move(rhs._impl);
    }

    sequence& operator = (sequence&& rhs)
    {
        _impl = std::move(rhs._impl);
        return *this;
    }

    iterator begin() const { return _impl->begin(); }
    iterator end() const { return _impl->end(); }
};
To summarize the relationships between the classes involved in the implementation of lazy sequences: each sequence<T> shares ownership of a single sequence_impl<T>, which owns the generator functor, while each sequence_iterator<T> owns the coroutine that runs one application of that functor.
LINQ operators
Ok, now we are really almost done. The only thing left to do, if we want, is to write a few sequence-manipulation operators, modeled on the example of LINQ-to-Objects. I'll list just a few, as examples:
// Determines whether all elements of a sequence satisfy a condition.
bool all(std::function<bool(T)> predicate)
{
    if (nullptr == predicate) { throw std::exception(); }

    for (auto t : *_impl)
    {
        if (!predicate(t)) { return false; }
    }
    return true;
}

// Returns an empty sequence.
static sequence<T> empty()
{
    auto fn = [](__resumable_func<T>& rf)
    {
        rf.yieldBreak();
    };
    return sequence<T>(fn);
}

// Generates a sequence of integral numbers within a specified range [from, to).
static sequence<int> range(int from, int to)
{
    if (to < from) { throw std::exception(); }

    auto fn = [from, to](__resumable_func<int>& rf)
    {
        for (int i = from; i < to; i++)
        {
            rf.yieldReturn(i);
        }
    };
    return sequence<int>(fn);
}

// Projects each element of a sequence into a new form.
template <typename TResult>
sequence<TResult> select(std::function<TResult(T)> selector)
{
    if (nullptr == selector) { throw std::exception(); }

    std::shared_ptr<sequence_impl<T>> impl = _impl;
    auto fn = [impl, selector](__resumable_func<TResult>& rf)
    {
        for (T t : *impl)
        {
            auto val = selector(t);
            rf.yieldReturn(val);
        }
    };
    return sequence<TResult>(fn);
}

// Returns a specified number of contiguous elements from the start of a sequence.
sequence<T> take(int count)
{
    if (count < 0) { throw std::exception(); }

    std::shared_ptr<sequence_impl<T>> impl = _impl;
    auto fn = [impl, count](__resumable_func<T>& rf)
    {
        auto it = impl->begin();
        for (int i = 0; i < count && it != impl->end(); i++, ++it)
        {
            rf.yieldReturn(*it);
        }
    };
    return sequence<T>(fn);
}

// Filters a sequence of values based on a predicate.
sequence<T> where(std::function<bool(T)> predicate)
{
    if (nullptr == predicate) { throw std::exception(); }

    std::shared_ptr<sequence_impl<T>> impl = _impl;
    auto fn = [impl, predicate](__resumable_func<T>& rf)
    {
        for (auto item : *impl)
        {
            if (predicate(item))
            {
                rf.yieldReturn(item);
            }
        }
    };
    return sequence<T>(fn);
}
We could write many more, but I think these should convey the idea.
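As one more illustration, a skip() operator, which bypasses a given number of elements and yields the rest, can be written in exactly the same style (my own sketch, following the conventions of the operators above):

// Bypasses a specified number of elements in a sequence and yields the remaining ones.
sequence<T> skip(int count)
{
    if (count < 0) { throw std::exception(); }

    std::shared_ptr<sequence_impl<T>> impl = _impl;
    auto fn = [impl, count](__resumable_func<T>& rf)
    {
        int i = 0;
        for (auto item : *impl)
        {
            if (i++ >= count)
            {
                rf.yieldReturn(item);
            }
        }
    };
    return sequence<T>(fn);
}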
Example: a prime numbers generator
As a final example, the following query lazily provides the sequence of prime numbers (smaller than INT_MAX), using a very simple, brute-force algorithm. It is definitely not the fastest generator of prime numbers, it’s maybe a little cryptic, but it’s undoubtedly quite compact!
sequence<int> primes(int max)
{
    return sequence<int>::range(2, max)
        .where([](int i)
        {
            return sequence<int>::range(2, (int)sqrt(i) + 2)
                .all([i](int j) { return (i % j) != 0; });
        });
}
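Enumerating the result works like with any other sequence; for example (a usage sketch based on the classes above):

for (int p : primes(100))
{
    std::cout << p << ' ';   // prints: 2 3 5 7 11 ... 97
}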
Conclusion
In this article I rambled about generators in C++, describing a new sequence<T> type that models lazy enumerators and that could be implemented as an extension of resumable functions, as specified in N3858. I have described a possible implementation based on coroutines and introduced the possibility of extending the sequence class with a set of composable operators.
If you are curious and want to play with my sample implementation, you can find a copy of the sources here. Nothing too fancy, just the code that I showed in this post.
Appendix – Coroutines in Win32
Having completed my long ramble on the “platform independent” aspects of C++ generators, it’s time to go back to the point we left open: how to implement, on Windows, the coroutines that we encapsulated in the __resumable_func class?
We saw that the Visual Studio CTP comes with a first implementation of resumable functions, built over the PPL task library and using Win32 fibers as side-stacks. Even though the CTP does not support generator functions yet, my first idea was to just extend the <pplawait.h> library to implement them. However, the code there is specialized for resumable functions that suspend awaiting some task, and it turns out that we can reuse only part of their code because, even if we are still dealing with resumable functions, the logic of await and the logic of yield are quite different.
In the case of await, functions can be suspended (possibly multiple times) waiting for some other task to complete. This means switching to a fiber associated to the task after having set up a continuation that will be invoked after the task completes, to switch the control back to the current fiber. When the function terminates, the control goes back to the calling fiber, returning the single return value of the async resumable function.
In the case of yield, we never suspend to call external async methods. Instead, we can suspend multiple times going back to the calling fiber, each time by returning one of the values that compose the sequence. So, while the implementation of the await keyword needs to leverage the support of PPL tasks, the concept of generator functions does not imply any concurrency or multithreading and using the PPL is not necessary.
Actually, there are ways to implement yield with await, but I could not find a simple way to use the new __await keyword without spawning new threads (maybe this would be possible with a custom PPL scheduler?).
So I chose to write the code for coroutines myself; the idea here is not very different from the one I described in a very old post (it looks like I keep rewriting the same post :-)) but now I can take advantage of the fiber-based code from the CTP’s <pplawait.h> library.
Win32 Fibers
Let’s delve into the details of the implementation. Before all, let me summarize once again the Win32 Fiber API.
Fibers were added to Windows NT to support cooperative multitasking. They can be thought of as lightweight threads that must be manually scheduled by the application. In other words, fibers are a perfect tool to implement coroutine sequencing.
When a fiber is created, with CreateFiber, it is passed a fiber-start function. The OS then assigns it a separate stack and sets up execution to begin at this fiber-start function. To schedule a fiber we need to “switch” to it manually with SwitchToFiber and once it is running, a fiber can then suspend itself only by explicitly yielding execution to another fiber, also by calling SwitchToFiber.
SwitchToFiber only works from a fiber to another, so the first thing to do is to convert the current thread into a fiber, with ConvertThreadToFiber. Finally, when we have done using fibers, we can convert the main fiber back to a normal thread with ConvertFiberToThread.
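As a quick illustration of the API, here is a minimal, self-contained ping-pong between a thread and a fiber (my own sketch, independent of the generator classes that follow):

#include <windows.h>
#include <iostream>

static LPVOID g_mainFiber;

// Fiber-start function: runs on its own stack until it switches away.
static VOID CALLBACK FiberProc(PVOID /*lpParameter*/)
{
    std::cout << "fiber: step 1" << std::endl;
    ::SwitchToFiber(g_mainFiber);        // suspend, back to the main fiber
    std::cout << "fiber: step 2" << std::endl;
    ::SwitchToFiber(g_mainFiber);        // a fiber must never simply return
}

int main()
{
    g_mainFiber = ::ConvertThreadToFiber(nullptr);
    LPVOID fiber = ::CreateFiber(0, FiberProc, nullptr);

    ::SwitchToFiber(fiber);              // resume: prints "step 1"
    std::cout << "main: between steps" << std::endl;
    ::SwitchToFiber(fiber);              // resume: prints "step 2"

    ::DeleteFiber(fiber);
    ::ConvertFiberToThread();
}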
The __resumable_func class
We want to put all the logic to handle the suspension and resumption of a function in the __resumable_func<T> class, as described before.
In our case we don’t need symmetric coroutines; we just need the ability of returning control to the calling fiber. So our class will contain a handle to the “caller” fiber and a handle to the fiber we want to run.
#include <functional>
#include <pplawait.h>

template <typename TRet>
class __resumable_func : __resumable_func_base
{
    typedef std::function<void(__resumable_func&)> TFunc;

    TFunc _func;
    TRet _currentValue;
    LPVOID _pFiber;
    LPVOID _pCallerFiber;
    Concurrency::details::__resumable_func_fiber_data* _pFuncData;

public:
    __resumable_func(TFunc func);
    ~__resumable_func();

    void yieldReturn(TRet value);
    void yieldBreak();
    void resume();

    const TRet& getCurrent() const { return _currentValue; }
    bool isEos() const { return _pFiber == nullptr; }

private:
    static void yield();
    static VOID CALLBACK ResumableFuncFiberProc(PVOID lpParameter);
};
The constructor stores a copy of the generator function to run, creates a new fiber object specifying ResumableFuncFiberProc as the function to execute, and immediately switches the execution to this fiber:
__resumable_func(TFunc func) :
    _currentValue(TRet()),
    _pFiber(nullptr),
    _func(func),
    _pFuncData(nullptr)
{
    // Convert the current thread to a fiber. This is needed because the thread needs to "be"
    // a fiber in order to be able to switch to another fiber.
    ConvertCurrentThreadToFiber();
    _pCallerFiber = GetCurrentFiber();

    // Create a new fiber (or re-use an existing one from the pool)
    _pFiber = Concurrency::details::POOL CreateFiberEx(Concurrency::details::fiberPool.commitSize,
        Concurrency::details::fiberPool.allocSize, FIBER_FLAG_FLOAT_SWITCH,
        &ResumableFuncFiberProc, this);
    if (!_pFiber)
    {
        throw std::bad_alloc();
    }

    // Switch to the newly created fiber. When this "returns" the functor has either returned,
    // or issued an 'yield' statement.
    ::SwitchToFiber(_pFiber);

    _pFuncData->suspending = false;
    _pFuncData->Release();
}
The fiber will start from the fiber procedure, which has the only task of running the generator function in the context of the fiber:
// Entry proc for the Resumable Function Fiber.
static VOID CALLBACK ResumableFuncFiberProc(PVOID lpParameter)
{
    LPVOID threadFiber;

    // This function does not formally return, due to the SwitchToFiber call at the bottom.
    // This scope block is needed for the destructors of the locals in this block to fire
    // before we do the SwitchToFiber.
    {
        Concurrency::details::__resumable_func_fiber_data funcDataOnFiberStack;
        __resumable_func* pThis = (__resumable_func*)lpParameter;

        // The callee needs to setup some more stuff after we return (which would be either on
        // yield or an ordinary return). Hence the callee needs the pointer to the func_data
        // on our stack. This is not unsafe since the callee has a refcount on this structure
        // which means the fiber will continue to live.
        pThis->_pFuncData = &funcDataOnFiberStack;

        Concurrency::details::POOL SetFiberData(&funcDataOnFiberStack);

        funcDataOnFiberStack.threadFiber = pThis->_pCallerFiber;
        funcDataOnFiberStack.resumableFuncFiber = GetCurrentFiber();

        // Finally calls the function in the context of the fiber. The execution can be
        // suspended by calling yield.
        pThis->_func(*pThis);

        // Here the function has completed. We set return to true meaning this is the
        // final 'real' return and not one of the 'yield' returns.
        funcDataOnFiberStack.returned = true;
        pThis->_pFiber = nullptr;
        threadFiber = funcDataOnFiberStack.threadFiber;
    }

    // Return to the calling fiber.
    ::SwitchToFiber(threadFiber);

    // On a normal fiber this function won't exit after this point. However, if the fiber is
    // in a fiber-pool and re-used we can get control back. So just exit this function, which
    // will cause the fiber pool to spin around and re-enter.
}
There are two ways to suspend the execution of the generator function running in the fiber and to yield control back to the caller. The first is to yield a value, which will be stored in a data member:
void yieldReturn(TRet value)
{
    _currentValue = value;
    yield();
}
The second is to immediately terminate the sequence, for example with a return statement or reaching the end of the function. The compiler should translate a return into a call to the yieldBreak method:
void yieldBreak()
{
    _pFiber = nullptr;
    yield();
}
To yield the control we just need to switch back to the calling fiber:
static void yield()
{
    _ASSERT(IsThreadAFiber());
    Concurrency::details::__resumable_func_fiber_data* funcDataOnFiberStack =
        Concurrency::details::__resumable_func_fiber_data::GetCurrentResumableFuncData();

    // Add-ref's the fiber. Even though there can only be one thread active in the fiber
    // context, there can be multiple threads accessing the fiber data.
    funcDataOnFiberStack->AddRef();

    _ASSERT(funcDataOnFiberStack);
    funcDataOnFiberStack->verify();

    // Mark as busy suspending. We cannot run the code in the 'then' statement
    // concurrently with the await doing the setting up of the fiber.
    _ASSERT(!funcDataOnFiberStack->suspending);
    funcDataOnFiberStack->suspending = true;

    // Make note of the thread that we're being called from (Note that we'll always resume
    // on the same thread).
    funcDataOnFiberStack->awaitingThreadId = GetCurrentThreadId();

    _ASSERT(funcDataOnFiberStack->resumableFuncFiber == GetCurrentFiber());

    // Return to the calling fiber.
    ::SwitchToFiber(funcDataOnFiberStack->threadFiber);
}
Once we have suspended, incrementing the iterator will resume the execution by calling resume, which will switch to this object’s fiber:
void resume()
{
    _ASSERT(IsThreadAFiber());
    _ASSERT(_pFiber != nullptr);
    _ASSERT(_pFuncData != nullptr);
    _ASSERT(!_pFuncData->suspending);
    _ASSERT(_pFuncData->awaitingThreadId == GetCurrentThreadId());

    // Switch to the fiber. When this "returns" the functor has either returned, or issued
    // an 'yield' statement.
    ::SwitchToFiber(_pFiber);

    _ASSERT(_pFuncData->returned || _pFuncData->suspending);
    _pFuncData->suspending = false;
    if (_pFuncData->returned)
    {
        _pFiber = nullptr;
    }
    _pFuncData->Release();
}
The destructor just needs to convert the current fiber back to a normal thread, but only when there are no more fibers running in the thread. For this reason we need to keep a per-thread fiber count, which is incremented every time we create a __resumable_func and decremented every time we destroy it.
~__resumable_func()
{
    if (_pCallerFiber != nullptr)
    {
        ConvertFiberBackToThread();
    }
}

class __resumable_func_base
{
    __declspec(thread) static int ts_count;

protected:
    // Convert the thread to a fiber.
    static void ConvertCurrentThreadToFiber()
    {
        if (!IsThreadAFiber())
        {
            // Convert the thread to a fiber. Use FIBER_FLAG_FLOAT_SWITCH on x86.
            LPVOID threadFiber = ConvertThreadToFiberEx(nullptr, FIBER_FLAG_FLOAT_SWITCH);
            if (threadFiber == NULL)
            {
                throw std::bad_alloc();
            }
            ts_count = 1;
        }
        else
        {
            ts_count++;
        }
    }

    // Convert the fiber back to a thread.
    static void ConvertFiberBackToThread()
    {
        if (--ts_count == 0)
        {
            if (ConvertFiberToThread() == FALSE)
            {
                throw std::bad_alloc();
            }
        }
    }
};

__declspec(thread) int __resumable_func_base::ts_count = 0;
And this is all we need to have resumable generators in C++, on Windows. The complete source code can be found here. | https://paoloseverini.wordpress.com/tag/visual-studio-ctp/ | CC-MAIN-2019-18 | en | refinedweb |
Definitions
The Definitions section of the WSDL defines the namespaces used throughout the WSDL and the name of the service, as shown in the following snippet of the Product Advertising API WSDL.
<?xml version="1.0" encoding="UTF-8" ?>
<definitions xmlns=""
             xmlns:soap=""
             xmlns:xs=""
             targetNamespace="">
This example shows that the:
Default namespace is xmlns=""
SOAP namespace used is xmlns:soap=""
Schema used is xmlns:xs=""
Product Advertising API WSDL namespace is ""
The date at the end is the version number. It is the date the WSDL became public.
TargetNamespace is ""
The TargetNamespace is an XML schema convention that enables the WSDL to refer to itself (as the target). The TargetNamespace value is the Product Advertising API WSDL namespace. | https://docs.aws.amazon.com/AWSECommerceService/latest/DG/Definitions.html | CC-MAIN-2019-18 | en | refinedweb
This shifts the characters but doesn't care if the new character is not a letter. This is good if you want to use punctuation or special characters, but it won't necessarily give you letters only as an output. For example, "z" 3-shifts to "}".
def ceasar(text, shift):
    output = ""
    for c in text:
        output += chr(ord(c) + shift)
    return output
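By contrast, a variant that wraps around within the alphabet (so "z" 3-shifts to "c" instead of "}") has to treat letters specially, while letting other characters pass through. A sketch of that idea, written here in C++ for illustration (not part of the original snippet; assumes a non-negative shift):

#include <cctype>
#include <iostream>
#include <string>

// Caesar shift that wraps within the alphabet and leaves other characters alone.
std::string caesar_wrap(const std::string& text, int shift)
{
    std::string output;
    for (unsigned char c : text)
    {
        if (std::isupper(c))
        {
            output += char('A' + (c - 'A' + shift) % 26);
        }
        else if (std::islower(c))
        {
            output += char('a' + (c - 'a' + shift) % 26);
        }
        else
        {
            output += char(c);   // punctuation and digits pass through unchanged
        }
    }
    return output;
}

int main()
{
    std::cout << caesar_wrap("xyz!", 3) << std::endl;   // prints "abc!"
}

| https://riptutorial.com/cryptography/example/24760/python-implementation | CC-MAIN-2022-05 | en | refinedweb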
"comprimir archivos con zip en linux" (compress files with zip on Linux) Code Answer
shell by Depressed Dog on Apr 14 2021
zip archivo.zip archivo-comprimir
Source: vortexbird.com
Add a Grepper Answer
Shell/Bash answers related to “comprimir archivos con zip en linux”
how to extract a zip file in linux terminal
zip files in folder linux
linux zip a directory
linux install zip command
linux borrar configuracion residual
zip some files linux
comprimir directorio linux
como buscar archivos ejecutables linux comandos
compress a folder linux zip
usar comandos de linux en mi cmmd
ver particiones montadas linux
comment copier un fichier linux
how to use compress zip cli linux
linux compress folder
linux zip a folder without compression
come resettare le impostazioni di connessione linux
zip folder linux
comprimir carpeta linux comando
centos zip folder
zip older linux
Shell/Bash queries related to “comprimir archivos con zip en linux”
comprimir archivos zip
como comprimir un archivo zip
comprimir archivo zip
como comprimir un archivo en varios zip
comprimir archivos en carpeta zip
encriptar archivo zip linux zip -e
como comprimir un archivo en zip en linux
descomprimir archivos zip linux
comprimir varios archivos en zip por consola linux
como converti un archivo en zip en linux
More “Kinda” Related Shell/Bash Answers
View All Shell/Bash Answers »
adding jars to classpath in linux
ubuntu change permissions on folder and subfolders
zip command colab
zip a folder in colab
ubuntu add permission to folder
full access in linux folder
change folder permisson in mac
bash loop over files in directory
how to change gopath
chmod 777 ubuntu xampp
linux zip a directory
get value of an alias in bash
linux expand alias
linux add to path
export path linux
linux give permission to folder
how to get permission in create folder and file on hard drive in ubuntu
linux give permission to directory
how to compress pdf in linux
bash copy without prompt
bash copy overwrite
give permission to file ubuntu
linux give full permission to directory
linux full permission to folder
ls show octal permissions
command for moving files in linux
make all files in directory executable
how to open directory from wsl
bash copy files but exclude some directories
no build file in linux headers
chmod files 644 directories 755
why i am not able to make a directory in htdocs folder in ubuntu
deny directory listing htaccess
zip full folder ubuntu
Create A Shared Folder On Linux With Samba
ubuntu zip file
how to move a file in terminal
make shell script executable
make file executable linux
find and replace in all files in directory centos
give 777 permission folder and subfolders in linux
how to copy the content of the file to clipboard in bash
how to assign an ad group to a folder mac os terminal
terminal zip
chmod read only command in linux
copy folder from local to ubuntu server
un innstall dot net ubuntu
bash flatten directory
chmod 777
ubuntu zip folder
zip bash
how to change permissions for the whole folder in ubuntu
shell ls a zip file
linux compress folder
how to manage icloud drive in terminal
nbash creacte new folder
bash copy all hidden files
linux convert files in folder
installation directory must be on local hard drive
linux require a password to open a certain file
edit hosts file linux
ubuntu open directory from terminal
get all directories name using cmd and save to text
how to give all permission to a directory in linux
linux how to open code detached
load new etc rules
pgdmp file
linux how to give permission to folder forever
pscp ubuntu copy folder recursively
linux dir one line
bash symlink everything in a directory
how to do sum with exper in linux
trouver la path d'un programme linux
linux compress pdf
bash perform operation on all files in directory
ubuntu create archive split
bash add or subtract one column from another
linux copy status
linux exit file path
copy terminal preferences from one computer to another ubuntu
loop through directories bash
make folder terminal mac
bash fully unsquash sqfs file
bash windows open folder in exporer
What is the plus (+) sign in permission in Linux ?
how to create a junction between folders
bash swap two columns in a file
7z e into folder linux
vestacp wordpress permissions
copy folder ubuntu
proc folder
osx copy output to clipboard terminal
how to overwrite symlink linux
how to add a directory in path linux
linux symlink file
grant linux sh script permissions
how to make a file writable in ubuntu 20.04
convert all files and folders in current directory into zip in linux
zip all files command linux
zip all files recursively linux
zip files in folder linux
linux document root
batch loop through folders in a directory
hapus folder di linux
ubuntu Permissions for directory: /etc/sudoers.d
windows cd to another drive
how to move to f drive in cmd
bash how to download password protected files
rm multiple folders
shared folder virtualbox ubuntu
bash move a list of files
delete command for multiple folder in linux
bash back to last folder
bash last folder command
empty a file linux
add a home directory for existing user
ubuntu open file from terminal
create zip file ubuntu
move file from one directory to another sftp
how to use sed output to overwrite existin file
debian give write permission
all folder permissions terminal
how to create a swap file
unir arquivos linux
join files linux
epub linux reader
resart network
linux restart all network interfaces
restart network service ubuntu command line
reload hosts file linux
xcopy folder to another folder
Unable to create directory wp-content/uploads/. Is its parent directory writable by the server?
linux link file
zip not found
bash zip command not found
centos copy files ssh
change directory to directory name contains space in bash terminal
How to cd to a directory with a name containing spaces in bash?
change directory to directory name with space in bash terminal
ubuntu shell touch multiple files
change directory contains space in bash terminal
change directory to directory name contains space
edit file terminal
open pdf command line linux
cmd rename multiple files
how to change permission of a folder linux
ssh copy from remote to local
Copy single file from remote to local using scp
add pwd to path linux
zip entire directory ubuntu
How to create a file with a given size in Linux?
create text file with 64 bytes in ubuntu
how to open file explorer with sudo ubuntu
scp folder recursive
how to rename files with mv in linux
copy whole directory command line
colab change directory
how to make all directory 775
edit path linux
bash rename foldr
remove root permission from folder
move directories recursively linux
assign home directory to user linux
bash concatenate files into one
linux view directory premmisiosns
linux create folder with date
linux move folder and subfolders to parent
change owner of all the files from a directory linux
linux move all files up a directory
copy all files from a folder to another ubuntu
where to mount drive in ubutnu
How to unzip a file using the cmd?
how to open diskmgmt
copy directory command in linux
linux copy folder recursively
cp directory command in linux with examples
tar to directory
create and copy folder in ubuntu
cp folders
linux move everything in a directory to another directory
move contents of a folder to another folder mac
linux move all files to another folder
how to mount windows folder on ubuntu
bash copy contents of file to clipboard
how to empty a file in linux
ubuntu desktop file folder
how to add a directory to path in linux
navigate to a directory linux
echo to file permission denied
how to create symlink in linux
create a zip file in linux
ubuntu change directory owner
ubuntu change file owner
zip destination folder
combine txt files linux
linux umount command
compress a folder linux zip
chmod: changing permissions of : Read-only file system and writeable
find change permissions to subdirectories
bash how to copy or move all files in a list
open path using terminal ubuntu
ubuntu check chmod
ubuntu check permissions of file
create folder ubuntu and file
linux redirect everything (stdout and stderr) to file
linux external hard drive chmod
bash cd or make dir if not exists
ansible copy
create batch file to delete folders
mac copy file to clipboard terminal
copy folder linux command line
ubuntu move folder to another directory
linux commad to show directories
how to set execute permission in linux
zip folder linux
command for copying files in linux
open a pdf on linux
cmd copy all files to another folder
open pdf in unix
copy content file from terminal
scp copy file from server
lock symbol on files in ubuntu
how to give permission to a user in linux on a folder
how to un zip a file in linux command line
give executable permission to a file
create folder shortcut on desktop ubuntu
compress directory with tar and bzip2
compress directory with bzip2
ln -sf linux
ln a folder
chmod 777 command in linux
how to create tar file to another directory
how to give permission recursively in linux
unzip zip linux on specific folder
vim rename file
mount in linux
linux rename
how to rename a file in linux
ubuntu 20.10 how to open zip file
how to zip a folder in ubuntu 20.04
linux create directory with permissions
linux copy output to clipboard
add group
add group linux
create group in linux command example
linux command to go to the previous directory
assigning permissions to folder and files in linux
copy folder from s3 to local
zip some files linux
centos zip folder
chmod a+x
linux bash temporary file
how to copy directory to another directory in linux
how to go back to the last directory in linux
how to overwrite file linux cli
linux command to open a file
create pdf from images linux
change user of a directory in linux
zip multiple folder in linux
how to copy a file in linux
bash delete swap file
expand aliases
ubuntu move all files in directory
bash new folder
rename all files in a folder with progressive numbers linux
command to hide folder in cmd
Compress a folder in Ubuntu
terminal copy to clipboard linux
how to copy file in ubuntu terminal
linux make executable
copy and paste file in linux shell
make directory mac
changing folder permission in linux
wget into a folder
ubuntu make executable
why i am not able to paste anything in htdocs folder in ubuntu
how to use compress zip cli linux
mkdir recursive
linux zip all folders except one
how to zip a folder but ignore some files
change folder owner recursively linux
linux chmod permissions
linux edit file
mkdir multiple directories windows
how to make directory in cmd
how to open file in linux
copy files between servers
linux cp
reload .bashrc
move hidden files linux
create multiple copies in linux of file
rename files linux
append to a file from terminal
compress directory linux
how to set gopath/bin linux
append two image terminal
move all files from one directory to another
change owner of file in linux
ubuntu rename all files lowercase commands
ubuntu undelete a whole directory
one drive linux
chgrp command
linux shard a file into smaller files
make tarball backup of directory
combine two documents together linux
combine two file linux
linux create link to folder
rename file linux
how to navigate to a folder in cmd windows 10
open file explorer from cmd
how to make directory in ubuntu
open current directory
open current folder in terminal linux
how to open directory in linux using command
open directory windows command
how to open a folder using terminal
open directory
edit files from terminal linux
open folder
how to open a .conf file in terminal
open folder from terminal
open folder from terminal ubuntu
open folder from terminal windows
what does the export command do in linux
how to copy file in root directory
mac zip a folder without compression
move all files in subdirectories to current directory linux
linux zip a folder without compression
linux move files one directory up
Linux How to zip two files
how to append program output to file linux
target of symbolic link
how to append two file sin bash
open a file in linux
linux rename multiple files
chmod
bash copy file to directory
chmod just directories
set executable permissions linux
how to give full permission to another user linux
how to move file using move command cmd
move files command line windows
move a file to /opt
add directory to path on linux
shell redirect all to /dev/null
chmod add execute permission to useer
how to image an entire disk on linux
mkdir with permissions
linux cp from one directory to another
linux add homedir
unzip in folder
unzip destination folder
zip folder ssh
rename multiple files in linux
how to open files using terminal in ubuntu
how do I become the owner of a directory in linux?
make directory in linux
how to move to directories in command prompt
create empty file command prompt cmd
mv folder linux
wget to particular directory
cp all except one folder
move all file in linux commands
to move a directory with its contents in terminal in linux
rename all files in a folder command line
copy file to ubuntu server
set permissions linux for drive chmod group
linux cp to here
linux copy files to current directory
copy terminal settings ubuntu
linux sync files between folders
bash if user exists in a group then add
Copy folder while ignoring node_modules folder
change folder permissions to public linux
duplicate file linux
Copy a Remote File to a Local System using the scp Command
rename multiple files mac terminal
compress folder linux command
rename multiple files in terminal
Create and edit a new file nano
rename all files starting with in linux
Move folder content up a level using bash/shell
dir to file txt
bash provide path to same dir as executable
recursive dir batch
how to zip a folder in putty
copy first n files linux
linux extend path
avd manger permission need root
ls permission
move linux
how to copy a file in ubuntu
ubuntu open file system from terminal
cli zip
linux zip file without parent directory
bash zip file without parent directory
copy files from certain date linux
linux create directory
bind mount on linux
how to copy a Directory and its content in ubuntu
upload file via terminal
nano edit a file
create file in linux using cat
assign permission to files and folder ubuntu separate
bash make folders according to a list
how assign permission to a folder and all contents in ubuntu|linux
copy files from windows to wsl2 ubuntu
ubuntu navigate to directory in windows
wsl cd into another drive
edit file as root ubuntu
rename set of files terminal linux
linux zip folder without parent folder
mkdir command
add folders to gitignore
ignore folder and it's contents
how to create a folder in linux
append a string in all files name linux
create a directory and change to it command line
command to create jpeg in linux
change root directory command prompt
how to create a shortcut to a folder on linux
zip: command not found bash
bash create nested directories
folder color ubuntu
linux redirect output to file
linux copy files to share space to copy to other users home
upload transfer.sh
linux backup command line
linux cp everything except
linux alternatives to tree
Move all files from a folder in linux
move all subfolders to parent folder linux
cp directory linux
cp with folder structure
how to copy folder in linux
linux make home dir
Copy File from One Directory to Another in Linux
share folder on network linux
scp copy a remote folder
scp folder copy
permission in linux
how to create a username along with home directory in linux
copy everything in a directory linux
rename folder in terminal
bash make recursive directory
mv linux command
unix symbolic link
symbolic link
command line move file
redirect stderr to file linux
linux how to execute a file
chmod command
scp copy directories
tr command
terminal rename
shell Creating folders and files
change permissions for specific file types linux
unix rename file
Adding directory to PATH
create file linux
with which command make file and directory in linux
how to edit a file in ubuntu using terminal?
give full permission to folder and subfolders in linux
bash change and make directory
shell linux refresh log output
make multiple directories with a single command on windows
ffmpeg stick two files
move a file from one directory to another in linux
set folder permissions linux
gsed comand store file
linux backup file with date
preserve time and date when copying files ubuntu
copying directories in linux
ubuntu command line change line in file
linux unrar multiple files
add fold to path in linux
mount one directory to another in mac
where does the export path file in linux
navigating through directories using path
shell redirect otpt to multiple files
Linux OS command line to create a new file
passing bash variable to sed
batch rename folders & trim spaces & add prefix / suffix
debian terminal paste
powershell script to copy mutliple files into a single file
open folder with cli
linux view file refresh
untar in specific folder
save output of sed to same file
tar append
open first image in directory linux
switch content two files linux
recursively change file permissions linux
copy files with same last modified date in linux
how to switch to my flash drive directory in cmd
ubuntu command change line in file
command prompt change directory to network drive
how to save file in linux
Correct Folder Permissions Ubuntu 18.04 Server
backup software move file from certain time
two sed command together
make a new folder in ps1 file
dirname=/usr/bin in linux
how to make my htdocs folder writable on ubuntu
sftp list remote directory
how to create tmp directories in hpc
sed multiple files
how to reduce directory name in terminal linux
SCP copy a directory from a local to remote system
ubuntu 20.04 command line maky copy of folder
move files one level up linux
copy file to other location linux terminal
compress files with zip in linux
rename a file in terminal linux
move the file from one linux user to another
linux copy move
how to append string to file names in linux
sftp list local directory
Edit remote files with Vim on Linux
make directory to be owned by group ubuntu
how to copy zip file from remote to local
reduce directory display linux
renaming a file in linux
bash command to move all files to a dir
terminal make directory and enter in the same time
how to make xcopy create destination dirs if they don't exist
clear the log folder in var linux to a max file of 500M
copy file from windows to linux permission denied wsl
sync folder from local to server
cp directory with exclusion
concatenate multiple zip files linux
sftp local directory
copy specific files only from directories
linux copy files terminal cp -r
Edit remote files within Vim session
ubuntu terminal how to copy and move file
the folder cannot be copied because you do not have permissions to create it in the destination
how to set chmod 777 to folder
copy a file from home directory to other directory in linux
execute command on every recursive directory
linux change filename batch
DRIVE LINUX
break a symbolic link in linux
sync folder from local to server with progress
Rename all items in a directory to lower case
linux mv multiple rename
linux traverse all subdirectories and do action
Flatpak in linux
how to add file to application linux
linux move to trash command line
Edit remote files in new tab of Vim session
adding file system to a volume
whm format new drive and mount for backup
linux up command multiple level of directory
linux file full permission
change directory on WSL
remote download wordpress command .zip
copy file linux
linux gzip multiple files
bash find and replace all files with specifc name with another file
bash multipart tar
copy venv to another folder linux
move multiple files cmd
how to change folder permissions in kali linux
linux change all folders to 755 and files to 644
copying a file from a server to a local folder
debian bin folder symlink to usr/bin
cpanel format new drive and mount for backup
how to mount a flash drive in wsl
To copy a directory from a local to remote system, use the -r option
how to acess folder with space in name in terminal
move multiple directories linux
curl copy from sftp
join 2 files linux
display folder of path linux bashrc
extrapolate part of video linux
join two files horizontally unix
linux copy directory
zip older linux
disk usage
debian copy directory
how to rename file sequential in ubuntu
centos format new drive and mount for backup
bash cd root permission denied
bash mkdir multiple
linux change date and then change files ctime
append data to a file with cat command
mount a filesystem under another filesystem linux
attached the name of the folder to the file in linux
how to retain ownership permissions when copying file linux
chmod ax
patch a file in vendor
compress folder pigz
show directories before deleting them find unix
shell script backup distant
untar multiple archives into thier own folders linux command
ubuntu format new drive and mount for backup
add dir to your path kali
chown a file
change directory, files and sub-directories owner in linux
rearrange pdf pages linux
copy content from one files to another in linux shell script
bat cd to directory
example of renaming multiple files on linux
how to move one folder back on command promps
exclude folder with gunzip linux
ubuntu absolute path of file
add files to hadoop
cmd concatenate files
paste command in linux
linux mv all folder to previous folder
overwrite a file name character in linux
how to unrar multiple files at once linux
change directory from c to d
bash create file with content
create folder putty linux
change directory name lunix
how to put two conditions in sed linux
cd n directories back
Change user/group for directory and all contents
create a file a1.txt tyep some content of it in linux
how to move many folders linux
shell script monitor log file
how to cd into a directory with a space linux
bash print file permissions
linux set permissions for all files matching pattern
linux move everything except
how to transfer a folder from ubuntu to ubuntu
bash mkdir
copy file from remote node to local
append filename at the beggining linux
setting the CLASSPATH to temp libs in linux
incremental backup 7zip
FS OFS unix
how to move folders in linux terminal
linux subsystem mount file into windows
groupadd to folder linux fedora
linux rename folder add suffix
change directory in linux
permission denied directory linux
add one drive to ubuntu
linux empty file
bash how to create directories in all subdirectories
linux compress a pdf
move one foile/folder to another ubuntu
empty file linux
linux change directoryyy
change file name in terminal
drupal file permissions
read -p linux
cp -r copy linux directory or file
upload folder to glitch
how to sync my directory with my deleted file change
verify large directory after copy files
how to copy everything in a file with sudo nano
boost filesystem create directory
gnome disks mount read write
diff files in different repositories
create enumerated folders terminal
powershell copy all images in a directory
linux hide mounted drives from favourites
hide mount drives
delete local branch
delete branch from remote
delete remote branch
how to delete a remote branch in git
git create new branch
create remore git branch
install node js ubuntu
install react router
react router dom
react js router
react router
react router install
react-router-dom
installing react router dom with yarn
router dom react
Configure React Router
error: src refspec master does not match any. git
pip upgrade
how to upgrade pip
what is --use-feature=2020-resolver
unable to create process using ' ' virtualenv
heroku cli
how to check sles version install opencv
python opencv
poython opencv pip
git command to create a branch
how to see the remote url in git
get git remote url
install docker compose
git push origin master --force
java check jre version
check jdk version
java check java git
git allow unrelated histories
refusing to merge unrelated histories debian
add user to sudoers install bootstrap
install bootstrap 4 npm
npm bootstrap
delete git repository command line
how to remove git initialization
rm git init
delete .git folder
docker delete all images
remove all docker images command
curl get example
Veu command install
npm install cli vue
vue cli
install gz file
how to install tar.gz in ubuntu
ip address ubuntu
bash: yarn: command not found
macos install yarn
stop all container in docker
docker delete all containers
stop all docker ps-a
docker stop all
docker stop all containers
list users in linux
ubuntu list users
postman install ubuntu 18.04
snap install
decompress tar.gz
tar.gz
unzip tar.gz
upgrade ubuntu
ubuntu update
update ubuntu
how to run a update comand in linux
git updates were rejected because the tip of your current branch is behind
remove history from git branch
ubuntu remove directory
how to start nginx in linux
How To Restart Nginx
restart nginx
restart nginx
nginx restart ubuntu
upgrade dart in flutter
Update flutter command.
update flutter
flutter upgrade
flutter web run using vscode
command to install firebase in raspberry
install firebase tools
firebase-tools npm
npm firebase -g
firebase cli windows
Firebase tools
How do I export data from firebase authentication?
angular install firebase tools
install firebase npm globally
stop docker container
docker compose run
zoom download ubuntu
zoom repository ubuntu
git view stash
git stash contnet
git stash
bootstrap npm
install bootstrap chrome apt-get
mysqldump
install chrome on linux
install google chrome linux
install google chrome ubuntu
push code to github command line
python install mysql connector
git same as origin master
git pull hard
git pull from remote branch overwrite local
git replace local branch with remote
git reset to origin/master
beautifulsoup4 install
pip install Beautiful Soup
install beautifulsoup windows
pypi beautifulsoup
beautifulsoup Python
how to initialize a git repository command line
conda install pytorch
pytorch
pip install pytorch windows
pytorch install
python pytorch
installing pytorch
install pytorch gpu on windows
install pytorch
pm2 install ubuntu
install pm2
delete already stashed files git
git stash clean command
git stash drop all linux
screen recorder ubuntu
best screen capture software for linux
ubuntu generate ssh key
create public key linux
linux create public key
create a ssh key
Get the size of all the directories in current directory in linux
linux get free disk space Code Example
folder sixe in linux
linux check used space in folder
linux how to show disk space
check folder sizes linux
check disk usage linux Code Example
git undo last commit
git reset soft head
git add git commit
create react app
ascii to binary perl
zip update not removing files
ping a port linux
yarn add install all packages in package,json
gentoo enable all fonts
zsh: command not found: gatsby
gnu octave
unzip .tar.xz
ng command new app
tar exclude directory
how to find host name in linux
ubuntu list file by size
docker image with wget
git reset commiter credentials
install lamp ubuntu
mdi 5.6.55
how to install asyncstorage in react native
update npm with nvm
print value mongodb shell
couldn't be accessed by user '_apt'. - pkgAcquire::Run
postgresql insert bcrypt
npm install hangs on lodash
Linux remove non empty directory
curl send to ip
aws cli has no installation package in ubuntu server 20.04 how to solve
install yarn windows
material ui
how to install steam on ubuntu
ssh cp directory remote to local
convert all line endings to unix
Not Found The requested URL was not found on this server. Apache/2.4.46 (Win64) OpenSSL/1.1.1j PHP/7.3.27 Server at localhost Port 8
git version
mongoclient install ubuntu
gem uninstall version specific
replace text with sed
migration roleback
surge installation
specify ssh key to use
How install CoolTerm on Linux?
bash grep only return first match
best ide editor
fedora how to uninstall snapd
git squash last 2 commits
install vagrant in windows
files changed in a commit
Azure PowerShell module
git log with branch tree
Compile electron app
ubuntu 20.04 ntfs read only
ERROR: Could not install packages due to an EnvironmentError: HTTPSConnectionPool(host='files.pythonhosted.org'
ionic download
How To Configure WiFi on Raspberry Pi - NAYCode.com
for shell
check public ip address in terminal
git default remote
video editor for ubuntu 21.10
open github code editor online
use local image with minikube
add line at beginning of file unix
sudo apt-get -y install unity-greeter
Cannot find module 'qs'
bash view the contents of a sqfs file
dpkg: error processing package gitweb (--configure): installed gitweb package post-instal
upgrade composer
get first line of output bash
update gitcmd
selinux apache 403
set music cover image linux
haskell change version
how to uninstall yarn npm
grep --color 'string' filename bash find words
composer
npm install react
switch a branch command line
does undo commit delete the code or move it to uncommitted
Default side-by-side NDK installation is not found.
git branch origin list
get ssh key mac
mp4 to mp3 converter bat ffmpeg
command running processes linux
install node_modules folder
codeigniter 4 db seed
redis get all data
add sudo user centos server group
ubuntu intall OpenBLAS
install nmap ubuntu
choco install python
bash get package dependencies
how to delete a non empty directory in linux
kali wordlist location
fork from github to gitlab
install raspap
remove local commiits
check alias content linux
how to take array input in shell script
'typ "{}"not recognised (need to install plug-in?)'.format(self.typ) NotImplementedError: typ "['safe', 'rt']"not recognised (need to install plug-in?)
error eacces permission denied mkdir xampp ubuntu
rubocop ruby says autocorrectable
speedtest cli
mongodb install kali linux
Install Apache FreeBSD
switching git branch in gitbash
fatal: unable to auto-detect email address (got 'root@LaptopName.(none)')
how to know version of tensorflow in linux command line
how to create bootable usb on manjaro
docker copy from container to host
vite js install
check default gateway ubuntu
vim delete to end of file
ng generate class
streamlink save to file
how to upload on github with command
butler push userversion
githum readme bold
linux temperatur monitor
crate db helm
install vlc on ubuntu
Error: Problem validating fields in app.json. See • should NOT have additional property 'nodeModulesPath'.
How to install npm in centos
how to create a script raspberry pi
bash command to open new terminal
run flutter, linux
flow for vim
how to I list powershell functions
hello world shell script
du sort by size
what shell type
ubuntu assume root
react native elements
install dlib gpu check
change php version ubuntu
sed replace with newline
install opencl headers ubuntu
node sass does not yet support your current environment windows 64-bit angular
linux terminal on windows platzi
install composer using brew
git push error
npm install moment
tkinter download windows 10 64-bit
pm2 node start
Install SWAY on debin ubuntu
how to uninstall create-react-app
gatsbyjs image sharp
how to clear terminal in linux
install rdp ubuntu
linux document root
ionic capacitor ios
download filezilla for ubuntu
bash trim binary output
zsh silent backgrousd task output
discord.py install
sklearn
windows how to access wsl from explorer
ansible ad hoc file module
git branch set remote tracking branch
how to make history | grep in windows
wpa passphrase
buster InRelease' changed its 'Suite' value from 'stable' to 'oldstable'
how to remove conky
access windows files from windows ubuntu
powershell open cmd
rostopic echo filter
clear npm cache
install jenkins ubuntu
npm http server
concat strings inside array bash script
install nodejs from binary
delete empty files bash
bash search file in directory
install make anaconda
tail log of spring boot service (linux)
hadoop delete directory without url
linux add user with home directory
count new lines bash
Misp Setup
linux kill all python processes
git create master branch in empty repository
could not find tools.jar linux
ubuntu fractional scaling
git specify ssh key for repo
how to change date of file in linux
git set commit date
install vm guest additions ubuntu
pip install package
install all dependencies npm
failed to install file: not supported ubuntu
manjaro how to erase a usb
pip uninstall all
import database mysql
how to check what module pip has already install
kubectl get pods
install mocha
gh --version Command 'gh' not found,
budo is not recognized as an internal or external command
. | https://www.codegrepper.com/code-examples/shell/comprimir+archivos+con+zip+en+linux | CC-MAIN-2022-05 | en | refinedweb |
#include <vtkLagrangeHexahedron.h>
A 3D cell that represents an arbitrary order Lagrange hex.
vtkLagrangeHexahedron is a concrete implementation of vtkCell to represent a 3D hexahedron using Lagrange shape functions of user specified order.
Definition at line 47 of file vtkLagrangeHexahedron.h.
Definition at line 51 of file vtkLagrangeHexahedron.h.
Reimplemented from vtkHigherOrderHexahedron.
Methods invoked by print to print information about the object including superclasses.
Typically not called by the user (use Print() instead) but used in the hierarchical print process to combine the output of several classes.
Reimplemented from vtkHigherOrderHexahedron.
Return the type of cell.
Implements vtkHigherOrderHexahedron.
Definition at line 54 of file vtkLagrangeHexahedron.h.
Return the edge cell from the edgeId of the cell.
Implements vtkHigherOrderHexahedron.
Return the face cell from the faceId of the cell.
The returned vtkCell is an object owned by this instance, hence the return value must not be deleted by the caller.
Implements vtkHigherOrderHexahedron.
Definition at line 69 of file vtkLagrangeHexahedron.h.
Definition at line 70 of file vtkLagrangeHexahedron.h.
Definition at line 71 of file vtkLagrangeHexahedron.h. | https://vtk.org/doc/nightly/html/classvtkLagrangeHexahedron.html | CC-MAIN-2022-05 | en | refinedweb |
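A minimal usage sketch (illustrative only; in real pipelines these cells are usually produced by a vtkUnstructuredGrid rather than constructed by hand):

#include <vtkCell.h>
#include <vtkLagrangeHexahedron.h>
#include <vtkNew.h>

int main()
{
  vtkNew<vtkLagrangeHexahedron> hex;

  // Reports VTK_LAGRANGE_HEXAHEDRON.
  int cellType = hex->GetCellType();

  // Edges and faces are lower-dimensional Lagrange cells owned by this
  // instance; the returned pointers must not be deleted by the caller.
  vtkCell* edge = hex->GetEdge(0);
  vtkCell* face = hex->GetFace(0);

  (void)cellType;
  (void)edge;
  (void)face;
  return 0;
}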
There are a few C++ published libraries which implement some of the HTTP protocol. We analyze the message model chosen by those libraries and discuss the advantages and disadvantages relative to Beast.
The general strategy used by the author to evaluate external libraries is as follows:
cpp-netlib is a network programming library previously intended for Boost but not having gone through formal review. As of this writing it still uses the Boost name, namespace, and directory structure although the project states that Boost acceptance is no longer a goal. The library is based on Boost.Asio and bills itself as "a collection of network related routines/implementations geared towards providing a robust cross-platform networking library". It cites "Common Message Type" as a feature. As of the branch previous linked, it uses these declarations:
template <class Tag>
struct basic_message {
 public:
  typedef Tag tag;

  typedef typename headers_container<Tag>::type headers_container_type;
  typedef typename headers_container_type::value_type header_type;
  typedef typename string<Tag>::type string_type;

  headers_container_type& headers() { return headers_; }
  headers_container_type const& headers() const { return headers_; }

  string_type& body() { return body_; }
  string_type const& body() const { return body_; }

  string_type& source() { return source_; }
  string_type const& source() const { return source_; }

  string_type& destination() { return destination_; }
  string_type const& destination() const { return destination_; }

 private:
  friend struct detail::directive_base<Tag>;
  friend struct detail::wrapper_base<Tag, basic_message<Tag> >;

  mutable headers_container_type headers_;
  mutable string_type body_;
  mutable string_type source_;
  mutable string_type destination_;
};
This container is the base class template used to represent HTTP messages.
It uses a "tag" type style specializations for a variety of trait
classes, allowing for customization of the various parts of the message.
For example, a user specializes
headers_container<T>
to determine what container type holds the header fields. We note some problems
with the container declaration:
There is no way to defer the choice of the type used for body_ to after the headers are read in.
The use of string_type (a customization point) for source, destination, and body suggests that string_type models a ForwardRange whose value_type is char. This representation is less than ideal, considering that the library is built on Boost.Asio. Adapting a DynamicBuffer to the required forward range destroys information conveyed by the ConstBufferSequence and MutableBufferSequence used in dynamic buffers. The consequence is that cpp-netlib implementations will be less efficient than an equivalent Networking TS conforming implementation.
Users may specialize string<Tag> to change the type of string used everywhere, including the body, field name and value pairs, and extraneous metadata such as source and destination. The user may only choose a single type: field name, field values, and the body container will all use the same string type. This limits the utility of the customization point. The library's use of the string trait is limited to selecting between std::string and std::wstring. We do not find this use-case compelling given the limitations.
All of these customization points live in the boost::network::http namespace. The way the traits are used in the library limits their usefulness to trivial purposes.
The design of the message container in this library is cumbersome with its system of customization using trait specializations. The use of these customizations is extremely limited due to the way they are used in the container declaration, making the design overly complex without corresponding benefit.
boost.http is a library resulting from the 2014 Google Summer of Code. It was submitted for a Boost formal review and rejected in 2015. It is based on Boost.Asio, and development on the library has continued to the present. As of the branch previously linked, it uses these message declarations:
template<class Headers, class Body>
struct basic_message
{
    typedef Headers headers_type;
    typedef Body body_type;

    headers_type &headers();
    const headers_type &headers() const;

    body_type &body();
    const body_type &body() const;

    headers_type &trailers();
    const headers_type &trailers() const;

private:
    headers_type headers_;
    body_type body_;
    headers_type trailers_;
};

typedef basic_message<boost::http::headers, std::vector<std::uint8_t>> message;

template<class Headers, class Body>
struct is_message<basic_message<Headers, Body>>: public std::true_type {};
This container cannot model a complete message. The start-line
items (method and target for requests, reason-phrase for responses) are
communicated out of band, as is the http-version.
A function that operates on the message including the start line requires
additional parameters. This is evident in one of the example
programs. The
500
and
"OK" arguments
represent the response status-code and reason-phrase
respectively:
...
http::message reply;
...
self->socket.async_write_response(500, string_ref("OK"), reply, yield);
headers_,
body_, and
trailers_ may only be default-constructed, since there are no explicitly declared constructors.
The message typedef implies that std::vector is a model of Body; more formally, that a body is represented by the ForwardRange concept whose value_type is an 8-bit integer. This representation is less than ideal, considering that the library is built on Boost.Asio. Adapting a DynamicBuffer to the required forward range destroys information conveyed by the ConstBufferSequence and MutableBufferSequence used in dynamic buffers. The consequence is that Boost.HTTP implementations will be less efficient when dealing with body containers than an equivalent Networking TS conforming implementation.
This representation addresses a narrow range of use cases. It has limited potential for customization and performance. It is more difficult to use because it excludes the start line fields from the model.
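For contrast, Beast's own message container carries the start line inside the message, so a complete response can be built and sent without out-of-band arguments. A small illustrative sketch using the beast::http types:

#include <boost/beast/http.hpp>

namespace http = boost::beast::http;

http::response<http::string_body> make_reply()
{
    // Status code and HTTP version are part of the message itself.
    http::response<http::string_body> res{http::status::ok, 11};
    res.set(http::field::server, "Example");
    res.body() = "Hello, world!";
    res.prepare_payload(); // fills in Content-Length
    return res;
}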
cpprestsdk is a Microsoft project which "...aims to help C++ developers connect to and interact with services". It offers the most functionality of the libraries reviewed here, including support for Websocket services using its websocket++ dependency. It can use native APIs such as HTTP.SYS when building Windows based applications, and it can use Boost.Asio. The WebSocket module uses Boost.Asio exclusively.
As cpprestsdk is developed by a large corporation, it contains quite a bit of functionality and necessarily has more interfaces. We will break down the interfaces used to model messages into more manageable pieces. This is the container used to store the HTTP header fields:
class http_headers
{
public:
    ...
private:
    std::map<utility::string_t, utility::string_t, _case_insensitive_cmp> m_headers;
};
This declaration is quite bare-bones. We note the typical problems of most field containers: there is no way to customize the allocator, and field names and values are each stored as a std::string, mandating memory allocation and buffer copies.
Now we analyze the structure of the larger message container. The library
uses a handle/body idiom. There are two public message container interfaces,
one for requests (
http_request)
and one for responses (
http_response).
Each interface maintains a private shared pointer to an implementation class.
Public member function calls are routed to the internal implementation. This
is the first implementation class, which forms the base class for both the
request and response implementations:
namespace details {

class http_msg_base
{
public:
    http_headers &headers() { return m_headers; }

    _ASYNCRTIMP void set_body(const concurrency::streams::istream &instream, const utf8string &contentType);

    /// Set the stream through which the message body could be read
    void set_instream(const concurrency::streams::istream &instream) { m_inStream = instream; }

    /// Set the stream through which the message body could be written
    void set_outstream(const concurrency::streams::ostream &outstream, bool is_default)
    {
        m_outStream = outstream;
        m_default_outstream = is_default;
    }

    const pplx::task_completion_event<utility::size64_t> & _get_data_available() const { return m_data_available; }

protected:
    /// Stream to read the message body.
    concurrency::streams::istream m_inStream;

    /// stream to write the msg body
    concurrency::streams::ostream m_outStream;

    http_headers m_headers;
    bool m_default_outstream;

    /// <summary> The TCE is used to signal the availability of the message body. </summary>
    pplx::task_completion_event<utility::size64_t> m_data_available;
};
To understand these declarations we need to first understand that cpprestsdk
uses the asynchronous model defined by Microsoft's Concurrency Runtime. Identifiers from the
pplx namespace
define common asynchronous patterns such as tasks and events. The
concurrency::streams::istream parameter and
m_data_available
data member indicate a lack of separation of concerns. The representation of HTTP messages should not be conflated with the asynchronous model used to serialize or parse those messages in the message declarations.
The next declaration forms the complete implementation class referenced by the handle in the public interface (which follows after):
/// Internal representation of an HTTP request message.
class _http_request final : public http::details::http_msg_base, public std::enable_shared_from_this<_http_request>
{
public:
    _ASYNCRTIMP _http_request(http::method mtd);

    _ASYNCRTIMP _http_request(std::unique_ptr<http::details::_http_server_context> server_context);

    http::method &method() { return m_method; }

    const pplx::cancellation_token &cancellation_token() const { return m_cancellationToken; }

    _ASYNCRTIMP pplx::task<void> reply(const http_response &response);

private:
    // Actual initiates sending the response, without checking if a response has already been sent.
    pplx::task<void> _reply_impl(http_response response);

    http::method m_method;

    std::shared_ptr<progress_handler> m_progress_handler;
};

} // namespace details
As before, we note that the implementation class for HTTP requests concerns itself more with the mechanics of sending the message asynchronously than it does with actually modeling the HTTP message as described in rfc7230:
The constructor taking a std::unique_ptr<http::details::_http_server_context> breaks encapsulation and separation of concerns. This cannot be extended for user defined server contexts.
The private _reply_impl function implies that the message implementation also shares responsibility for the means of sending back an HTTP reply. This would be better if it were completely separate from the message container.
Finally, here is the public class which represents an HTTP request:
class http_request
{
public:
    const http::method &method() const { return _m_impl->method(); }

    void set_method(const http::method &method) const { _m_impl->method() = method; }

    /// Extract the body of the request message as a string value, checking that the content type is a MIME text type.
    /// A body can only be extracted once because in some cases an optimization is made where the data is 'moved' out.
    pplx::task<utility::string_t> extract_string(bool ignore_content_type = false)
    {
        auto impl = _m_impl;
        return pplx::create_task(_m_impl->_get_data_available()).then([impl, ignore_content_type](utility::size64_t) {
            return impl->extract_string(ignore_content_type);
        });
    }

    /// Extracts the body of the request message into a json value, checking that the content type is application/json.
    /// A body can only be extracted once because in some cases an optimization is made where the data is 'moved' out.
    pplx::task<json::value> extract_json(bool ignore_content_type = false) const
    {
        auto impl = _m_impl;
        return pplx::create_task(_m_impl->_get_data_available()).then([impl, ignore_content_type](utility::size64_t) {
            return impl->_extract_json(ignore_content_type);
        });
    }

    /// Sets the body of the message to the contents of a byte vector. If the 'Content-Type'
    void set_body(const std::vector<unsigned char> &body_data);

    /// Defines a stream that will be relied on to provide the body of the HTTP message when it is sent.
    void set_body(const concurrency::streams::istream &stream, const utility::string_t &content_type = _XPLATSTR("application/octet-stream"));

    /// Defines a stream that will be relied on to hold the body of the HTTP response message that results from the request.
    void set_response_stream(const concurrency::streams::ostream &stream)
    {
        return _m_impl->set_response_stream(stream);
    }

    /// Defines a callback function that will be invoked for every chunk of data uploaded or downloaded as part of the request.
    void set_progress_handler(const progress_handler &handler);

private:
    friend class http::details::_http_request;
    friend class http::client::http_client;

    std::shared_ptr<http::details::_http_request> _m_impl;
};
It is clear from this declaration that the goal of the message model in this library is driven by its use-case (interacting with REST servers) and not to model HTTP messages generally. We note problems similar to the other declarations:
concurrency::streams::istream and concurrency::streams::ostream reference parameters appear throughout. Presumably, these are abstract interfaces which may be subclassed by users to achieve custom behaviors.
The body stream type is hard-wired to concurrency::streams::istream. No user defined types are possible.
The response is coupled to the request (note the set_response_stream member). Again this is likely purpose-driven but the lack of separation of concerns limits this library to only the uses explicitly envisioned by the authors.
The general theme of the HTTP message model in cpprestsdk is "no user definable customizations". There is no allocator support, and no separation of concerns. It is designed to perform a specific set of behaviors. In other words, it does not follow the open/closed principle.
Tasks in the Concurrency Runtime operate in a fashion similar to
std::future,
but with some improvements such as continuations which are not yet in the
C++ standard. The costs of using a task based asynchronous interface instead
of completion handlers is well documented: synchronization points along the
call chain of composed task operations which cannot be optimized away. See:
A Universal Model for Asynchronous Operations
(Kohlhoff). | https://www.boost.org/doc/libs/1_78_0/libs/beast/doc/html/beast/design_choices/http_comparison_to_other_librari.html | CC-MAIN-2022-05 | en | refinedweb |
#include <CGAL/Number_type_checker.h>
Number_type_checker is a number type whose instances store two numbers of types
NT1 and
NT2.
It forwards all arithmetic operations to them, and calls the binary predicate
Comparator to check the equality of the instances after each modification, as well as for each comparison.
This is a debugging tool which is useful when dealing with number types.
Is model of: IntegralDomainWithoutDivision (same as NT1)
Operations
Some operations have a particular behavior documented here. | https://doc.cgal.org/4.12.1/Number_types/classCGAL_1_1Number__type__checker.html | CC-MAIN-2022-05 | en | refinedweb |
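A hedged usage sketch; the comparator shown here is an assumption about the required predicate shape, so consult the class reference for the exact template parameters:

#include <CGAL/MP_Float.h>
#include <CGAL/Number_type_checker.h>
#include <CGAL/Quotient.h>

typedef CGAL::Quotient<CGAL::MP_Float> NT1;
typedef double NT2;

// Assumed: a binary predicate that decides whether the stored NT1 and
// NT2 values agree (up to whatever tolerance the user considers equal).
struct Agree {
  bool operator()(const NT1& a, const NT2& b) const {
    return CGAL::to_double(a) == b;
  }
};

typedef CGAL::Number_type_checker<NT1, NT2, Agree> NT;

int main() {
  NT a(1), b(2);
  NT c = a + b; // performed on both stored numbers, then cross-checked
  return (c == NT(3)) ? 0 : 1;
}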
Handle an _IO_LSEEK message
#include <sys/iofunc.h>

int iofunc_lseek( resmgr_context_t* ctp,
                  io_lseek_t* msg,
                  iofunc_ocb_t* ocb,
                  iofunc_attr_t* attr );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The o member is the offset after the operation is complete.
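For example, a resource manager's own io_lseek callback typically just delegates to this function. A minimal sketch (registration of the callback in the resmgr_io_funcs_t table is omitted):

#include <sys/iofunc.h>

int my_io_lseek( resmgr_context_t *ctp, io_lseek_t *msg, iofunc_ocb_t *ocb )
{
    /* Let the library validate the request, update the offset,
       and build the _IO_LSEEK reply. */
    return iofunc_lseek( ctp, msg, ocb, ocb->attr );
}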
QNX Neutrino
iofunc_attr_t, iofunc_lseek_default(), iofunc_ocb_t, lseek(), resmgr_context_t
Writing a Resource Manager
Resource Managers chapter of Getting Started with QNX Neutrino | http://www.qnx.com/developers/docs/6.4.1/neutrino/lib_ref/i/iofunc_lseek.html | CC-MAIN-2022-05 | en | refinedweb |
Public preview: Azure Service Bus support for large messages
Published date: July 14, 2021
Azure Service Bus premium tier namespaces now support sending and receiving message payloads up to 100 MB. Previously, if your message size was over the 1 MB limit, you needed to split up messages into smaller chunks or implement a claim check pattern to send larger messages. This enhancement eliminates the need to implement such workarounds for messages up to 100 MB.
This larger message size also enables legacy workloads using larger message payloads on other enterprise messaging brokers to seamlessly migrate to Azure Service Bus. | https://azure.microsoft.com/ko-kr/updates/public-preview-azure-service-bus-support-for-large-messages/ | CC-MAIN-2022-05 | en | refinedweb |
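For illustration, sending a larger payload with the Azure.Messaging.ServiceBus .NET SDK looks the same as sending a small one; the connection string, queue name, and file below are placeholders:

await using var client = new ServiceBusClient(connectionString);
ServiceBusSender sender = client.CreateSender("orders");

byte[] payload = await File.ReadAllBytesAsync("large-order.json"); // e.g. tens of MB
await sender.SendMessageAsync(new ServiceBusMessage(payload));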
It was 3AM on a Monday. The team had experienced a sleepless weekend, trying to dig into a production failure. In just a few hours, the work week would begin, and we all knew that the world needed this application to work. It would be a disaster if this issue were not fixed before then.
But why did it fail? It worked last week. Why did it stop working today? A recent code change? But that was absolutely unrelated. We tested it - with all the rigor. We had setup the best test automation, topped with some manual tests as well. Even further, our microservices are scanned and reviewed by effective tools. Why then did it fail?
We started cursing our fate and the tools we used. Management started yelling at us. You said K8s is meant for resilience and it enables applications that can never fail. But we have seen more trouble than solution. Never use Open Source! And all that blah..
But after hours of struggle, fate smiled - just in time - and we discovered the problem with the YML configuration. Some progressive minded developer had not specified the version of the embedded opensource DB used in there - he had used a "latest" - hoping to get the best features as they are released. A new version was available, and that broke some existing functionality.
Hush! Our job was saved. We managed to resolve the issue and then crashed into the bed - as the world started using our application that was functioning as before.
Does this sound familiar? I am sure it does. We have all seen disasters caused by misconfiguration in Kubernetes. The problem is in the power of the tool. With great power comes great responsibility.
There is so much that we configure in there, and there are very few developers who really understand these configurations well enough. Most of them build new configurations from other existing ones - a copy/paste, modifying the parts that make sense to them. Or the adventurous ones pick YML files straight from online tutorials - files that were good for proving a point, but not mature enough for production.
When we use infrastructure as code, we tend to forget that it is also code, and should go through all the validation that normal application code does. It is often tempting to use shortcuts that work today. They work and get deployed into production. And we assume that they will work forever. But unfortunately, that is not the case with Kubernetes configurations.
The world is still new to the Kubernetes patterns. Developers are still struggling their way through it - so are the admins. And too much configuration goes into the system, making it easy for such errors to creep in. Worse still, most of these errors show their impact after a few days, or even months. The system works perfectly until it just collapses without any warning. This makes the task even more difficult.
There is no end to our creativity, and our ability - to introduce newer and newer defects in the system. But there are some common issues that show up very often. Let us look at them one by one.
This is a common mistake new developers make - when they pick the configurations from tutorials and blogs. Most of them are meant to explain the concept and syntax. To keep that simple, most of them skip the other complexities like namespace.
But any Kubernetes deployment should have a meaningful namespace in the architecture. If we miss it out by error, it can cause name clashes in future.
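For example, a sketch of declaring the namespace explicitly (all names here are placeholders):

apiVersion: v1
kind: Namespace
metadata:
  name: payments
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payments-api
  namespace: payments   # explicit, instead of silently landing in "default"
# (deployment spec omitted for brevity)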
Kubernetes is going through an active development. Lot of developers across the globe are working hard on improving it and making it more and more resilient. An unfortunate consequence is that we have some API's getting deprecated in newer releases.
We need to identify and remove them before they begin failing in production.
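A classic example: Deployments under the old extensions/v1beta1 group stopped being served in Kubernetes 1.16, and the fix is a one-line change:

# Deprecated, removed in Kubernetes 1.16:
apiVersion: extensions/v1beta1
kind: Deployment

# Current:
apiVersion: apps/v1
kind: Deployment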
Kubernetes provides for "Deployments" - as a way to encapsulate pods and all that they need. But some lazy developers may want to skip this and deploy just the pod into Kubernetes.
Well, this may work today. But it is a bad practice, and will definitely lead to a disaster some day.
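A minimal Deployment wrapping the pod template looks like this (names and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.19.8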
When entities at different levels share the namespace, the lower entity can access neighbors of the higher entity - to probe the network.
For example, when a container is allowed to share its host's network namespace, it can access local network listeners and leverage them to probe the host's local network.
This is a security risk. Even if it is our own code, the minimal access principle recommends that this should not be allowed. When it comes to system security, we should never believe in anyone - not even ourselves.
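In the pod spec, these are the flags to watch for; all three default to false and should normally stay that way:

spec:
  hostNetwork: false   # don't share the host's network namespace
  hostPID: false       # don't share the host's process namespace
  hostIPC: false       # don't share the host's IPC namespace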
All communications between the containers should go through the layers of abstraction provided by Kubernetes. Communication between services has to go through the abstractions of ingress and service defined in the deployments.
Never run containers with root privilege. Never expose node ports from services, or access host files directly in code (using UID). Ingress should forward traffic to the service, not directly to individual pods.
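A container-level securityContext that enforces the non-root rule looks roughly like this:

spec:
  containers:
    - name: web
      image: nginx:1.19.8
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false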
Hardcoding IP addresses, or directly accessing the docker sockets, etc.. can lead to problems. Again, these won't break the system on the very first day. But it will show up sometime someday - when we just can't afford it.
This is a common disaster. Developers are often tempted to use "latest" - with the hope of improving continuously, getting the latest and best version available. Such a configuration can live in our system for many months. But, it can lead to a nasty surprise.
When an image tag is not descriptive (e.g. lacking the version tag like 1.19.8), every time that image is pulled, the version will be a different version and might break our code. Also, a non-descriptive image tag does not allow us to easily roll back (or forward) to different image versions. It is better to use concrete and meaningful tags such as version strings or an image SHA.
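The difference is a single line in the container spec:

# Risky: resolves to a different image over time, and cannot be rolled back precisely
image: nginx:latest

# Better: a pinned version tag (or an image digest for full immutability)
image: nginx:1.19.8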
This is another problem that shows up very often - when we are not very confident of a configuration, we tend to believe in the defaults. The memory/CPU allocation, min/max replica counts for auto scaling... these are some of the properties that are missed too often.
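A sketch of spelling these out instead of trusting the defaults; the numbers are placeholders to be tuned per workload:

containers:
  - name: web
    image: nginx:1.19.8
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10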
As we noted, there are too many configurations that go into the Kubernetes cluster. Helm tries to reduce this complexity, but makes things worse, when we try to configure Helm itself. It is impossible for a human to ensure the quality that we need in our production deployments.
We need some good tools that can automate this process, and help us achieve this.
After some discussion on tech platforms, and a lot of Google / StackOverflow / YouTube, we found a few interesting tools. After comparing them, we chose Datree.
It is simple to set up and use. It can be easily integrated with most CI/CD tools. It helps us identify such issues much before they can cause a problem. It has a lavish free tier and does not lock us in - all that we need in an ideal tool.
Let's check it out.
Installing the tool is quite easy and fast. Run the command below. We need sudo permissions.
curl | /bin/bash
A few seconds, and we are ready to go! Most of us are uncomfortable running such a command - we should be. We can just check out the actual code that is executed by the script. Just open the link in the browser.
"" | grep -o "browser_download_url.*\_${osName}_x86_64.zip") DOWNLOAD_URL=${DOWNLOAD_URL//\"} DOWNLOAD_URL=${DOWNLOAD_URL/browser_download_url: /} OUTPUT_BASENAME=datree-latest OUTPUT_BASENAME_WITH_POSTFIX=$OUTPUT_BASENAME.zip echo "Installing Datree..." echo curl -sL $DOWNLOAD_URL -o $OUTPUT_BASENAME_WITH_POSTFIX echo -e "\033[32m[V] Downloaded Datree" unzip -qq $OUTPUT_BASENAME_WITH_POSTFIX -d $OUTPUT_BASENAME mkdir -p ~/.datree rm -f /usr/local/bin/datree || sudo rm -f /usr/local/bin/datree cp $OUTPUT_BASENAME/datree /usr/local/bin || sudo cp $OUTPUT_BASENAME/datree /usr/local/bin rm $OUTPUT_BASENAME_WITH_POSTFIX rm -rf $OUTPUT_BASENAME curl -s > ~/.datree/k8s-demo.yaml echo -e "[V] Finished Installation" echo echo -e "\033[35m Usage: $ datree test ~/.datree/k8s-demo.yaml" echo -e " Using Helm? =>" tput init echoosName=$(uname -s) DOWNLOAD_URL=$(curl --silent
Essentially, it just downloads a zip file and expands it into the specified locations. It places the binary into the /usr/local/bin folder - for which it needs sudo. It does not mess with any system configuration, so it is very simple to remove.
The datree installation provides a sample yaml file - for a POC.
datree test ~/.datree/k8s-demo.yaml

>> File: ../../.datree/k8s-demo.yaml

❌ Ensure each container has a configured memory limit [1 occurrences]
💡 Missing property object `limits.memory` - value should be within the accepted boundaries recommended by the organization

❌ Ensure each container has a configured liveness probe [1 occurrences]
💡 Missing property object `livenessProbe` - add a properly configured livenessProbe to catch possible deadlocks

❌ Ensure workload has valid label values [1 occurrences]
💡 Incorrect value for key(s) under `labels` - the values syntax is not valid so it will not be accepted by the Kubernetes engine

❌ Ensure each container image has a pinned (tag) version [1 occurrences]
💡 Incorrect value for key `image` - specify an image version to avoid unpleasant "version surprises" in the future

+-----------------------------------+-------------------------------------------------+
| Enabled rules in policy "default" | 21                                              |
| Configs tested against policy     | 1                                               |
| Total rules evaluated             | 21                                              |
| Total rules failed                | 4                                               |
| Total rules passed                | 17                                              |
| See all rules in policy           |                                                 |
+-----------------------------------+-------------------------------------------------+
Interesting? Well that was just a glimpse.
Note the URL it generates at the end of the table -
This is a unique ID assigned to your system. It is stored in a config file in the home folder.
# cat .datree/config.yaml
token: t4e73q9ZxkXhKhcg4vYHDF
Open the link in a browser. It prompts us to log in with Google / GitHub. As we log in, this unique ID is connected with the new account on Datree. We can now tailor the tool from the web UI. Also, we will see the reports from all the tests.
There we can see detailed setup for the tests. We can alter that and the same is used when the yaml files are evaluated. We can choose what we feel is important and skip what we feel can be ignored. If we want, we can be a rebel and allow a class of errors to go through. Datree gives us that flexibility as well.
Apart from the default policy available to us, we can define more custom policies that can be triggered as per our need.
Datree provides "Filters" for all the above mentioned potential issues, and many more.
On the left panel, we can see a link for "History". Click on it, and we will see the history of all the validations that it has performed so far. So we can just view the status of policy checks right here - without having to open the "black screen"
Every time we invoke the datree command, it connects to the cloud with this ID and pulls the required configuration, and then uploads the report for the run.
As we saw above, we can trigger datree in a single command. Much more than that, it also provides wonderful compatibility with most of the configuration management tools and managed Kubernetes deployments like AKS, EKS and GKE. And of course, we cannot forget Helm when working with Kubernetes. Datree provides a simple Helm plugin.
Datree has elaborate documentation and a set of "How to" tutorials on their website. You can refer to them to quickly set up any feature that you want.
Most of the applications across the globe have a lot of such misconfigurations - that will surely bring a nasty disaster some day. We knew our application had this problem. But we underestimated the extent of damage that it could cause. We were just procrastinating, sitting on a timebomb!
Now I tell everyone, don't do what we did. Don't wait for the disaster. Automate the configuration checks and enjoy your weekends. | https://blog.thewiz.net/prevent-configuration-errors-in-kubernetes | CC-MAIN-2022-05 | en | refinedweb |
SYNOPSIS

    $class->load();

DESCRIPTION

Class API

name
    The "name" method returns the name of the class as originally specified in the constructor.

version
    Find the version for the class. Does not check that the class is loaded (at this time).
    Returns the version on success, "undef" if the class does not define a $VERSION or the class is not loaded.

isa $class
    Checks to see if the class is a subclass of another class. Does not check that the class is loaded (at this time).
    Returns true/false as for "UNIVERSAL::isa".

can $method
    Checks to see if a particular method is defined for the class.
    Returns a "CODE" ref to the function if the method is available, or false if the class does not have that method available.

installed
    Checks to see if the class is installed on this machine.

loaded
    Checks to see if the class is loaded.

filename
    Returns the base filename for a class. For example, for the class "Foo::Bar", "filename" would return "Foo/Bar.pm".
    The "filename" method is platform neutral, it should always return the filename in the correct format for your platform.

resolved_filename @extra_paths

loaded_filename
    If the class is loaded, returns the name of the file that it was originally loaded from.
    Returns false if the class is not loaded, or did not have its own file.

functions
    Returns a list of the functions in the class's immediate namespace.

function_refs
    Returns a list of references to all the functions in the class's immediate namespace.
    Returns a reference to an array of CODE refs of the functions on success, or "undef" on error or if the class is not loaded.

function_exists $function
    Checks to see if the named function exists in the class's immediate namespace.

subclasses

super_path
    The "super_path" method is a straight pass through to the "Class::ISA::super_path" function. Returns an ordered list of class names, with no duplicates. The list does NOT include the class itself, or the UNIVERSAL class.

self_and_super_path
    As above, but includes ourself at the beginning of the path. Directly passes through to Class::ISA.

full_super_path

BUGS

No known bugs. Additional feature requests are being taken.

SUPPORT

Bugs should be reported via the CPAN bug tracking system
<>
For other inquiries, contact the author.
Recently I saw this tweet from Cecil Phillip:
This was news to me as well. I have used something similar in many of my applications, but I didn't know there was an extension method to do the hard work for you!
In this post I show how to use
GetDebugView() to work out where your configuration values have come from, walk through the source code, and show a simple way to expose the configuration as an endpoint in your ASP.NET Core app. In the next post I'll show another (safer) way to expose this data.
What does
IConfigurationRoot.GetDebugView() do?
GetDebugView() is an extension method on
IConfigurationRoot that returns a string describing the application configuration. This string displays all of the configuration keys in your application, the associated value, and the source of the value, be it appsettings.json or environment variables for example. A single row looks something like this:
AllowedHosts=* (JsonConfigurationProvider for 'appsettings.json' (Optional))
The key,
AllowedHosts comes first, followed by the value
*, and the source of the value—in this case the appsettings.json file. Note that this shows the source of the final value in the configuration object.
Note that it's possible that this value may be overwriting a value from a different configuration provider. This view only shows the source of the final value.
Let's look at a bigger example. Let's take a "standard" appsettings.json file from a typical .NET template:
{ "Logging": { "LogLevel": { "Default": "Information", "Microsoft": "Warning", "Microsoft.Hosting.Lifetime": "Information" } }, "AllowedHosts": "*" }
If you call
IConfiguration.GetDebugView() you get a
string that looks something like this:
AllowedHosts=* (JsonConfigurationProvider for 'appsettings.json' (Optional))
ALLUSERSPROFILE=C:\ProgramData (EnvironmentVariablesConfigurationProvider)
applicationName=temp (Microsoft.Extensions.Configuration.ChainedConfigurationProvider)
ASPNETCORE_ENVIRONMENT=Development (EnvironmentVariablesConfigurationProvider)
ASPNETCORE_URLS=; (EnvironmentVariablesConfigurationProvider)
contentRoot=C:\repos\temp (Microsoft.Extensions.Configuration.ChainedConfigurationProvider)
DOTNET_ROOT=C:\Program Files\dotnet (EnvironmentVariablesConfigurationProvider)
Logging:
  LogLevel:
    Default=Warning (JsonConfigurationProvider for 'secrets.json' (Optional))
    Microsoft=Warning (JsonConfigurationProvider for 'appsettings.Development.json' (Optional))
    Microsoft.Hosting.Lifetime=Information (JsonConfigurationProvider for 'appsettings.Development.json' (Optional))
MySecretValue=TOPSECRET (JsonConfigurationProvider for 'secrets.json' (Optional))
...
This small example shows configuration values coming from 5 different locations:
- appsettings.json (
JsonConfigurationProvider)
- appsettings.Development.json (
JsonConfigurationProvider)
- secrets.json (
JsonConfigurationProvidervia the user secrets provider)
- Environment Variables (
EnvironmentVariablesConfigurationProvider)
- In memory values (
ChainedConfigurationProvider)
It's also worth noting how "sections" in the configuration are displayed. For example, the
Logging and
LogLevel sections from appsettings.json:
Logging:
  LogLevel:
    Default=Warning (JsonConfigurationProvider for 'secrets.json' (Optional))
I was interested to know exactly how this function works, so in the next section we dig into the source code.
Behind the source code of
GetDebugView()
The
GetDebugView() extension method has been available since .NET Core 3.0. The following shows the code as of .NET 5.0. I show the whole method initially, and then walk through the code afterwards.
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Microsoft.Extensions.Configuration
{
    public static class ConfigurationRootExtensions
    {
        public static string GetDebugView(this IConfigurationRoot root)
        {
            void RecurseChildren(
                StringBuilder stringBuilder,
                IEnumerable<IConfigurationSection> children,
                string indent)
            {
                foreach (IConfigurationSection child in children)
                {
                    (string Value, IConfigurationProvider Provider) valueAndProvider = GetValueAndProvider(root, child.Path);

                    if (valueAndProvider.Provider != null)
                    {
                        stringBuilder
                            .Append(indent)
                            .Append(child.Key)
                            .Append('=')
                            .Append(valueAndProvider.Value)
                            .Append(" (")
                            .Append(valueAndProvider.Provider)
                            .AppendLine(")");
                    }
                    else
                    {
                        stringBuilder
                            .Append(indent)
                            .Append(child.Key)
                            .AppendLine(":");
                    }

                    RecurseChildren(stringBuilder, child.GetChildren(), indent + "  ");
                }
            }

            var builder = new StringBuilder();
            RecurseChildren(builder, root.GetChildren(), "");
            return builder.ToString();
        }

        private static (string Value, IConfigurationProvider Provider) GetValueAndProvider(
            IConfigurationRoot root,
            string key)
        {
            foreach (IConfigurationProvider provider in root.Providers.Reverse())
            {
                if (provider.TryGet(key, out string value))
                {
                    return (value, provider);
                }
            }

            return (null, null);
        }
    }
}
The
GetDebugView() method starts by defining a local function,
RecurseChildren() which performs the bulk of the work of the method.
Local functions are private methods that can only be called by the member they're nested in. In this case, that means
RecurseChildrencan only be called by
GetDebugView().
We'll come back to the
RecurseChildren() method shortly, but first let's look at the rest of the
GetDebugView() method:
public static string GetDebugView(this IConfigurationRoot root)
{
    void RecurseChildren(StringBuilder stringBuilder, IEnumerable<IConfigurationSection> children, string indent)
    {
        /* Shown later */
    }

    var builder = new StringBuilder();
    RecurseChildren(builder, root.GetChildren(), "");
    return builder.ToString();
}
So the rest of the method
- Creates a
StringBuilder()to hold the output
- Calls
RecurseChildren()to build the output, passing in the
StringBuilderand the immediate children of the configuration root.
- Creates the final string by calling
StringBuilder.ToString().
The
RecurseChildren method, while it takes up a lot of lines, is using a relatively simple recursive algorithm to walk all the keys in the configuration object:
- For every key in the section
- Try and get the value for the key, as well as the provider that added the key
- If the key has an associated provider, print the key and value
- If the key doesn't have a provider, then it must be a "section". Print the section name, fetch the section's children, and call
RecurseChildren()over those child keys.
Once the loop over the "top-level" keys is complete, all the configuration values have been iterated and printed to the
StringBuilder. The
RecurseChildren method uses the
indent parameter to keep track of how "deep" into the configuration sections the method is, creating the correct structure for the
Logging section for example.
I have written previously about this approach to "Creating an ASCII art tree in C#".
The
RecurseChildren() method finds the provider and value for a given configuration key using the helper method
GetValueAndProvider(). This method is called using the current configuration key, and the implicitly captured
IConfigurationRoot variable provided in the
GetDebugView() method.
GetDebugView()is an extension method on
IConfigurationRootnot on
IConfiguration(the interface you typically interact with in ASP.NET Core apps).
IConfigurationRoothas access to the underlying configuration providers (in addition to the configuration values themselves), whereas
IConfigurationdoes not.
GetValueAndProvider() iterates through each of the registered configuration providers in reverse, looking for a configuration value with the required
key. If the key is found, the associated value and provider are returned. If no provider is found, then the value is inferred to be a "section" with no associated value.
private static (string Value, IConfigurationProvider Provider) GetValueAndProvider(
    IConfigurationRoot root,
    string key)
{
    foreach (IConfigurationProvider provider in root.Providers.Reverse())
    {
        if (provider.TryGet(key, out string value))
        {
            return (value, provider);
        }
    }

    return (null, null);
}
The providers are iterated in reverse because of a feature of the configuration system in ASP.NET Core whereby later configuration providers "overwrite" the values added by earlier configuration providers. This enables you to provide a "default" value in appsettings.json, and then "overwrite" it with an environment variable, for example.
The "reverse" iteration mirrors the way that the
ConfigurationRootiterates providers to find a configuration key.
That covers both the basics and details of
GetDebugView(), so finally let's look at a practical way of using it in your applications.
Exposing the debug view in your application
The information provided by
GetDebugView() can be very useful when you need to debug a configuration problem in your application—being able to see exactly where a configuration value comes from is invaluable when things aren't working as you expect.
One obvious approach is to expose an endpoint in your ASP.NET Core app where you can query for this debug view. In the following example we use the lightweight
MapGet method to expose a simple endpoint:
public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services) { }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        app.UseRouting();
        app.UseAuthorization();
        app.UseEndpoints(endpoints =>
        {
            if (env.IsDevelopment())
            {
                endpoints.MapGet("/debug-config", ctx =>
                {
                    var config = (Configuration as IConfigurationRoot).GetDebugView();
                    return ctx.Response.WriteAsync(config);
                });
            }
        });
    }
}
When you call this endpoint, you'll be able to see your application's configuration in the browser:
This is a pretty basic example, but there's a couple of important takeaways:
- You definitely shouldn't be exposing this data in a production environment. Your config will contain connection strings and secrets, so always place anything like this behind an
IsDevelopment()flag!
- You need to cast the injected
IConfigurationto
IConfigurationRootbefore you can call
GetDebugView().
That last point is because although the
ConfigurationRoot implementation used in .NET Core 3.0+ implements
IConfigurationRoot, it's only registered in the DI container as an
IConfiguration object.
Be aware that this is technically a configuration detail, and that there's no reason you will definitely be able to cast an
IConfigurationobject to
IConfigurationRoot. Realistically, it's unlikely that implementation will change significantly though, so it's probably pretty safe.
Exposing this debug view of configuration can be very useful, and it's something I've done often in my applications, but it does make me a little nervous exposing it via an API. In the next post I'll describe another way of exposing these details; using Oakton's
Describe command.
Summary
In this post I discussed the
IConfigurationRoot.GetDebugView() extension method, and walked through its implementation. This method lists all of the configuration keys and values in your app, as well as the configuration provider that added each key. This can be very useful for working out why a configuration value doesn't have the value you expect, or for spotting typos in your configuration settings. I also described how you can expose this data as an API endpoint using a lightweight API.
Here’s the reality, billions of credentials have been leaked or stolen and are now easily downloaded online by anyone. Many of these databases of identities include passwords in plain text, while others are one-way hashed. One-way hashing is better (we’ll get to why in a second), but it is only as secure as is mathematically feasible. Let’s take a look at one-way hashing algorithms and how computers handle them.
Hashing
A hash by definition is a function that can map data of an arbitrary size to data of a fixed size. SHA2 is a hashing algorithm that uses various bit-wise operations on any number of bytes to produce a fixed sized hash. For example, the SHA-256 algorithm produces a 256 bit result. The algorithm was designed specifically so that going from a hash back to the original bytes is infeasible. Developers use an SHA2 hash so that instead of storing a plain text password, they only store the hash. When a user is authenticated, the plain text password they type into the login form is hashed, and because the algorithm will always produce the same hash result given the same input, comparing this hash to the hash in the database tells us the password is correct.
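As a concrete illustration, here is a minimal Java sketch of one-way hashing with the JDK's built-in MessageDigest (the class name and output handling are just for demonstration):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class Sha256Example {
  public static void main(String... args) throws Exception {
    MessageDigest digest = MessageDigest.getInstance("SHA-256");
    byte[] hash = digest.digest("password".getBytes(StandardCharsets.UTF_8));

    // Convert the raw bytes to the usual lowercase hex representation.
    StringBuilder hex = new StringBuilder();
    for (byte b : hash) {
      hex.append(String.format("%02x", b));
    }
    System.out.println(hex); // the same input always yields the same hash
  }
}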
Cracking Passwords
While one-way hashing means we aren’t storing plain text passwords, it is still possible to determine the original plain text password from a hash. Next, we’ll outline the two most common approaches of reversing a hash.
Lookup Tables
The first is called a lookup table, or sometimes referred to as a rainbow table. This method builds a massive lookup table that maps hashes to plain text passwords. The table is built by simply hashing every possible password combination and storing it in some type of database or data-structure that allows for quick lookups.
Here’s an example of a lookup table for SHA2 hashed passwords:
sha2_hash password ----------------------------------------------------------------------------------------- e150a1ec81e8e93e1eae2c3a77e66ec6dbd6a3b460f89c1d08aecf422ee401a0 123456 e5d316bfd85b606e728017e9b6532e400f5f03e49a61bc9ef59ec2233e41015a broncosfan123 561acac680e997e22634a224df2c0931cf80b9f7c60102a4e12e059f87558232 Letmein bdc511ea9b88c90b75c3ebeeb990e4e7c813ee0d5716ab0431aa94e9d2b018d6 newenglandclamchowder 9515e65f46cb737cd8c191db2fd80bbd05686e5992b241e8ad7727510b7142e6 opensesame 6b3a55e0261b0304143f805a24924d0c1c44524821305f31d9277843b8a10f4e password c194ead20ad91d30c927a34e8c800cb9a13a7e445a3ffc77fed14176edc3c08f xboxjunkie42
Using a lookup table, all the attacker needs to know is the SHA2 hash of the password and they can see if it exists in the table. For example, let’s assume for a moment that Netflix stores your password using an SHA2 hash. If Netflix is breached, their user database is likely now available to anyone with a good internet connection and a torrent client. Even a mediocre hacker now only needs to lookup the SHA2 hash associated with your Netflix account to see if it exists in their lookup table. This will reveal nearly instantly what your plain text password is for Netflix. Now, this hacker can log in to your Netflix account and binge watch all four seasons of Fuller House (“how rude!”). And he can also try this password on Hulu and HBO Go to see if you used the same email address and password for those accounts as well.
The best way to protect against this type of attack is to use what is called a salt. A salt is simply a bunch of random characters that you prepend to the password before it is hashed. Each password should have a different salt, which means that a lookup table is unlikely to have an entry for the combination of the salt and the password. This makes salts an ideal defense against lookup tables.
Here’s an example of a salt and the resulting combination of the password and the salt which is then hashed:
// Bad, no salt. Very bland.
sha2("password")
// 6b3a55e0261b0304143f805a24924d0c1c44524821305f31d9277843b8a10f4e

// Better, add a salt.
salt = ";L'-2!;+=#/5B)40/o-okOw8//3a"
toHash = ";L'-2!;+=#/5B)40/o-okOw8//3apassword"
sha2(toHash)
// f534e6bf84a638112e07e69861927ab624c0217c0655e4d3be07659bcf6c1c07
Now that we have added the salt, the “password” that we actually generated the hash from was the String
;L'-2!;+=#/5B)40/o-okOw8//3apassword. This String is long, complex and contains a lot of random characters. Therefore, it is nearly impossible that the hacker that created the lookup table would have generated the hash for the String
;L'-2!;+=#/5B)40/o-okOw8//3apassword.
Brute Force
The second method that attackers use to crack passwords is called brute force cracking. This means that the attacker writes a computer program that can generate all possible combinations of characters that can be used for a password and then computes the hash for each combination. This program can also take a salt if the password was hashed with a salt. The attacker then runs the program until it generates a hash that is the same as the hash from the database. Here’s a simple Java program for cracking passwords. We left out some detail to keep the code short (such as all the possible password characters), but you get the idea.
import org.apache.commons.codec.digest.DigestUtils;

public class PasswordCrack {
  public static final char[] PASSWORD_CHARACTERS = new char[] {'a', 'b', 'c', 'd'};

  public static void main(String... args) {
    String salt = args[0];
    String hashFromDatabase = args[1].toUpperCase();

    for (int i = 6; i <= 8; i++) {
      char[] ca = new char[i];
      fillArrayHashAndCheck(ca, 0, salt, hashFromDatabase);
    }
  }

  private static void fillArrayHashAndCheck(char[] ca, int index, String salt, String hashFromDatabase) {
    for (int i = 0; i < PASSWORD_CHARACTERS.length; i++) {
      ca[index] = PASSWORD_CHARACTERS[i];
      if (index < ca.length - 1) {
        fillArrayHashAndCheck(ca, index + 1, salt, hashFromDatabase);
      } else {
        String password = salt + new String(ca);
        String sha256Hex = DigestUtils.sha256Hex(password).toUpperCase();
        if (sha256Hex.equals(hashFromDatabase)) {
          System.out.println("plain text password is [" + password + "]");
          System.exit(0);
        }
      }
    }
  }
}
This program will generate all the possible passwords with lengths between 6 and 8 characters and then hash each one until it finds a match. This type of brute-force hacking takes time because of the number of possible combinations.
Password complexity vs. computational power
Let’s bust out our TI-85 calculators and see if we can figure out how long this program will take to run. For this example we will assume the passwords can only contain ASCII characters (uppercase, lowercase, digits, punctuation). This set is roughly 100 characters (this is rounded up to make the math easier to read). If we know that there are at least 6 characters and at most 8 characters in a password, then all the possible combinations can be represented by this expression:
possiblePasswords = 100^8 + 100^7 + 100^6
The result of this expression is equal to
10,101,000,000,000,000. This is quite a large number, north of 10 quadrillion to be a little more precise, but what does it actually mean when it comes to our cracking program? This depends on the speed of the computer the cracking program is running on and how long it takes the computer to execute the SHA2 algorithm. The algorithm is the key component here because the rest of the program is extremely fast at creating the passwords.
Here’s where things get dicey. If you run a quick Google search for “fastest bitcoin rig” you’ll see that these machines are rated in terms of the number of hashes they can perform per second. The bigger ones can be rated as high as
44 TH/s. That means it can generate 44 tera-hashes per second or
44,000,000,000,000.
Now, if we divide the total number of passwords by the number of hashes we can generate per second, we are left with the total time it takes a Bitcoin rig to generate the hashes for all possible passwords. In our example above, this equates to:
bitcoinRig = 4.4e13
possiblePasswords = 100^8 + 100^7 + 100^6 = 1.0101e16
numberOfSeconds = possiblePasswords / bitcoinRig = ~230
numberOfMinutes = numberOfSeconds / 60 = ~4
This means that using this example Bitcoin rig, we could generate all the hashes for a password between 6 and 8 characters in length in roughly 4 minutes. Feeling nervous yet? Let's add one additional character and see how long it takes to hash all possible passwords between 6 and 9 characters.
bitcoinRig = 4.4e13
possiblePasswords = 100^9 + 100^8 + 100^7 + 100^6 = 1.010101E18
numberOfSeconds = possiblePasswords / bitcoinRig = 22,956
numberOfMinutes = numberOfSeconds / 60 = ~383
numberOfHours = numberOfMinutes / 60 = ~6
By adding one additional character to the potential length of the password we increased the total compute time from 4 minutes to 6 hours. This is nearing a 100x increase in computational time to use the brute force strategy. You probably can see where this is going. To defeat the brute force strategy, you simply need to make it improbable to calculate all possible password combinations.
Let’s get crazy and make a jump to 16 characters:
bitcoinRig = 4.4e13
possiblePasswords = 100^16 + 100^15 ... 100^7 + 100^6 = 1e32
numberOfSeconds = possiblePasswords / bitcoinRig = 2.27e18
numberOfMinutes = numberOfSeconds / 60 = 3.78e16
numberOfHours = numberOfMinutes / 60 = 630,000,000,000,000 or 630 trillion
numberOfDays = numberOfHours / 24 = 26,250,000,000,000 or 26.25 trillion days
numberOfYears = numberOfDays / 365 = 71,917,808,219 or 71.9 billion years
To boil down our results, if we take these expressions and simplify them, we can build an equation that solves for any length password.
numberOfSeconds = 100^lengthOfPassword / computeSpeed
This equation shows that as the password length increases, the number of seconds to brute-force attack the password also increases since the computer’s speed to execute the hashing algorithm is a fixed divisor. The increase in password complexity (length and possible characters) is called entropy. As the entropy increases, the time required to brute-force attack a password also increases.
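If it helps to see that equation as code, here is a small Java sketch (class name illustrative) that tabulates the estimate for a range of password lengths:

public class CrackTime {
  public static void main(String... args) {
    double computeSpeed = 4.4e13; // hashes per second, our example Bitcoin rig
    for (int length = 6; length <= 16; length++) {
      double possiblePasswords = Math.pow(100, length);
      double seconds = possiblePasswords / computeSpeed;
      System.out.printf("%2d chars: %.3g seconds (%.3g years)%n",
          length, seconds, seconds / (60 * 60 * 24 * 365.0));
    }
  }
}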
What does all this math mean?
Great question. Here’s the answer:
If you allow the use of short passwords, which makes them easy to remember, you need to decrease the value of
computeSpeedin order to maintain a level of security.
If you require longer randomized passwords, such as those created by a password generator, you don’t need to change anything because the value of
computeSpeedbecomes much less relevant.
Let’s assume we are going to allow users to select short passwords. This means that we need to decrease the
computeSpeed value which means we need to slow down the computation of the hash. How do we accomplish that?
The way that the security industry has been solving this problem is by continuing to increase the algorithmic complexity, which in turn causes the computer to spend more time generating one-way hashes. Examples of these algorithms include BCrypt, SCrypt, PBKDF2, and others. These algorithms are specifically designed to cause the CPU/GPU of the computer to take an excessive amount of time generating a single hash.
If we can reduce the
computeSpeed value from
4.4e13 to something much smaller such as
1,000, our compute time for passwords between 6 and 8 characters long become much better. In other words, if we can slow down the computer so it takes longer for each hash it has to generate, we can increase the length of time it will take to calculate all of the possible passwords.
computeSpeed = 1e3
possiblePasswords = 100^8 + 100^7 + 100^6 = 1.0101e16
numberOfSeconds = possiblePasswords / computeSpeed = 10,101,000,000,000 or 10.1 trillion
numberOfMinutes = numberOfSeconds / 60 = 168,350,000,000 or 168.35 billion
numberOfHours = numberOfMinutes / 60 = 2,805,833,333 or 2.8 billion
numberOfDays = numberOfHours / 24 = 116,909,722 or 116.9 million
numberOfYears = numberOfDays / 365 = 320,300
Not bad. By slowing down the hash computation, we have increased the time from 4 minutes using our Bitcoin rig to 320,300 years. In this comparison you can see the practical difference between using SHA2 and BCrypt. BCrypt is purpose built to be extremely slow in comparison to SHA2 and other more traditional hashing algorithms.
And here lies the debate that the security industry has been having for years:
Do we allow users to use short passwords and put the burden on the computer, generating hashes as slowly as is still reasonable? Or do we force users to use long passwords and just use a fast hashing algorithm like SHA2 or SHA512?
Some in the industry have argued that enough is enough with consuming massive amounts of CPU and GPU cycles simply computing hashes for passwords. By forcing users to use long passwords, we get back a lot of computing power and can reduce costs by shutting off the 42 servers we have to run to keep up with login volumes.
Others claim that this is a bad idea for a variety of reasons including:
- Humans don’t like change
- The risk of simple algorithms like SHA2 is still too high
- Simple algorithms might be currently vulnerable to attacks or new attacks might be discovered in the future
At the time of this writing, there are still numerous simple algorithms that have not been attacked, meaning that no one has figured out a way to reduce the need to compute every possible hash. Therefore, it is still a safe assertion that using a simple algorithm on a long password is secure.
How FusionAuth does it
FusionAuth defaults to
PBKDF2 with
24,000 iterations as the default password hashing scheme. This algorithm is quite complex and with the high number of iterations, it is sufficiently slow such that long and short passwords are challenging to brute force attack. FusionAuth also allows you to change the default algorithm as well as upgrade the algorithm for a user when they log in. This allows you to upgrade your application's password security over time.
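For illustration, here is a minimal sketch of PBKDF2 with 24,000 iterations using the JDK's SecretKeyFactory. The salt handling and key length shown are assumptions for the example, not FusionAuth's actual implementation:

import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class Pbkdf2Example {
  public static void main(String... args) throws Exception {
    byte[] salt = new byte[24];
    new SecureRandom().nextBytes(salt); // a unique salt per password

    PBEKeySpec spec = new PBEKeySpec(
        "password".toCharArray(), salt, 24_000, 256);
    SecretKeyFactory factory =
        SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
    byte[] hash = factory.generateSecret(spec).getEncoded();

    System.out.println(Base64.getEncoder().encodeToString(hash));
  }
}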
FusionAuth has you covered
If you are looking for a solution that lets you manage and configure password hashing algorithms, FusionAuth has you covered.
@auth0/nextjs-auth0
The Auth0 Next.js SDK is a library for implementing user authentication in Next.js applications.
Table of Contents
- Installation
- Getting Started
- Documentation
- Contributing
- Vulnerability Reporting
- What is Auth0?
- License
Installation
npm install @auth0/nextjs-auth0
This library supports the following tooling versions:
Node.js:
^10.13.0 || >=12.0.0
Next.js:
>=10
Getting Started

Auth0 Configuration
Create a Regular Web Application in the Auth0 Dashboard.
If you're using an existing application, verify that you have configured the following settings in your Regular Web Application:
- Click on the "Settings" tab of your application's page.
- Configure the following URLs for your application under the "Application URIs" section: Allowed Callback URLs: http://localhost:3000/api/auth/callback, Allowed Logout URLs: http://localhost:3000/
Take note of the Client ID, Client Secret, and Domain values under the "Basic Information" section. You'll need these values in the next step.
Basic Setup
You need to allow your Next.js application to communicate properly with Auth0. You can do so by creating a
.env.local file under your root project directory that defines the necessary Auth0 configuration values as follows:
# A long, secret value used to encrypt the session cookie;
# execute the following command to generate a suitable string for the
# AUTH0_SECRET value:
#   node -e "console.log(crypto.randomBytes(32).toString('hex'))"
AUTH0_SECRET='LONG_RANDOM_VALUE'
# The base url of your application
AUTH0_BASE_URL='http://localhost:3000'
# The url of your Auth0 tenant domain
AUTH0_ISSUER_BASE_URL='https://YOUR_AUTH0_DOMAIN.auth0.com'
# Your Auth0 application's Client ID
AUTH0_CLIENT_ID='YOUR_AUTH0_CLIENT_ID'
# Your Auth0 application's Client Secret
AUTH0_CLIENT_SECRET='YOUR_AUTH0_CLIENT_SECRET'
You can see a full list of Auth0 configuration options in the "Configuration properties" section of the "Module config" document.
For more details about loading environmental variables in Next.js, visit the "Environment Variables" document.
Go to your Next.js application and create a catch-all, dynamic API route handler under the
/pages/api directory:
Create an
authdirectory under the
/pages/api/directory.
Create a
[...auth0].jsfile under the newly created
authdirectory.
The path to your dynamic API route file would be
/pages/api/auth/[...auth0].js. Populate that file as follows:
import { handleAuth } from '@auth0/nextjs-auth0';

export default handleAuth();
Executing
handleAuth() creates the following route handlers under the hood that perform different parts of the authentication flow:
/api/auth/login: Your Next.js application redirects users to your Identity Provider for them to log in (you can optionally pass a
returnToparameter to return to a custom relative URL after login, eg
/api/auth/login?returnTo=/profile).
/api/auth/callback: Your Identity Provider redirects users to this route after they successfully log in.
/api/auth/logout: Your Next.js application logs out the user.
/api/auth/me: You can fetch user profile information in JSON format.
Wrap your
pages/_app.js component with the
UserProvider component:
// pages/_app.js
import React from 'react';
import { UserProvider } from '@auth0/nextjs-auth0';

export default function App({ Component, pageProps }) {
  return (
    <UserProvider>
      <Component {...pageProps} />
    </UserProvider>
  );
}

You can now determine if a user is authenticated by checking that the user object returned by the useUser() hook is defined. You can also log in or log out your users from the frontend layer of your Next.js application by redirecting them to the appropriate automatically-generated route:

<a href="/api/auth/login">Login</a>
<a href="/api/auth/logout">Logout</a>

Next linting rules might suggest using the Link component instead of an anchor tag. The Link component is meant to perform client-side transitions between pages. As the links point to an API route and not to a page, you should keep them as anchor tags.
There are two additional ways to check for an authenticated user; one for Next.js pages using withPageAuthRequired and one for Next.js API routes using withAPIAuthRequired.
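For example, a minimal sketch of protecting an API route (the file name and response shape are illustrative):

// pages/api/protected.js
import { withApiAuthRequired, getSession } from '@auth0/nextjs-auth0';

export default withApiAuthRequired(async function handler(req, res) {
  // Only reached when the request carries a valid session cookie.
  const { user } = getSession(req, res);
  res.json({ user });
});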
For other comprehensive examples, see the EXAMPLES.md document.
Documentation

API Reference
Server-side methods:
- handleAuth
- handleLogin
- handleCallback
- handleLogout
- handleProfile
- withApiAuthRequired
- withPageAuthRequired
- getSession
- getAccessToken
- initAuth0
Client-side methods/components:

- useUser
- withPageAuthRequired
- UserProvider
Visit the auto-generated API Docs for more details.
Cookies and Security
All cookies will be set to
HttpOnly, SameSite=Lax and will be set to
Secure if the application's
AUTH0_BASE_URL is
https.
The
HttpOnly setting will make sure that client-side JavaScript is unable to access the cookie to reduce the attack surface of XSS attacks.
The
SameSite=Lax setting will help mitigate CSRF attacks. Learn more about SameSite by reading the "Upcoming Browser Behavior Changes: What Developers Need to Know" blog post.
Caching and Security
Many hosting providers will offer to cache your content at the edge in order to serve data to your users as fast as possible. For example Vercel will cache your content on the Vercel Edge Network for all static content and Serverless Functions if you provide the necessary caching headers on your response.
It's generally a bad idea to cache any response that requires authentication, even if the response's content appears safe to cache there may be other data in the response that isn't.
This SDK offers a rolling session by default, which means that any response that reads the session will have a
Set-Cookie header to update the cookie's expiry. Vercel and potentially other hosting providers include the
Set-Cookie header in the cached response, so even if you think the response's content can be cached publicly, the response's
Set-Cookie header cannot.
Check your hosting provider's caching rules, but in general you should never cache responses that either require authentication or even touch the session to check authentication (eg when using
withApiAuthRequired,
withPageAuthRequired or even just
getSession or
getAccessToken).
Error Handling and Security
The default server side error handler for the
/api/auth/* routes prints the error message to screen, eg
try {
  await handler(req, res);
} catch (error) {
  res.status(error.status || 400).end(error.message);
}
Because the error can come from the OpenID Connect
error query parameter we do some basic escaping which makes sure the default error handler is safe from XSS.
If you write your own error handler, you should not render the error message without using a templating engine that will properly escape it for other HTML contexts first.
Base Path and Internationalized Routing
With Next.js you can deploy a Next.js application under a sub-path of a domain using Base Path and serve internationalized (i18n) routes using Internationalized Routing.
If you use these features the urls of your application will change and so the urls to the nextjs-auth0 routes will change. To accommodate this there are various places in the SDK that you can customise the url.
For example if
basePath: '/foo' you should prepend this to the
loginUrl and
profileUrl specified in your
Auth0Provider
// _app.jsx
function App({ Component, pageProps }) {
  return (
    <UserProvider loginUrl="/foo/api/auth/login" profileUrl="/foo/api/auth/me">
      <Component {...pageProps} />
    </UserProvider>
  );
}
Also, any links to login or logout should include the
basePath:
<a href="/foo/api/auth/login">Login</a><br /> <a href="/foo/api/auth/logout">Logout</a>
You should configure baseUrl (or the
AUTH0_BASE_URL environment variable) eg
# .env.local
AUTH0_BASE_URL=http://localhost:3000/foo
For any pages that are protected with the Server Side withPageAuthRequired you should update the
returnTo parameter depending on the
basePath and
locale if necessary.
// ./pages/my-ssr-page.jsx
export default MySsrPage = () => <></>;

const getFullReturnTo = (ctx) => {
  // TODO: implement getFullReturnTo based on the ctx.resolvedUrl, ctx.locale
  // and your next.config.js's basePath and i18n settings.
  return '/foo/en-US/my-ssr-page';
};

export const getServerSideProps = (ctx) => {
  const returnTo = getFullReturnTo(ctx.req);
  return withPageAuthRequired({ returnTo })(ctx);
};
Comparison with the Auth0 React SDK
We also provide an Auth0 React SDK, auth0-react, which may be suitable for your Next.js application.
The SPA security model used by
auth0-react is different from the Web Application security model used by this SDK. In short, this SDK protects pages and API routes with a cookie session (see "Cookies and Security"). A SPA library like
auth0-react will store the user's ID Token and Access Token directly in the browser and use them to access external APIs directly.
You should be aware of the security implications of both models. However, auth0-react may be more suitable for your needs if you meet any of the following scenarios:
- You are using Static HTML Export with Next.js.
- You do not need to access user data during server-side rendering.
- You want to get the access token and call external API's directly from the frontend layer rather than using Next.js API Routes as a proxy to call external APIs
Testing
By default, the SDK creates and manages a singleton instance to run for the lifetime of the application. When testing your application, you may need to reset this instance, so its state does not leak between tests.
If you're using Jest, we recommend using
jest.resetModules() after each test. Alternatively, you can look at creating your own instance of the SDK, so it can be recreated between tests.
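For example, a minimal sketch of that reset (placement in a setup file is up to you):

// jest.setup.js (illustrative): reset the module registry between tests so
// the SDK's singleton instance is recreated.
afterEach(() => {
  jest.resetModules();
});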
For end to end tests, have a look at how we use a mock OIDC Provider.
Deploying
For deploying, have a look at how we deploy our example app to Vercel.
Contributing
We appreciate feedback and contribution to this repo! Before you get started, please read the following:
Start by installing the dependencies of this project:
npm install
In order to build a release, you can run the following commands, and the output will be stored in the
dist folder:
npm run clean npm run lint npm run build
Additionally, you can also run tests:
npm run build:test # Build the Next.js test app npm run test npm run test:watch
What is Auth0?

Why Auth0? Because you should save time, be happy, and focus on what really matters: building your product.
License
This project is licensed under the MIT license. See the LICENSE file for more info.
Hi there,
I made a project, where I want to pull a thread with two Lego-Wheels. The hardware all works as it should, but I got a problem using the stepper, which I could not solve by myself and google didn’t help either:
If the speed of the stepper is set too low, it decreases the speed even more.
In this program I calculated the speed and steps with the distance to pull and the time-span, in which the thread should be pulled this distance:
#include <Stepper.h>

int SPU = 2048;
Stepper Motor(SPU, 3, 5, 4, 6);

double roundsPerMinute = 0;
double rounds = 0;
double distance = 20;        // in cm
double timeSpan = 1;         // in min
double wheelDiameter = 23.3; // in mm

void setup() {
  Serial.begin(9600);
  double distancePerRound = (wheelDiameter / 10) * PI;
  rounds = distance / distancePerRound;
  roundsPerMinute = rounds / timeSpan; // calculate speed and needed rotations
}

void loop() {
  Serial.println("Rounds per minute: " + String(roundsPerMinute));
  Serial.println("Rounds: " + String(rounds));
  Motor.setSpeed(roundsPerMinute);

  long steps = rounds * 2048; // calculate steps from rounds
  Serial.println("Steps: " + String(steps));

  long start = millis(); // start measuring time
  // I tested out that the .step() function can only receive values up to
  // 32767, so I call this function multiple times
  for (int i = 0; i < int(steps / 10000); i++) {
    Motor.step(10000);
  }
  Motor.step(steps % 10000);

  long neededTime = millis() - start; // stop measuring time
  Serial.println("Needed Time: " + String(neededTime / 1000.0));
  Serial.println();
}
This all works fine if the roundsPerMinute are higher than ca. 5, but when they are lower, the problem I explained appears.
To name an example:
distance = 147 cm
timeSpan = 6 min
→ real distance: correct
→ real time: 6 min 41 sec
I hope you can understand what I mean.
Stepper: 28BYJ-48
Driver: ULN2003
What do I do wrong?
Thank you in advance!
NAME
aio_cancel - cancel an outstanding asynchronous I/O request
SYNOPSIS
#include <aio.h>
int aio_cancel(int fd, struct aiocb *aiocbp);
Link with -lrt.
DESCRIPTION
The aio_cancel() function attempts to cancel outstanding asynchronous I/O requests for the file descriptor fd. If aiocbp is NULL, all such requests are canceled. Otherwise, only the request described by the control block pointed to by aiocbp is canceled.
VERSIONS
The aio_cancel() function is available since glibc 2.1.
ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO
POSIX.1-2001, POSIX.1-2008.
EXAMPLES
See aio(7).
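A minimal sketch of canceling all outstanding requests on a file descriptor (setup of the requests is omitted; the printed messages are illustrative):

/* cc example.c -lrt */
#include <aio.h>
#include <stdio.h>

int main(void) {
    int fd = 0; /* assume requests were queued on fd with aio_read(3) */
    int ret = aio_cancel(fd, NULL); /* NULL: cancel all requests on fd */
    switch (ret) {
    case AIO_CANCELED:    printf("all requests canceled\n"); break;
    case AIO_NOTCANCELED: printf("some requests still in progress\n"); break;
    case AIO_ALLDONE:     printf("all requests already completed\n"); break;
    default:              perror("aio_cancel");
    }
    return 0;
}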
SEE ALSO
aio_error(3), aio_fsync(3), aio_read(3), aio_return(3), aio_suspend(3), aio_write(3), lio_listio(3), aio(7)
Red Hat Bugzilla – Bug 110037
Won't start at all
Last modified: 2007-04-18 12:59:28 EDT
Description of problem:
Yum refuses to start at all.
Version-Release number of selected component (if applicable):
2.0.4-3
How reproducible:
100%
Actual results:
[~#] yum update
Traceback (most recent call last):
File "/usr/bin/yum", line 22, in ?
import yummain
ImportError: Bad magic number in /usr/share/yum/yummain.pyc
Additional info:
2.0.4-2 works perfect.
It doesn't seem to matter what arguments I give, I get the same result
anyway.
Hmmm, this smells like python-2.3.2.
What happens if you rename the *.pyc file, causing python to
recompile from the *.py?
On the opposite, it seems that this was a problem from running python
2.2.3. Upgrading python to 2.3.2 seems to have fixed it.
The reason for not upgrading python in the first place was that it
broke some dependencies. Hope this will be fixed too.
Well, yum works now. I'm closing this bug, marking it as NOTABUG. I
guess the dependencies of yum are wrong though if it needs python 2.3.2...
On 6/30/06, Guido van Rossum <guido at python.org> wrote:
> On 6/30/06, Steven Bethard <steven.bethard at gmail.com> wrote:
> > On 6/30/06, Guido van Rossum <guido at python.org> wrote:
> > > On 6/30/06, Steven Bethard <steven.bethard at gmail.com> wrote:

Oh, also, is the -1 only to replacing the global keyword? Or is it also to the idea of replacing globals() with a builtin pseudo-namespace object like the one above?

STeVe
--
I'm not *in*-sane. Indeed, I am so far *out* of sane that you appear a tiny blip on the distant coast of sanity.
        --- Bucky Katt, Get Fuzzy
Eclipse Community Forums: Virgo ignores OSGI-INF

Ward K Harold (2011-01-18T20:27:04-00:00):
I've got a collection of bundles which use the Blueprint Service - with the blueprint resources in OSGI-INF/blueprint directories - to do DI that deploy and work correctly in Karaf. When I deploy them in Virgo they show up in the Artifacts>Bundles view but they aren't actually doing anything, e.g., not listening for connections, etc. As a test I took one of the bundles, named listener, and created a 'spring' directory under META-INF and moved the OSGI-INF/blueprint/listener.xml file into it. After packaging, this bundle throws an org.xml.sax.SAXParseException: cvc-elt.1: Cannot find the declaration of element 'blueprint' exception. Ergo, it appears that Virgo is ignoring the OSGI-INF/blueprint directory and that there is some issue reading a standard blueprint specification which Karaf has no problems grokking. This is too bad since Virgo - given its Spring-DM heritage - seems more "production ready" than does Karaf. Is this a known problem, or will it be addressed in a future release? ... WkH

Re: Virgo ignores OSGI-INF - Borislav Kapukaranov (2011-01-18T20:51:35-00:00):
Hi, The behavior you experience is actually "by design". Currently Virgo uses SpringDM 1.2.1, which is not the same as Blueprint Service. The Blueprint Service reference implementation is SpringDM 2.0.0M2. This is the reason Virgo ignores the blueprint folder and also can't recognise the <blueprint> element. In order to utilize SpringDM in the current Virgo release you have to use proper SpringDM xmls, located in the META-INF/spring folder with all the proper beans definitions. As for your question will this be addressed in a future release - definitely. The Blueprint Service reference implementation (Gemini Blueprint) is currently being finalized - it should be completed in Q1 2011. Then we will plan an adoption path and at the end Virgo will fully support Blueprint Services once we replace the current SpringDM with the first release of Gemini Blueprint. Hopefully that means the 3.0.0 release of Virgo, but it could be further down the line. Nothing certain here. Best Regards, Borislav

Re: Virgo ignores OSGI-INF - Glyn Normington (2011-01-19T02:51:07-00:00):
I agree with Borislav's comments. Please note that the change is covered by bug 317943. You may care to comment on the bug to add your support for it (I don't think voting is enabled). Some users are already experimenting by deploying Gemini Blueprint M1 into Virgo in order to try out blueprint-based bundles. It doesn't seem to conflict too badly with Virgo's own use of Spring DM 1.2.1, possibly because the Spring DM 2.x support inside Gemini Blueprint ignores the 1.2.x namespace. If you try that approach, I would recommend putting the Gemini Blueprint bundles in repository/usr, creating a plan that references them and adding that plan to repository/usr, and then deploying that plan by adding it to the initialArtifacts property in the user region configuration file. See here for more details, although you'll need to adjust the namespace of the plan, assuming you are using Virgo 2.1.0.RELEASE. However, I wouldn't recommend using blueprint in production on Virgo until bug 317943 is implemented.

Re: Virgo ignores OSGI-INF - Ward K Harold (2011-01-19T21:54:19-00:00):
Thanks for the insight. I'll add my 2 cents to 317943 shortly. If I find some time I'll also take a run at doing Gemini Blueprint in Virgo.
nanpy 0.8-distribute
How to use
Serial communication
Nanpy autodetects the serial port for you, but you can specify another serial port and baudrate manually:
from nanpy import serial_manager
serial_manager.connect('/dev/ttyACM1')
- Author: Andrea Stagi
- Maintainer: Andrea Stagi
- Bug Tracker:
- Download URL:
- Keywords: arduino library prototype raspberry
- License:
MIT License, Copyright (c) 2012-2013
Search Criteria
Package Details: hugo 0.17-1
Dependencies (4)
- glibc
- git (git-git) (make)
- go (go-bin, go-cross, go-cross-all-platforms, go-cross-major-platforms, go-git) (make)
- pygmentize (optional) – syntax-highlight code snippets.
Required by (0)
Sources (1)
Latest Comments
fusion809 commented on 2016-11-09 12:55

Good question, fixed in my latest commit as I saw this issue. I also tried building without that makedepend and it worked fine.
ogarcia commented on 2016-11-09 12:24
Why 'mercurial' as makedepend? You can make hugo without mercurial.
fusion809 commented on 2016-11-04 00:27
What request would I have to make? Merge request? As there's only merge, orphan and deletion requests available.
neitsab commented on 2016-11-03 19:32
Hi fusion809, I just wanted to let you know that I took over the former "hugo" package and requested its move to "hugo-bin" so as to follow package naming guidelines. As a consequence, the "hugo" namespace is yours to take ! Feel free to submit a request to have your package renamed (adjustments to the PKGBUILD and .SRCINFO will be necessary). Cheers! | https://aur.archlinux.org/packages/hugo/?comments=all | CC-MAIN-2016-50 | en | refinedweb |
So I have the code to make the Reverse Polish Expression work
def rpn(x):
    stack = []
    operators = ['+', '-', '*']
    for i in x.split(' '):
        if i in operators:
            op1 = stack.pop()
            op2 = stack.pop()
            if i == '+': result = op2 + op1
            if i == '-': result = op2 - op1
            if i == '*': result = op2 * op1
            stack.append(result)
        else:
            stack.append(float(i))
    return stack.pop()
x = str(input("Enter a polish expression:"))
result = rpn(x)
print (result)
if x contains " ":
    print("error")
if x contains something other than integers and +, -, *:
    then print an error
You should use
x.split() instead of
x.split(' '), it will extract everything but the spaces from
x.
split() treats multiple successive spaces as one space (so one delimiter), while
split(' ') treats one space as one delimiter.
Here's the difference:
>>> print('   '.split(' '))
['', '', '', '']
>>> print('   '.split())
[]
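Putting that together with the validation being asked for, a minimal sketch (the error messages are placeholders):

def rpn_checked(x):
    stack = []
    operators = ['+', '-', '*']
    for i in x.split():
        if i in operators:
            if len(stack) < 2:
                raise ValueError("not enough operands")
            op1 = stack.pop()
            op2 = stack.pop()
            if i == '+': stack.append(op2 + op1)
            if i == '-': stack.append(op2 - op1)
            if i == '*': stack.append(op2 * op1)
        else:
            try:
                stack.append(float(i))
            except ValueError:
                raise ValueError("%r is not an integer or one of +, -, *" % i)
    if len(stack) != 1:
        raise ValueError("malformed expression")
    return stack.pop()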
Given that your code will be dealing only with single-digit numbers:
for i in (_ for _ in x if not _.isspace()): # your algorithm
If you'd like to raise an error:
for i in (_ if not _.isspace() else None for _ in x):
    if i is None:
        raise ValueError("Error!")
    # your algorithm here
glutSetWindowTitle man page
glutSetWindowTitle — Request changing the title of the current window
Library
OpenGLUT - window
Synopsis
#include <openglut.h>
void
glutSetWindowTitle(const char* title);
Parameters
title
New window title
Description
glutSetWindowTitle() requests that the window system change the title of the window.
Normally a window system displays a title for every top-level window in the system. The initial title is set when you call glutCreateWindow(). By means of this function you can set the titles for your top-level OpenGLUT windows.
Some window systems do not provide titles for windows, in which case this function may have no useful effect.
Because the effect may be delayed or lost, you should not count on the effect of this function. However, it can be a nice touch to use the window title bar for a one-line status bar in some cases. Use discretion.
If you just want one title for the window over the window's entire life, you should set it when you open the window with glutCreateWindow().
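For illustration, a minimal sketch of updating the title from an idle callback (the callback name and format string are just an example):

/* Using the title bar as a one-line status display. */
#include <stdio.h>
#include <openglut.h>

static void onIdle(void)
{
    static int frames = 0;
    char title[64];
    snprintf(title, sizeof(title), "demo - frame %d", ++frames);
    glutSetWindowTitle(title); /* a request only; may be delayed or ignored */
}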
Caveats
Only for managed, onscreen, top-level windows.
Not all window systems display titles.
May be ignored or delayed by window manager.
See Also
glutCreateWindow(3) glutSetIconTitle(3)
Referenced By
glutCreateWindow(3), glutSetIconTitle(3).
Update: this is the second version, it incorporates the original list, adds a couple of new items, and includes references to some useful feedback and patches that have already been prepared.

We'd like to share our current wish list of plumbing layer features we are hoping to see implemented in the near future in the Linux kernel and associated tools. Some items we can implement on our own, others are not our area of expertise, and we will need help getting them implemented.

Acknowledging that this wish list of ours only gets longer and not shorter, even though we have implemented a number of other features on our own in the previous years, we are posting this list here, in the hope to find some help.

If you happen to be interested in working on something from this list or able to help out, we'd be delighted. Please ping us in case you need clarifications or more information on specific items.

Thanks,
Kay, Lennart, Harald, David in the name of all the other plumbers

And here is the wish list, in no particular order:

tmpfs:
======

* support user quota on tmpfs to prevent DoS vulnerabilities on /tmp, /dev/shm, /run/user/$USER. This is kinda important. Idea: global RLIMIT_TMPFS_QUOTA over all mounted tmpfs file systems. NEW!

* support fallocate() properly: NEW!
  fallocate(5, 0, 0, 7663616) = -1 EOPNOTSUPP

fanotify:
=========

* events for renames NEW!

* allow safe unprivileged access NEW!

* pass information about the open flags to the file system monitors, in order to allow clients to figure out whether other applications opened files for writing or just read-only. NEW!

* allow to find out if a file actually was written to, when closed after opening it read-write NEW!

filesystems:
============

* (ioctl based?) interface to query and modify the label of a mounted FAT volume: A FAT label is implemented as a hidden directory entry in the file system, which needs to be renamed when changing the file system label. This is impossible to do from userspace without remounting. Hence we'd like to see a kernel interface that is available on the mounted file system mount point itself. Of course, bonus points, if this new interface can be implemented for other file systems as well.

* faster xattrs on ext2/3/4 (i.e. allow userspace to make use of xattr without paying the performance penalty for the seeks. Alex Larsson will provide you with the measurement data how xattr checking is magnitudes slower when trying to implement a simple file list). Suggestion: provide a simple flag in struct stat to inform userspace whether it is worth looking for xattrs (i.e. think STAT_XATTRS_FOUND or STAT_XATTRS_MAYBE) NEW!

mounting:
=========

* allow creation of read-only bind mounts in a single mount() call, instead of two NEW!

* Similar, allow configuration of namespace propagation settings for mount points in the initial mount() syscall, instead of always requiring two (which is racy, and ugly, and stuff). NEW!

memory management:
==================

* swappiness control as madvise() for individual memory pages NEW!

core kernel:
============

[PATCH] * hostname change notification:
  ;a=commitdiff;h=70b932563a9514b248cc71a29bd0907bf95b4a5e NEW!

[PATCH] * PR_SET_CHILD_SUBREAPER
  Reviewed and probably ready-to-merge patch: NEW!

* allow 64 bit PIDs / use 32 bit pids by default, in order to fix PID recycle vulnerabilities NEW!

* allow changing argv[] of a process without mucking with environ[]: Something like setproctitle() or a prctl() would be ideal. Of course it is questionable if services like sendmail make use of this, but otoh for services which fork but do not immediately exec() another binary, being able to rename these child processes in ps is of importance.

driver model:
=============

* CPU modaliases in /sys/devices/system/cpu/cpuX/modalias: useful to allow module auto-loading of e.g. cpufreq drivers and KVM modules. Andi Kleen has a patch to create the alias file itself. CPU 'struct sysdev' needs to be converted to 'struct device' and a 'struct bus_type cpu' needs to be introduced to allow proper CPU coldplug event replay at bootup. This is one of the last remaining places where automatic hardware-triggered module auto-loading is not available. And we'd like to see that fixed to make numerous ugly userspace work-arounds to achieve the same go away.

* export 'struct device_type fb/fbcon' of 'struct class graphics': Userspace wants to easily distinguish 'fb' and 'fbcon' from each other without the need to match on the device name.

security:
=========

[PATCH] * expose CAP_LAST_CAP somehow in the running kernel at runtime: Userspace needs to know the highest valid capability of the running kernel, which right now cannot reliably be retrieved from header files only. The fact that this value cannot be detected properly right now creates various problems for libraries compiled on newer header files which are run on older kernels. They assume capabilities are available which actually aren't. Specifically, libcap-ng claims that all running processes retain the higher capabilities in this case due to the "inverted" semantics of CapBnd in /proc/$PID/status. Patch by Dan Ballard.

* module-init-tools: provide a proper libmodprobe.so from module-init-tools: Early boot tools, installers, driver install disks want to access information about available modules, and match devices to available modules to hook up driver overwrites, driver update disks, installer tweaks, and to optimize bootup module handling.

cgroups:
========

* fork throttling mechanism as basic cgroup functionality that is available in all hierarchies independent of the controllers used: This is important to implement race-free killing of all members of a cgroup, so that cgroup member processes cannot fork faster than a cgroup supervisor process could kill them. This needs to be recursive, so that not only a cgroup but all its subgroups are covered as well. Patches for task_counter from Frederic Weisbecker; also the freezer approach Tejun is looking into.

* proper cgroup-is-empty notification interface: The current call_usermodehelper() interface is an inefficient and an ugly hack. Tools would prefer anything more lightweight like a netlink, poll() or fanotify interface.

* allow user xattrs to be set on files in the cgroupfs (and maybe procfs?)

* allow making use of the "cpu" cgroup controller by default without breaking RT. Right now creating a cgroup in the "cpu" hierarchy that shall be able to take advantage of RT is impossible for the generic case since it needs an RT budget configured which is from a limited resource pool. What we want is the ability to create cgroups in "cpu" whose processes get a non-RT weight applied, but for RT take advantage of the parent's RT budget. We want the separation of RT and non-RT budget assignment in the "cpu" hierarchy, because right now, you lose RT functionality in it unless you assign an RT budget. This issue severely limits the usefulness of the "cpu" hierarchy on general purpose systems right now.

* Add a timerslack cgroup controller, to allow increasing the timer slack of user session cgroups when the machine is idle. Patch from: Kirill A. Shutemov

* simple, reliable and future-proof way to detect whether a specific pid is running in a CLONE_NEWUTS/CLONE_NEWPID container, i.e. not in the root PID namespace/UTS namespace. Currently, there are available a few ugly hacks to detect this (for example a process wanting to know whether it is running in a PID namespace could just look for a PID 2 being around and named kthreadd which is a kernel thread only visible in the root namespace), however all these solutions encode information and expectations that better shouldn't be encoded in a namespace test like this. This functionality is needed in particular since the removal of the ns cgroup controller which provided the namespace membership information to user code.

AF_UNIX:
========

* An auxiliary meta data message for AF_UNIX called SCM_CGROUPS (or something like that), i.e. a way to attach sender cgroup membership to messages sent via AF_UNIX. This is useful in case services such as syslog shall be shared among various containers (or service cgroups), and the syslog implementation needs to be able to distinguish the sending cgroup in order to separate the logs on disk. Of course SCM_CREDENTIALS can be used to look up the PID of the sender followed by a check in /proc/$PID/cgroup, but that is necessarily racy, and actually a very real race in real life.

* SCM_PROCSTATUS for retrieving sender process information supplying at least: comm, exec, cmdline, audit session, audit loginuid.

All time favourites:
====================

These items have been requested many times already, and we want to make sure they aren't forgotten. We know they are hard to implement, and we don't know how to get there, but nonetheless, here they are:

* Oldie But Goldie: some kind of unionfs or union mount. A minimal version that supports only read-only filesystems would already be a big step forward. NEW!

* revoke() NEW!

* Notifications when non-child processes die, in an efficient way focussing on explicit PIDs (i.e. not taskstats) in some form (idea: poll() for POLLERR on /proc/$PID) NEW!
0
Need a little help,
I'm trying to run a program that asks the user to enter an integer between 1-50.
If given a number between 1 and 50 then echo number then add all integer between 1 and integer entered
if user enter an integer not between 1 and 50 then just echo the number and ask the user for another number
if a noninteger is enter i need the program not to go into and fail state, instead clear the noninteger and ask the user for another integer.
To exit the loop you must press cntl-Z.
This is what I have so far: (please help, thanks)
#include <iostream> using namespace std; int main ( ) { int number;// the integer entered int sum =0; // the sum of numbers added int rep = 0; while(rep != 1) { cout << " Enter a positive integer (use cntl-Z to quit)" << endl; cin >> number; cout << " " << number << endl; if (cin.fail()) { cout << "Bad input, please enter another positive integer (use cnt-Z to quit)" << endl; cin >> number; return 0; } if ( number >= 1 && number <=50)//test expression { for(int i=1;i<=number;i++) { sum = sum + i; } cout<<"Sum is: "<< sum; } } return 0; }
Edited 3 Years Ago by mike_2000_17: Fixed formatting | https://www.daniweb.com/programming/software-development/threads/357250/c-no-fail-state-with-entering-non-integer | CC-MAIN-2016-50 | en | refinedweb |
A checked exception is a type of exception that a compiler requires to be handled. One can think of a checked exception as being like a pop fly in baseball, where the exception is the ball being hit at all, and the checking is the catching of the ball; if you are a fielder, you're essentially required to catch it if you can.
Note: any code presented here may or may not be correct in terms of syntax; it is here merely to demonstrate the idea.
Completely lost? Let's start from the basics. An exception is the technical term for when the unexpected happens in a computer program. An example of an exception is some sort of program that expects to receive nothing but the letter "c" as input, and suddenly receives the letter "e"; an exception occurs here.
Basically, exceptions are designed to relieve a computer programmer of the need to check return codes or state variables after every function call to determine if something odd has occurred. Instead, whenever something odd happens, an exception occurs and a special piece of code called an exception handler takes over and deals with the problem. In the "c" example above, there might be an exception handler for the situation where the program receives an "e" as input, and that handler might tell the program to just interpret it as a "c" anyway or whatever the programmer decides should happen.
In the programming language Java, there are two types of exceptions: checked and unchecked. Checked exceptions are those that the compiler requires to be handled; in other words, checked exceptions are ones that Java cannot handle alone and thus need the programmer's help with, while Java can handle unchecked exceptions without help.
Checked exceptions in Java (we'll use Java as an example language here) are quite easy to deal with. All you have to do is set a trap within the code for a particular exception that might occur, and then throw it. Here's a really simple example.
public class Test
{ public static void main( String[] args ) throws BogusException
{ try {
int i = 1;
if (i == 1) throw new BogusException();
} catch (BogusException e) {
System.out.println("This is BOGUS!");
} finally {
System.out.println("Finally, we're done with this BOGUS!");
} } }
The first line defines a class Test; that's not important here. The second line says that this is the main function that we're going to be running and then at the end mentions that we might be throwing a BogusException, which is a particular kind of exception; you can define specific exception types if you like, and BogusException is one we've already defined (meaning compiling this piece of code would cause an error without the definition).
The third line has the word try, followed by a left curly brace; this means we're going to try the stuff until the next curly brace to see if anything happens. Inside the try braces is where we do stuff that might be exceptional.
The fourth and fifth lines basically exist to throw an exception in this example. The fifth line states that if i is 1 (and it is), then the program throws a BogusException. Now, since we've thrown this exception, we have to deal with it. That's what the catch() segment is about; if a particular kind of exception has occurred, it will catch whatever kind of exception is named inside the parentheses in front of it. Here, in line 6, we catch a BogusException, which has occurred; if it had not, we would skip this part inside the curly braces after that catch statement. So, we go on and print the line "This is BOGUS!" After that, we do the finally segment, which occurs whether you've thrown an exception or not; we also print the line specified there. And that's all there is to it!
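As noted above, BogusException must be defined somewhere for the example to compile. A minimal definition that makes it a checked exception (any subclass of Exception that is not a RuntimeException is checked):

class BogusException extends Exception { }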
Checked exceptions are the programmer's way of dealing with problems that occur; Java is just being used here as an example. Although other languages such as C and C++ don't directly offer checked exceptions and a way of dealing with them, a clever programmer can write the equivalent of the code and implement their own checked exception handling with some careful if checks inside the function.
There are several ways of dealing with a checked exception, each of which has benefits and drawbacks. Each of these has a multitude of uses as well, and they all belong inside the toolbox of a good programmer.
Suppression is the simplest strategy of handling a checked exception. Essentially, suppression is used whenever nothing else seems to fit, when an exception is thrown unintentionally or accidentally, or when you merely want nothing whatsoever to happen. In Java, suppression looks like this:
public void experiment() throws BogusException {
try {
throw new BogusException();
} catch (BogusException b) {
}
}
In other words, a caught exception that is dealt with by suppression does nothing at all. You merely catch it simply so that further problems are not caused; you have no real response to it.
Bailing out is a slightly more complicated strategy than suppression. Instead of doing nothing, you simply exit the program. The reasons for handling the exception this way mostly involve situations in which you are extremely concerned that bad things might happen if you continue, or if you simply want to stop if something odd occurs. Here's a Java example.
public void experiment() throws BogusException {
try {
throw new BogusException();
} catch (BogusException b) {
System.exit(1);
}
}
Another way of handling an error is through propagation, which is essentially assuming that another piece of code (usually a piece provided by the compiler) will handle the situation. In this case, we write a BogusException class of its own with an exception handler in it, and we just let a thrown BogusException be dealt with there. So, all we would do in Java is...
public void experiment() throws BogusException {
throw new BogusException();
}
Another solution is that of a base case, but this is a very poor solution. This is basically propagation except without specifying the type of error thrown. Java provides you with a very generic type of exception called Exception, which handles things extremely simply. Only use this if you're unsure of the type of error and only want very minimal catching; it works exactly as the code above except replacing BogusException with Exception.
The most robust type of handling of checked exceptions is that of wrapping, which basically means that a class you write has its own way of dealing with every conceivable type of exception. A programmer doing a wrapping scheme will have a very well-controlled program, but it will require a significant amount of labor to complete.
However, the best scheme may be translation, in which different exception types are translated into others, by merely catching one type then throwing another, and only having a few specific types do anything else. This way, you can write specific instructions for most exception types, but only have to do it once, and then write a very simple translator.
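A sketch of translation in that style; TranslatedException here is a hypothetical wrapper type used only for illustration, not something defined in this article:

public void experiment() throws TranslatedException {
    try {
        throw new BogusException();
    } catch (BogusException b) {
        // catch one type, throw another
        throw new TranslatedException(b);
    }
}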
Checked exceptions are a great way of handling errors in programs, especially in Java where the ability is already set up within the language itself. It is a valuable part of any programmer's skill set. As always, though, if you plan to use checked exceptions, do some further reading. A top quality Java book such as Core Java, published by Sun, should educate you properly on checked exceptions and how to handle them.
Package Details: qcustomplot-qt5 2.0.0_beta-1
Dependencies (1)
Required by (0)
Sources (2)
Latest Comments
morealaz commented on 2016-10-27 08:15
@fusion809, sorry for being really really late! The problem is that #include <QObject> must be #include <QtCore/QObject>. In a recent update this problem was solved.
fusion809 commented on 2016-07-13 07:42
@morealaz, whenever I try using qcustomplot-qt5 in a C++ file (via #include <qcustomplot.h>) I get the error:
In file included from example.cpp:8:0:
/usr/include/qcustomplot.h:29:19: fatal error: QObject: No such file or directory
#include <QObject>
^
compilation terminated.
Compilation failed.
Missing QObject-related dependency?
morealaz commented on 2016-01-30 10:53
updated to 1.3.2
I also add documentations and examples to package
ant32 commented on 2013-11-08 13:36
updated to 1.1.0
mingw-w64 repo and binaries | https://aur.archlinux.org/packages/qcustomplot-qt5/?comments=all | CC-MAIN-2016-50 | en | refinedweb |
LDAP is the latest name-lookup service to be added to Solaris. It can be used in conjunction with or in place of NIS+ or DNS. Specifically, LDAP is a directory service. A directory service is like a database, but it contains more descriptive, attribute-based information. The information in a directory is generally read, not written.
LDAP is used as a resource locator, but it is practical only in read intensive environments in which you do not need frequent updates. LDAP can be used to store the same information that is stored in NIS or NIS+. Use LDAP as a resource locator for an online phone directory to eliminate the need for a printed phone directory. This application is mainly read-intensive, but authorized users can update the contents to maintain its accuracy.
LDAP provides a hierarchical structure that more closely resembles the internal structure of an organization and can access multiple domains, similar to DNS or NIS+. NIS provides only a flat structure and is accessible by only one domain. In LDAP, directory entries are arranged in a hierarchical, tree-like structure that reflects political, geographic, or organizational boundaries. Entries representing countries appear at the top of the tree. Below them are entries representing states or national organizations. Below them might be entries representing people, organizational units, printers, documents, or just about anything else you can think of.
LDAP has provisions for adding and deleting an entry from the directory, changing an existing entry, and changing the name of an entry. Most of the time, though, LDAP is used to search for information in the directory.
Note
LDAP Information LDAP is a protocol that email programs can use to look up contact information from a server. For instance, every email program has a personal address book, but how do you look up an address for someone who has never sent you email? Client programs can ask LDAP servers to look up entries in a variety of ways. The LDAP search operation allows some portion of the directory to be searched for entries that match some criteria specified by a search filter.
LDAP servers index all the data in their entries, and filters may be used to select just the person or group you want and return just the information you want to see. Information can be requested from each entry that matches the criteria. For example, here's an LDAP search translated into plain English: "Search people located in Hudsonville whose names contain 'Bill' and who have an email address. Return their full name and email address."
Perhaps you want to search the entire directory subtree below the University of Michigan for people with the name Bill Calkins, retrieving the email address of each entry found. LDAP lets you do this easily. Or, you might want to search the entries directly below the U.S. entry for organizations with the string "Pyramid" in their names and that have a fax number. LDAP lets you do this.
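As a concrete sketch, the Hudsonville query above maps onto an LDAP search filter like the one below; the host name and base DN here are illustrative assumptions, not part of the example:

ldapsearch -h ldapserver -b "ou=People,dc=example,dc=com" \
    "(&(l=Hudsonville)(cn=*Bill*)(mail=*))" cn mail

The filter selects entries whose locality (l) is Hudsonville, whose common name contains Bill, and that have a mail attribute; only the cn and mail attributes are returned.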
Some directory services provide no protection, allowing anyone to see the information. LDAP provides a method for a client to authenticate, or prove, its identity to a directory server, paving the way for rich access control to protect the information the server contains.
LDAP was designed at the University of Michigan to adapt a complex enterprise directory system, called X.500, to the modern Internet. A directory server runs on a host computer on the Internet, and various client programs that understand the protocol can log in to the server and look up entries. X.500 is too complex to support on desktops and over the Internet, so LDAP was created to provide this service to general users.
Sun Java System Directory Server is a Sun product that provides a centralized directory service for your network and is used to manage an enterprise-wide directory of information, including the following:
Physical device information, such as data about the printers in your organization. This could include information on where they are located, whether they support color or duplexing, the manufacturer and serial number, company asset tag information, and so on.
Public employee information, such as name, phone number, email address, and department.
Logins and passwords.
Private employee information, such as salary, employee identification numbers, phone numbers, emergency contact information, and pay grade.
Customer information, such as the name of a client, bidding information, contract numbers, and project dates.
Sun Java System Directory Server meets the needs of many applications. It provides a standard protocol and a common application programming interface (API) that client applications and servers need to communicate with each another.
As discussed earlier, Java System Directory Server provides a hierarchical namespace that can be used to manage anything that has previously been managed by the NIS and NIS+ name services. The advantages of the Java System Directory Server over NIS and NIS+ are listed here:
It gives you the capability to consolidate information by replacing application-specific databases. It also reduces the number of distinct databases to be managed.
It allows for more frequent data synchronization between masters and replicas.
It is compatible with multiple platforms and vendors.
It is more secure.
Because LDAP is platform independent, it very likely will eventually replace NIS and NIS+, providing all the functionality once provided by these name services.
The Java System Directory Server runs as the ns-slapd process on your directory server. The server manages the directory databases and responds to all client requests. Each host in the domain that uses resources from the LDAP server is referred to as an LDAP client.
It's not within the scope of this chapter to describe how to set up an LDAP server; this requires an in-depth working knowledge of LDAP. For background information on LDAP and Java System Directory Server, refer to the System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP).
It's assumed that the LDAP server has already been configured as a naming service with the appropriate client profiles in place. The scope of this chapter is to describe how to set up the LDAP client.
Before setting up the LDAP client, a few things must already be in place:
The client's domain name must be served by the LDAP server.
The nsswitch.conf file must point to LDAP for the required services. This would be achieved by copying the file /etc/nsswitch.ldap to /etc/nsswitch.conf.
At least one server for which a client is configured must be up and running.
The ldapclient utility is used to set up an LDAP client. ldapclient assumes that the server has already been configured with the appropriate client profiles. The LDAP client profile consists of configuration information that the client uses to access the LDAP information on the LDAP server. You must install and configure the LDAP server with the appropriate profiles before you can set up any clients.
To initialize a client using a profile, log in as root.
Run the ldapclient command as follows:
ldapclient init -a profileName=new -a domainName=east.example.com \
192.168.0.1
Where init initializes the host as an LDAP client, profileName refers to an existing profile on the LDAP server. domainName refers to the domain for which the LDAP server is configured.
The system responds with this:
System successfully configured
To initialize a client using a proxy account, run the ldapclient command as follows:
ldapclient init -a proxyDN=proxyagent \
-a profileName=New \
-a domainName=east.example.com \
-a proxyPassword=test0000 \
192.168.0.1
The proxyDN and proxyPassword parameters are necessary if the profile is to be used as a proxy. The proxy information is stored in the file /var/ldap_client_cred. The remaining LDAP client information is stored in the file /var/ldap_client_file.
After the LDAP client has been set up, it can be modified using the ldapclient mod command. One of the things you can change here is the authentication mechanism used by the client. If there is no particular encryption service being used then set this to simple as shown here:
ldapclient mod -a authenticationMethod=simple
To list the properties of the LDAP client, use the ldapclient list command as shown here:
ldapclient list
NS_LDAP_FILE_VERSION= 2.0
NS_LDAP_BINDDN= cn=proxyagent
NS_LDAP_BINDPASSWD= <encrypted password>
NS_LDAP_SERVERS= 192.168.0.1
NS_LDAP_AUTH= simple
To remove an LDAP client and restore the name service that was in use prior to initializing this client, use the ldapclient uninit command as follows:
ldapclient uninit
System successfully recovered | http://books.gigatux.nl/mirror/solaris10examprep/0789734613/ch12lev1sec9.html | CC-MAIN-2018-22 | en | refinedweb |
Read the source code of .jar files with python
Keywords: java

Question:
I want to read the source code of jar files and extract the words' frequency. I know that it is possible to read the content of jar files with Java editors, but I want to do this automatically with a python script.

1 Answer:
Do you require a Python library specifically? Krakatau is a command line tool in Python for decompiling .jar files; you can perhaps import it and use the relevant functions from inside your script.

Alternatively, you can call it, or any other command line .jar decompiler such as Procyon, using Python's subprocess module.
In the 2nd case, you would most likely like to redirect and capture stdout and/or stderr. A basic call may look something like:
import os
from subprocess import Popen, PIPE

# ...

jar_decompiler_output = Popen(('jar_decompiler', '1stparam', '2ndparam'),
                              stdout=PIPE).communicate()[0].split(os.linesep)
Note that communicate() returns a tuple. | http://www.developersite.org/1001-115984-java | CC-MAIN-2018-22 | en | refinedweb |
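As an aside: when the jar actually ships its source files (many contain only compiled .class files), no decompiler is needed at all, because a .jar is just a zip archive. A rough sketch using only the standard library (the jar name is an example):

import zipfile
from collections import Counter

counts = Counter()
with zipfile.ZipFile('example.jar') as jar:
    for name in jar.namelist():
        if name.endswith('.java'):  # only look at bundled source files
            text = jar.read(name).decode('utf-8', errors='replace')
            counts.update(text.split())

print(counts.most_common(10))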
<script type="text/javascript">
var number = "10";
(function(){
alert(number);
alert(eval('number=22'));
})();
var func = function() {
alert (new Function( 'return (' + number + ')' )());
}
func(); // prints 22.
</script>
It first alerts 10, then alerts 22; then why is it alerting 22 again instead of 10? Does the eval function override my variable in the global scope?
No, eval isn't executed in the global scope, but you don't have a variable named number in your local scope. So you're changing the global one.

You may see it with this little change:
(function(){
    var number; // this will ensure the global number isn't changed
    alert(number); // this will print "undefined"
    alert(eval('number=22')); // this won't change the global variable
})();
Note that alert(eval('number=22')); returns the result of the evaluation, and number=22 returns 22. That's why the second alert gives 22.
Since you don't have a variable in your local scope, it's changing the global one. Try this one below. It would print 100 instead of 22.
<script type="text/javascript">
var number = 100;
(function(){
    var number;
    alert(number);
    alert(eval('number=22'));
})();
function func() {
    alert (new Function( 'return (' + number + ')' )());
}
func();
</script>
eval - don't use it. If you remove the use of eval, what is happening is maybe more obvious - you are redefining the value of the global variable number inside the first anonymous function:
var number = "10";

(function(){
    number = 22; // this is a reference to the global variable `number`
})();

var func = function() {
    alert (new Function( 'return (' + number + ')' )()); // this is another use of eval - don't do this
}

func(); // prints 22 - expected.
Basically, on my GUI, I want to have a jar file also, so it'll look like this:
How would I do that?
Please keep in mind, I'm new to JFrame & GUI making; I normally use console, but this is a must to do this.
If i understand correctly, you want to run an app inside your app?
package yourpackage;

import java.io.IOException;
import java.io.InputStream;
import java.util.logging.Level;
import java.util.logging.Logger;

public class YourClass {

    public static void main(String[] args) {
        try {
            Process jarProcess = Runtime.getRuntime().exec(new String[]{"java", "-jar", "Path To Your .jar File"});
            jarProcess.waitFor();
            InputStream inputStream = jarProcess.getInputStream();
            byte[] inputByte = new byte[inputStream.available()];
            inputStream.read(inputByte, 0, inputByte.length);
            System.out.println(new String(inputByte));
        } catch (InterruptedException ex) {
            Logger.getLogger(YourClass.class.getName()).log(Level.SEVERE, null, ex);
        } catch (IOException ex) {
            Logger.getLogger(YourClass.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
}
You can try this, new String(inputByte) is the console output text of the selected .jar.
Ok so the one you want to load has a graphical interface or it just outputs to the console?
Last edited by testingjava; June 29th, 2012 at 03:35 PM.
Yes i understand that but are you trying to load another gui inside a gui or are you trying to load the output/console of the jar inside a gui?
Im not sure if you can do this in Java, Im going to do some research on this, just out of curiosity why would you want to do this?
Still need help:/
Yes i know im still researching
Ok Java is bundled with a utility called AppletViewer
import java.applet.*;
import java.awt.*;

public class Myapplet extends Applet{
    String str;
    public void init(){
        str = "This is my first applet";
    }
    public void paint(Graphics g){
        g.drawString(str, 50,50);
    }
}
C:\javac> javac Myapplet.java

HTML Code:
<HTML>
<BODY>
<applet code="Myapplet">
</applet>
</BODY>
</HTML>
C:\javac>appletviewer Myapplet.html
and the output is shown in the attached screenshot (applet.gif).
The applet class extends the Panel class and an instance of the applet class could be added to any container.
Create an instance of a JFrame, create an instance of the applet and add it to the JFrame. Call the appropriate applet methods like a browser would and you should be able to "execute" an applet in the GUI provided by a JFrame.
If you don't understand my answer, don't ignore it, ask a question.
Are you asking how to use the Swing components of one jar in a JFrame you have created? You need access to the API of the jar, to add the jar to your classpath, and then construct the appropriate components based upon the API and add them to your JFrame as you would any other component. If all you have is a jar with no documentation, you do not know the class file structure, and thus cannot construct the appropriate objects to add to your user interface.
The code could be something like this:
Applet anApplt = new Applet(); // Create an instance of the applet
JFrame jf = new JFrame(); // create frame to show applet in
jf.add(anApplt); // add instance to jframe
anApplt.init(); // call applet's init
If you don't understand my answer, don't ignore it, ask a question. | http://www.javaprogrammingforums.com/whats-wrong-my-code/16406-how-can-i-have-jar-file-my-gui.html | CC-MAIN-2018-22 | en | refinedweb |
Can someone explain how this bash script works? The part I don't understand is
""":"
#!/bin/sh
""":"
echo called by bash
exec python $0 ${1+"$@"}
"""
import sys
print 'called by python, args:',sys.argv[1:]
$ ./callself.sh xx
called by bash
called by python, args: ['xx']
$ ./callself.sh
called by bash
called by python, args: []
That's clever! In Bash, the """:" will be expanded into only :, which is the empty command (it doesn't do anything). So, the next few lines will be executed, leading to exec. At that point, Bash ceases to exist, and the file is re-read by Python (its name is $0), and the original arguments are forwarded.

The ${1+"$@"} means: If $1 is defined, pass as arguments "$@", which are the original Bash script arguments. If $1 is not defined, meaning Bash had no arguments, the result is empty, so nothing else is passed, not even the empty string.

In Python, the """ starts a multi-line string, which includes the Bash commands, and extends up to the closing """. So Python will jump right below.
the crit_chain class sequences crit_actions up to full definition of the action
#include <criterium.hpp>
Inherits libdar::crit_action.
the crit_chain class sequences crit_actions up to full definition of the action
several expressions must be added. The first is evaluated, then the second, and so on up to the last, or up to the step where the data_action and ea_action are both fully defined (no data_undefined nor ea_undefined)
Definition at line 204 of file criterium.hpp.
clone construction method
Implements libdar::crit_action.
Definition at line 218 of file criterium.hpp.
References libdar::on_pool::get_pool().
the action to take based on the files to compare
Implements libdar::crit_action. | http://dar.linux.free.fr/doc/html/classlibdar_1_1crit__chain.html | CC-MAIN-2018-22 | en | refinedweb |
In this tutorial, we'll use the TextArea class to display HTML-formatted text, loaded from an external XML file, in Flash.
package
{
	import flash.display.MovieClip;
	
	public class Main extends MovieClip
	{
		public function Main():void
		{
			trace("Hello World");
		}
	}
}

Now import the TextArea class:
import com.doitflash.text.TextArea;
Then remove the hello world trace function and enter the following instead.
var _textArea:TextArea = new TextArea();
_textArea.wordWrap = true;
_textArea.multiline = true;
_textArea.htmlText = "Initialize TextArea just like you used to initialize TextField.";
this.addChild(_textArea);

Next, create an XML file (the loader code below assumes the path xml/data.xml) and give it the following content:
<?xml version="1.0" encoding="UTF-8"?>
<data>
<![CDATA[
<p align="left"><font face="Tahoma" size="13" color="#333333">
Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Nam cursus. Morbi ut mi. Nullam enim leo, egestas id, condimentum at, laoreet mattis, massa.
</font></p>
]]>
</data>
(So that XML contains HTML in a CDATA section.)
Now get back to your Main.as, which should look like the following:
package
{
	import flash.display.MovieClip;
	import com.doitflash.text.TextArea;
	
	public class Main extends MovieClip
	{
		public function Main():void
		{
			var _textArea:TextArea = new TextArea();
			_textArea.wordWrap = true;
			_textArea.multiline = true;
			_textArea.htmlText = "Initialize TextArea just like you used to initialize TextField.";
			this.addChild(_textArea);
		}
	}
}
Add the required imports for xml loading process.
import flash.events.Event;
import flash.net.URLLoader;
import flash.net.URLRequest;
Then replace the whole Main() function with this.
public function Main():void
{
	var _textArea:TextArea = new TextArea();
	_textArea.wordWrap = true;
	_textArea.multiline = true;
	_textArea.width = 400;
	_textArea.height = 200;
	_textArea.condenseWhite = true;
	this.addChild(_textArea);
	
	// XML loading (the exact loader code was lost from the original
	// listing; this is the standard URLLoader pattern, and the file
	// path is an assumption)
	var loader:URLLoader = new URLLoader();
	loader.addEventListener(Event.COMPLETE, function(e:Event):void
	{
		var xml:XML = new XML(e.target.data);
		_textArea.htmlText = xml.text();
	});
	loader.load(new URLRequest("xml/data.xml"));
}

TextArea also offers a fmlText method for sending text scripts into the instance rather than using the classic htmlText. The fmlText method will parse scripts using a different approach than htmlText; it stands for Flash Markup Language Text.
So, in your test project, replace htmlText with fmlText like below.

_textArea.fmlText = xml.text();
FML links can call back into your ActionScript on mouse events, so define two handler functions in the Main class:

public function funcOnOver():void
{
	trace("rollOver");
}

public function funcOnOut():void
{
	trace("rollOut");
}
You should also set some settings while initializing the TextArea instance. Add this line just after you initialize the TextArea, to permit FML text to call the two handlers:

_textArea.allowedFunctions(funcOnOver, funcOnOut);
Also add the following line to the beginning of the Main() function.
var refToThis:Object = this;
To make sure you have written the code in Main.as correctly, below is how your file should look up to now.
package
{
	import flash.display.MovieClip;
	import com.doitflash.text.TextArea;
	
	import flash.events.Event;
	import flash.net.URLLoader;
	import flash.net.URLRequest;
	
	public class Main extends MovieClip
	{
		public function Main():void
		{
			var refToThis:Object = this;
			
			var _textArea:TextArea = new TextArea();
			_textArea.allowedFunctions(funcOnOver, funcOnOut);
			_textArea.wordWrap = true;
			_textArea.multiline = true;
			_textArea.width = 400;
			_textArea.height = 200;
			_textArea.condenseWhite = true;
			this.addChild(_textArea);
			
			// XML loading (reconstructed around the truncated original;
			// the path is an assumption)
			var loader:URLLoader = new URLLoader();
			loader.addEventListener(Event.COMPLETE, function(e:Event):void
			{
				var xml:XML = new XML(e.target.data);
				_textArea.fmlText = xml.text();
			});
			loader.load(new URLRequest("xml/data.xml"));
		}
		
		public function funcOnOver():void
		{
			trace("rollOver");
		}
		
		public function funcOnOut():void
		{
			trace("rollOut");
		}
	}
}

The rollOver/rollOut behavior of a link can then be wired up in the FML markup, for example:
Here's a <u><a href='onMouseOver:funcOnOver();onMouseOut:funcOnOut()'>SAMPLE LINK</a></u>.:
FML links can also call ActionScript functions directly through the event: scheme, with optional arguments:

<u><a href='event:func1()'>SIMPLE CALL</a></u>.<br />
<u><a href='event:func2(some string)'>SEND STRING</a></u>.<br />
<u><a href='event:func3([0,1,2,3,4])'>SEND ARRAY</a></u>.<br />
<u><a href='event:func4({var1:val1;var2:val2})'>SEND OBJECT</a></u>.<br />
<u><a href='event:func5(string,[0,1,2],{var1:val1;var2:val2})'>SEND MIXED ARGUMENTS</a></u>.<br />

Each of these links calls a corresponding ActionScript function, so add the five functions below to Main.as:
public function func1():void
{
	trace("no arguments sent");
}

public function func2($str:String):void
{
	trace("arguments >> " + $str);
}

public function func3($arr:Array):void
{
	trace("arguments >> " + $arr);
}

public function func4($obj:Object):void
{
	trace("arguments >> " + $obj);
}

public function func5($str:String, $arr:Array, $obj:Object):void
{
	trace("arguments >> " + $str);
	trace("arguments >> " + $arr);
	trace("arguments >> " + $obj);
}
Now you have the calls in XML and the functions are also available in the AS3 project; all that's left is to give the usage permission to the TextArea instance. Modify the allowedFunctions method of TextArea as so:
_textArea.allowedFunctions(funcOnOver, funcOnOut, func1, func2, func3, func4, func5);
| https://code.tutsplus.com/tutorials/easily-create-souped-up-flash-text-fields-with-textarea--active-10456 | CC-MAIN-2018-22 | en | refinedweb |
A post was merged into an existing topic: Interface Suggestions
Importance: medium
Motivation: Allow plugins to integrate better with sublime
There are a few plugins I can think of that would benefit from a better gutter:
I believe that letting plugins create their own gutters is a good idea as long as it respects the following rules:
I think this should also be here:
It would be nice to have a serializable undo/redo history, so we can undo after closing and reopening Sublime. Give us an API so we can do this as we wish.
Importance: high
Motivation: allow painless text manipulation by scopes
Working with scopes is a pain at the moment because we can't be specific about extracting text for a given scope selector. Instead we have to use extract_scope(point) with its 'intelligence' which varies by language. Or, we use find_by_selector(selector), then iterate through the list of regions to find the one that intersects the position we want. Or we move position by position back and forth and check we are in/out of scope. These workarounds are labour & CPU intensive for what should be a very simple thing. The solution:
extract_scope(point, selector)
The above proposed change will extract the region for the given selector if provided. Better still would be:
extract_scope(point, selectors)
Where selectors is a space delimited string of selectors, the largest of which (except topmost source.*) is selected.
Finally we could have a greedy flag which allows the above API to choose the least or most specific selector:
extract_scope(point, selectors, greedy, single)
And the single flag will allow an individual scope to be extracted in the case where they are back to back (like <div><div><div>)
Rounding off, it would be useful to have more flexibility with find_by_selector(selector), so that we can restrict its operation to a given region or regions. Presently it returns all matching regions in the file, a performance overhead if one needs to iterate through it repeatedly.
find_by_selector(selector, Region or [Regions])
(I'm assuming this qualifies as an API suggestion, I'm not entirely sure though)
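For reference, the workaround described at the top of this post can be sketched with the current API like so (view is a sublime.View):

import sublime

def extract_scope_for_selector(view, point, selector):
    # scan every region matching the selector and return the one that
    # contains the given point - the iteration the proposed API would avoid
    for region in view.find_by_selector(selector):
        if region.contains(point):
            return region
    return None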
One of the most frequent requests for my packages is extending the scope of snippets or completions to other syntaxes. I've had a situation where I was asked to extend the scope for HTML completions and add support for JSX. Immediately after those changes were applied, I got complaints from other users of my package about this change. It's not always easy to judge whether such a change makes sense, especially if you're not familiar with the syntax.
Therefore, I would love it if Sublime Text allowed users to extend/manipulate the scope in which snippets/completions are working without having them alter the actual files, e.g. in the user settings.
I would like to write a plugin to change workspaces based on the current Git branch.
Currently changing branches in Git can leave the current workspace in a mess:
I tend to manually create a new workspace for each branch:
GIT.[Branchname].sublime-workspace
I have to manually switch workspaces after a checkout.
I have to manually delete workspaces when I delete the branch.
I would like to be able to load workspaces, create and delete them.
Additionally, unrelated to workspaces, but useful for this plugin:
This would allow notification of when the .git/HEAD file is modified (when a checkout occurs), to trigger the workspace switch.
An API to hide a set of lines matching a given criterion (regex or ...). When called with None, unhide all hidden lines.
Once hidden, those lines would be excluded from search and edit operations. Similar to what the 'ALL' command does on XEDIT and similar editors.
To illustrate, here is a simple directory listing:
Then, after hiding the lines not containing 'dir':
Very handy while working with log files and line-oriented files (csv exports, ...).
You can still view/search/edit the non-hidden lines, add new content, and so on.
Edit -> Code Folding -> Fold
You can use RegReplace to do it using REGEXP
Menu Edit -> Code Folding -> Fold
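A rough sketch of that folding-based approximation as a plugin (note the rebuttal below: folded lines, unlike the proposed API, remain reachable by search and edit operations):

import sublime
import sublime_plugin

class HideLinesNotMatchingCommand(sublime_plugin.TextCommand):
    def run(self, edit, pattern="dir"):
        # fold every line that does not contain the pattern
        to_fold = [line for line in self.view.lines(sublime.Region(0, self.view.size()))
                   if pattern not in self.view.substr(line)]
        self.view.fold(to_fold)

class UnhideAllLinesCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        self.view.unfold(sublime.Region(0, self.view.size()))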
Nope:
those lines would be excluded from search and edit operations.
Output panels have really odd behaviors currently. In exec.py, there is this code:
if not hasattr(self, 'output_view'):
    # Try not to call get_output_panel until the regexes are assigned
    self.output_view = self.window.create_output_panel("exec")

# [...]

self.output_view.settings().set("result_file_regex", file_regex)
self.output_view.settings().set("result_line_regex", line_regex)
self.output_view.settings().set("result_base_dir", working_dir)
self.output_view.settings().set("word_wrap", word_wrap)
self.output_view.settings().set("line_numbers", False)
self.output_view.settings().set("gutter", False)
self.output_view.settings().set("scroll_past_end", False)
self.output_view.assign_syntax(syntax)

# Call create_output_panel a second time after assigning the above
# settings, so that it'll be picked up as a result buffer
self.window.create_output_panel("exec")
Observations:
1. Window.create_output_panel
2. .assign_syntax
3. Preferences.sublime-settings
Priority: minor (it's "always" been like this and changing behavior of create_output_panel especially wrt 3. would be a breaking change)
Related:
Putting scopes to work
So far, scope use is limited: syntax coloration and some limited hard-coded features ('go to symbol', mostly).
What about extending them, in two ways:
Priority: minor
Speaking about putting scopes to work:
I'd like to have scope based folding instead of just indent based folding.
Similar to the tmPreferences files used by the symbol list, a setting file would allow controlling which scopes can be folded.
Importance: Major
Also I'd like scope based auto-indentation.
Now the Indentation.tmPreferences is doing regex matching while all the information we need is already extracted by the syntax.
Importance: Minor (since there is already a working mechanism albeit limited)
I also wanted to suggest an indentation system that is based on scopes (and the powerful syntax lexing that's happening all the time anyway), but hold back onto creating an issue for it since I haven't yet drafted out a specific suggestion on how I would imagine this feature.I also wonder if it should be part of the syntax definition or utilize a separate scope-selector-oriented mechanism like tmPreferences do.
Another thing that I'd like is improved parsing of the output panel. For now errors have to be extracted through two regexes, one for the file and one for the line, character offset and message.
So here are 3 suggestions in increasing order of complexity.
the output of the C# compiler doesn't print full paths to files, only the name of the file. This prevents the "next_result" command from opening the correct file. Proposed fix: if the matched path doesn't exist, try to open a file in the current project with the same name.
Sometimes the error message is on a different line than the line number (e.g. in Python), so there is no way to capture the error message. Proposed fix: add a third regex for the error message.
There is no distinction between errors and warnings. You have to choose, when you create the build system, whether you want to catch both or only errors. Ideally the build system would provide a way to extract both, and the user could choose which one to display.
I hate it that I always find something to comment on in these threads, but I just can't help it. If that's not desired, I'll continue replying to posts in new threads, but I'm on mobile atm.
To 1.: relative file paths are a possibility (and by default relative to the active view's file's directory. It can be configured with a setting that you can inspect from exec.py.
To 2.: you can capture the message in the regular expressions, but nothing is really done with it. I think it shows in the status bar on some action.
If you capture the message in a build system, it is shown in phantoms. I think there is a build-error tooltips package out there that also uses the messages. So, in 3118 this works as described here.
I wasn't aware of that so I updated my feature request. The problem is that it must be one regex matching both line number and error message. But for Python, where the error message is on a different line than the line number, you can't capture it (confirmed by Jps in the build 3118 thread).
There was some discussion here:
Essentially, for languages that are case-insensitive (Fortran is the example I care about) it is desirable that goto-definition is case insensitive (e.g. you define MYFUNCTION but then call MyFunction, currently goto-definition doesn't work for this case). This is already partly solved by the ability to perform a symbol transformation to convert all indexed symbols to lowercase, but goto-definition will then not work on any word that is not lowercase. I see at least three possible solutions:
1) Add a caseInsensitive option to the tmPreferences. This works for case insensitive languages but is the least general approach. It has the advantage that the original capitalisation would be preserved in the symbol list.
2) Apply symbolTransformation to the current word before looking it up in the index. This might break something else, I'm not sure.
3) Add a lookupSymbolTransformation to the tmPreferences that would be applied to the current word before looking it up in the index.
Importance: major for case-insensitive languages
For arbitrary text, the python function html.escape often produces HTML codes that minihtml doesn't understand, such as &#x27; for ':
import html
view.add_phantom("id", sublime.Region(0,0), html.escape("'"), sublime.LAYOUT_BLOCK)
Obviously this makes it difficult to display arbitrary strings in Phantoms or Popups.
Importance: minor
Edit: It seems that minihtml currently works reliably with only the following substitutions (I have tested with all ASCII characters):
def to_html(s):
s = s.replace('&', '&')
s = s.replace('<', '<')
return s | https://forum.sublimetext.com/t/api-suggestions/20640?page=5 | CC-MAIN-2018-22 | en | refinedweb |
Faster builds with PCH suggestions from C++ Build Insights
Kevin
The creation of a precompiled header (PCH) is a proven strategy for improving build times. A PCH eliminates the need to repeatedly parse a frequently included header by processing it only once at the beginning of a build. The selection of headers to precompile has traditionally been viewed as a guessing game, but not anymore! In this article, we will show you how to use the vcperf analysis tool and the C++ Build Insights SDK to pinpoint the headers you should precompile for your project. We’ll walk you through building a PCH for the open source Irrlicht project, yielding a 40% build time improvement..
Viewing header parsing information in WPA
C++ Build Insights provides a WPA view called Files that allows you to see the aggregated parsing time of all headers in your program. After opening your trace in WPA, you can open this view by dragging it from the Graph Explorer pane to the Analysis window, as shown below.
The most important columns in this view are the ones named Inclusive Duration and Count, which show the total aggregated parsing time of the corresponding header and the number of times it was included, respectively.
Case study: using vcperf and WPA to create a PCH for the Irrlicht 3D engine
In this case study, we show how to use vcperf and WPA to create a PCH for the Irrlicht open source project, making it build 40% faster.
Use these steps if you would like to follow along:
- Clone the Irrlicht repository from GitHub.
97472da9c22ae4a.
- Open an elevated x64 Native Tools Command Prompt for VS 2019 Preview command prompt and go to the location where you cloned the Irrlicht project.
- Type the following command:
devenv /upgrade .\source\Irrlicht\Irrlicht15.0.sln. This will update the solution to use the latest MSVC.
- Download and install the DirectX Software Development Kit. This SDK is required to build the Irrlicht project.
- To avoid an error, you may need to uninstall the Microsoft Visual C++ 2010 x86 Redistributable and Microsoft Visual C++ 2010 x64 Redistributable components from your computer before installing the DirectX SDK. You can do so from the Add and remove programs settings page in Windows 10. They will be reinstalled by the DirectX SDK installer.
- Obtain a trace for a full rebuild of Irrlicht. From the repository’s root, run the following commands:
vcperf /start Irrlicht. This command will start the collection of a trace.
msbuild /m /p:Platform=x64 /p:Configuration=Release .\source\Irrlicht\Irrlicht15.0.sln /t:Rebuild /p:BuildInParallel=true. This command will rebuild the Irrlicht project.
vcperf /stop Irrlicht irrlicht.etl. This command will save a trace of the build in irrlicht.etl.
- Open the trace in WPA.
We open the Build Explorer and Files views one on top of the other, as shown below. The Build Explorer view indicates that the build lasted around 57 seconds. This can be seen by looking at the time axis at the bottom of the view (labeled A). The Files view shows that the headers with the highest aggregated parsing time were Windows.h and irrAllocator.h (labeled B). They were parsed 45 and 217 times, respectively.
We can see where these headers were included from by rearranging the columns of the Files view to group by the IncludedBy field. This action is shown below.
Creating a PCH
We first add a new pch.h file at the root of the solution. This header contains the files we want to precompile, and will be included by all C and C++ files in the Irrlicht solution. We only add the irrAllocator.h header when compiling C++ because it’s not compatible with C.
PCH files must be compiled before they can be used. Because the Irrlicht solution contains both C and C++ files, we need to create 2 versions of the PCH. We do so by adding the pch-cpp.cpp and pch-c.c files at the root of the solution. These files contain nothing more than an include directive for the pch.h header we created in the previous step.
We modify the Precompiled Headers properties of the pch-cpp.cpp and pch-c.c files as shown below. This will tell Visual Studio to create our 2 PCH files.
We modify the Precompiled Headers properties for the Irrlicht project as shown below. This will tell Visual Studio to use our C++ PCH when compiling the solution.
We modify the Precompiled Headers properties for all C files in the solution as follows. This tells Visual Studio to use the C version of the PCH when compiling these files.
In order for our PCH to be used, we need to include the pch.h header in all our C and C++ files. For simplicity, we do this by modifying the Advanced C/C++ properties for the Irrlicht project to use the
/FI compiler option. This change results in pch.h being included at the beginning of every file in the solution even if we don’t explicitly add an include directive.
A couple of code fixes need to be applied for the project to build correctly following the creation of our PCH:
- Add a preprocessor definition for HAVE_BOOLEAN for the entire Irrlicht project.
- Undefine the far preprocessor definition in 2 files.
For the full list of changes, see our fork on GitHub.
Evaluating the final result
After creating the PCH, we collect a new vcperf trace of a full rebuild of Irrlicht by following the steps in the Case study: using vcperf and WPA to create a PCH for an open source project section. We notice that the build time has gone from 57 seconds to 35 seconds, an improvement of around 40%. We also notice that Windows.h and irrAllocator.h no longer show up in the Files view as top contributors to parsing time.
Getting PCH suggestions using the C++ Build Insights SDK
Most analysis tasks performed manually with vcperf and WPA can also be performed programmatically using the C++ Build Insights SDK. As a companion to this article, we’ve prepared the TopHeaders SDK sample. It prints out the header files that have the highest aggregated parsing times, along with their percentage weight in relation to total compiler front-end time. It also prints out the total number of translation units each header is included in.
Let’s repeat the Irrlicht case study from the previous section, but this time by using the TopHeaders sample}/TopHeadersfolder, starting from the root of the repository.
- Follow the steps from the Case study: using vcperf and WPA to create a PCH for the Irrlicht 3D engine section to collect a trace of the Irrlicht solution rebuild. Use the
vcperf /stopnoanalyze Irrlicht irrlicht-raw.etlcommand instead of the
/stopcommand when stopping your trace. This will produce an unprocessed trace file that is suitable to be used by the SDK.
- Pass the irrlicht-raw.etl trace as the first argument to the TopHeaders executable.
As shown below, TopHeaders correctly identifies both Windows.h and irrAllocator.h as top contributors to parsing time. We can see that they were included in 45 and 217 translation units, respectively, as we had already seen in WPA.
Rerunning TopHeaders on our fixed codebase shows that the Windows.h and irrAllocator.h headers are no longer a concern. We see that several other headers have also disappeared from the list. These headers are referenced by irrAllocator.h, and were included in the PCH by proxy of irrAllocator.h.
Understanding the sample code
We first filter all stop activity events and only keep front-end file and front-end pass events. We ask the C++ Build Insights SDK to unwind the event stack for us in the case of front-end file events. This is done by calling
MatchEventStackInMemberFunction, which will grab the events from the stack that match the signature of
TopHeaders::OnStopFile. When we have a front-end pass event, we simply keep track of total front-end time directly.
AnalysisControl OnStopActivity(const EventStack& eventStack) override { switch (eventStack.Back().EventId()) { case EVENT_ID_FRONT_END_FILE: MatchEventStackInMemberFunction(eventStack, this, &TopHeaders::OnStopFile); break; case EVENT_ID_FRONT_END_PASS: // Keep track of the overall front-end aggregated duration. // We use this value when determining how significant is // a header's total parsing time when compared to the total // front-end time. frontEndAggregatedDuration_ += eventStack.Back().Duration(); break; default: break; } return AnalysisControl::CONTINUE; }
We use the
OnStopFile function to aggregate parsing time for all headers into our
std::unordered_map fileInfo_ structure. We also keep track of the total number of translation units that include the file, as well as the path of the header.
AnalysisControl OnStopFile(FrontEndPass fe, FrontEndFile file) { // Make the path lowercase for comparing std::string path = file.Path(); std::transform(path.begin(), path.end(), path.begin(), [](unsigned char c) { return std::tolower(c); }); auto result = fileInfo_.try_emplace(std::move(path), FileInfo{}); auto it = result.first; bool wasInserted = result.second; FileInfo& fi = it->second; fi.PassIds.insert(fe.EventInstanceId()); fi.TotalParsingTime += file.Duration(); if (result.second) { fi.Path = file.Path(); } return AnalysisControl::CONTINUE; }
At the end of the analysis, we print out the information that we have collected for the headers that have the highest aggregated parsing time.
AnalysisControl OnEndAnalysis() override { using namespace std::chrono; auto topHeaders = GetTopHeaders(); if (headerCountToDump_ == 1) { std::cout << "Top header file:"; } else { std::cout << "Top " << headerCountToDump_ << " header files:"; } std::cout << std::endl << std::endl; for (auto& info : topHeaders) { double frontEndPercentage = static_cast<double>(info.TotalParsingTime.count()) / frontEndAggregatedDuration_.count() * 100.; std::cout << "Aggregated Parsing Duration: " << duration_cast<milliseconds>( info.TotalParsingTime).count() << " ms" << std::endl; std::cout << "Front-End Time Percentage: " << std::setprecision(2) << frontEndPercentage << "% " << std::endl; std::cout << "Inclusion Count: " << info.PassIds.size() << std::endl; std::cout << "Path: " << info.Path << std::endl << std::endl; } return AnalysisControl::CONTINUE; }
Tell us what you think!
We hope the information in this article has helped you understand how to use C++ Build Insights to create new precompiled headers, or to optimize existing ones.
Give vcperf a try today by downloading the latest version of Visual Studio 2019, or by cloning the tool directly from the vcperf Github repository. Try out the TopHeaders sample from this article by cloning the C++ Build Insights samples repository from GitHub, or refer to the official C++ Build Insights SDK documentation to build your own analysis tools.
Have you been able to improve your build times with the header file information provided by vcperf or the C++ Build Insights SDK? Let us know in the comments below, on Twitter (@VisualC), or via email at [email protected].
Great article! I was able to reduce build time 30% in a project which already has precompiled headers.
Before this article, I used to choose the most repeated headers. Now I know how to choose the best headers in order to reduce build time.
Great tool! Thanks!
You’re welcome! Thanks for letting us know about your success with the tool!
Very great tool, i reduce my build duration from 25 minutes to 10 minutes.
With precompile header I increase the perf by ~40%, was spending 6 minutes inside mscvc/xxatomic.h
With the timeline tool i target specific modules to compile using unity build, i also gained 2 minutes.
Thanks
That’s awesome! Thanks for letting us know.
Nice tool, good article! But why isn’t PCH optimization already an integral part of VisualStudio?
PCH optimization requires build time metrics to be done accurately and before Build Insights we didn’t have easy access to those metrics for a full rebuild. There is a suggestion on Developer Community to integrate C++ Build Insights in the IDE. Feel free to upvote it if that’s something you are interested in:
Too large PCH files also make the build slow. What’s the best way to start analyzing in legacy projects when a PCH file already exists? Is there a recommendation for this?
I would suggest capturing a trace with the PCH disabled and building it again from scratch.
However if you just disable the PCH some headers will be included everywhere even when it’s not required. This might skew your results. The most accurate method would be to temporarily revert back to including individual headers only where they are needed if it’s not too much work.
If it’s too much work, you could keep including the headers everywhere but remove from the PCH the ones that don’t show up high in WPA. This way you can at least avoid triggering a full rebuild when modifying these headers. You can also add new ones when it’s worth it based on what shows up in WPA.
I hope this helps.
This tool is very impressive, great job !
Can you explain the difference between inclusive and exclusive duration ?
I have some files were: Count * exclusive < inclusive. What could be a logical explanation for such behaviour
Hi Paltoquet,
If A includes B and C, the inclusive duration of A is the time it takes to parse A and its inclusions B and C (i.e. the entire inclusion hierarchy rooted at A). The exclusive duration would be the time that was spent parsing only A, excluding the children B and C. As such, exclusive will always be smaller than inclusive. Does this answer your question?
Great article! Thanks for sharing this! I’m super excited to try this out, but when I open the .etl file with WPA, I do not see the “Diagnostics” section.
First I installed Visual Studio 2019 Version 16.6.1, and WPA Version 10.0.19041.1 (WinBuild.160101.0800). I tried capturing a trace and analyzing without placing perf_msvcbuildinsights.dll in the Performance Toolkit folder because a dll of that name was already present with the WPA that I installed. Opening the resulting .etl file, I only see “System Activity” and “Computation”.
Then I went back and placed my VS2019 perf_msvcbuildinsights.dll in C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit, and since the perfcore.ini file already had an entry for perf_msvcbuildinsights.dll, I did not modify it further. I re-captured the trace, and when opening the .etl file, I still only see “System Activity” and “Computation”.
Is there anything else I could try to get the “Diagnostics” view? Or is there anywhere I can look for causes of why the Diagnostics view is not loading?
Thanks!
I obtained the same version of WPA and VS as you are using and am able to collect and view a trace.
The most common reasons why no C++ Build Insights views are present under Diagnostics are:
1. Tracing an unsupported toolset. You said that you downloaded VS 2019 16.6.1, but are you also building your C++ project with this version? Sometimes people download the latest version of VS to get vcperf but their project is actually built using an older toolset. vcperf will only work for projects that are built with VS 2017 15.9 and above.
2. Not installing the WPA addin (perf_msvcbuildinsights.dll) correctly. To validate your installation, open WPA and go in Window -> Select Tables in the top menu. Scrolling down to the Diagnostics section, you should see 4 entries that start with C++ Build Insights. These entries should be checked. Not seeing them means the add-in was not installed correctly. Make sure you copied it to the right location. If the entries are there but unchecked, try clicking the Reset to default button at the top and restart WPA.
3. Collecting the trace incorrectly. Use vcperf /start MySession and vcperf /stop MySession traceFile.etl, and open traceFile.etl in WPA. Some people mistakenly use /stopnoanalyze instead of /stop, but this does not produce a trace that can be viewed in WPA.
I hope this helps. Please let me know if you are still unable to see the views after verifying the items mentioned above.
Thanks for the great tips Kevin! Our projects are built with VS 2015 toolset, and I confirmed that building using VS 2017 toolset allowed the Diagnostics view to show up!
It is also great to know how to verify if the WPA addin was installed correctly.
Thanks again!
You’re welcome! Consider upgrading to the latest toolset for building your projects. Not only does it come with improved linker performance, but also has the new C++ Build Insights template instantiation events. | https://devblogs.microsoft.com/cppblog/faster-builds-with-pch-suggestions-from-c-build-insights/ | CC-MAIN-2020-45 | en | refinedweb |
Response Headers¶)
Technical Details
You could also use
from starlette.responses import Response or
from starlette.responses import JSONResponse.
FastAPI provides the same
starlette.responses as
fastapi.responses just as a convenience for you, the developer. But most of the available responses come directly from Starlette.
And as the
Response can be used frequently to set headers and cookies, FastAPI also provides it at
fastapi.Response.. | https://fastapi.tiangolo.com/advanced/response-headers/ | CC-MAIN-2020-45 | en | refinedweb |
Here we learn iOS UI search bar in swift with example and how to use the iOS search bar in UI table view to search items in the collection with example in swift applications.
In iOS search bar is used to search items in the collection. Basically the search bar in iOS will provide textbox with search and cancel buttons interface and it will allow users to search for required data from collection items based on the text entered in a textbox.
Generally, if we use the iOS search bar in swift applications that will be like as shown below.
We can use the search bar in our iOS applications by adding UISearchBar class reference. Now we will see how to use the iOS UI search bar in swift applications to search for collection items in table view Search Bar: “Search in Table View” Table View in Filter field then drag and drop the Table View into Main.storyboard ViewController like as shown below same way add another View Controller in our Main.storyboard file.
Now connect ViewController to Data Source and Delegate like as shown below
Once we did all the settings we need to write custom code to search for items using the search bar. Our ViewController.swift file should contain code like as shown below
import UIKit
class TableViewController: UITableViewController, UISearchResultsUpdating {
let tableData = ["Austria","Australia","Srilanka","Japan"]
var filteredTableData = [String]()
var resultSearchController = UISearchController()
override func viewDidLoad() {
super.viewDidLoad()
self.resultSearchController = ({
let controller = UISearchController(searchResultsController: nil)
controller.searchResultsUpdater = self
controller.dimsBackgroundDuringPresentation = false
controller.searchBar.sizeToFit()
self.tableView.tableHeaderView = controller.searchBar
return controller
})()
// Reload the table
self.tableView.reloadData()
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
override func numberOfSectionsInTableView(tableView: UITableView) -> Int {
return 1
}
override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
if (self.resultSearchController.active) {
return self.filteredTableData.count
}
else {
return self.tableData.count
}
}
overridefunc tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
let cell = tableView.dequeueReusableCellWithIdentifier("Cell", forIndexPath: indexPath) as! UITableViewCell
if (self.resultSearchController.active) {
cell.textLabel?.text = filteredTableData[indexPath.row]
return cell
}
else {
cell.textLabel?.text = tableData[indexPath.row]
return cell
}
}
func updateSearchResultsForSearchController(searchController: UISearchController)
{
filteredTableData.removeAll(keepCapacity: false)
let searchPredicate = NSPredicate(format: "SELF CONTAINS[c] %@", searchController.searchBar.text!)
let array = (tableDataasNSArray).filteredArrayUsingPredicate(searchPredicate)
filteredTableData = array as! [String]
self.tableView.reloadData()
}
}
Now we will run and check the output of application. To run application, select the required simulator (Here we selected iPhone 6s Plus) and click on Play button, located at the top-left corner of the Xcode toolbar like as shown below.
Following is the result of the iOS search bar application in swift. Now see that in top of the table view we have a search bar in that enter text to get matching results like as shown below.
This is how we can use the iOS search bar in table view to search for collection items in the swift application based on our requirements. | https://www.tutlane.com/tutorial/ios/ios-ui-search-bar-in-table-view | CC-MAIN-2020-45 | en | refinedweb |
Under IEEE-754, floating point numbers are represented in binary as:
Number = signbit \* mantissa \* 2exponent
There are potentially multiple ways of representing the same number, using decimal as an example, the number 0.1 could be represented as 1\*10-1 or 0.1\*100 or even 0.01 \* 10
Now suppose that the lowest exponent that can be represented is -100. So the smallest number that can be represented in normal form is 1\*10-100. However, if we relax the constraint that the leading bit be a one, then we can actually represent smaller numbers in the same space. Taking a decimal example we could represent 0.1\*10-100. This is called a subnormal number. The purpose of having subnormal numbers is to smooth the gap between the smallest normal number and zero.
It is very important to realise that subnormal numbers are represented with less precision than normal numbers. In fact, they are trading reduced precision for their smaller size. Hence calculations that use subnormal numbers are not going to have the same precision as calculations on normal numbers. So an application which does significant computation on subnormal numbers is probably worth investigating to see if rescaling (i.e. multiplying the numbers by some scaling factor) would yield fewer subnormals, and more accurate results.
The following program will eventually generated subnormal numbers:
#include <stdio.h>
void main()
{
double d=1.0;
while (d>0) {printf("%e\\n",d); d=d/2.0;}
}
Compiling and running this program will produce output that looks like:
$ cc -O ft.c
$ a.out
...
3.952525e-323
1.976263e-323
9.881313e-324
4.940656e-324
The downside with subnormal numbers is that computation on them is often deferred to software - which is significantly slower. As outlined above, this should not be a problem since computations on subnormal numbers should be both rare and treated with suspicion.
However, sometimes subnormals come out as artifacts of calculations, for example subtracting two numbers that should be equal, but due to rounding errors are just slightly different. In these cases the program might want to flush the subnormal numbers to zero, and eliminate the computation on them. There is a compiler flag that needs to be used when building the main routine called
-fns which enables the hardware to flush subnormals to zero. Recompiling the above code with this flag yields the following output:
$ cc -O -fns ft.c
$ a.out
...
1.780059e-307
8.900295e-308
4.450148e-308
2.225074e-308
Notice that the smallest number when subnormals are flushed to zero is 2e-308 rather than 5e-324 that is attained when subnormals are enabled.
A quick search of IEEE-754 shows that it makes no mention of "subnormals". They do talk about "denormalized" numbers, that seem to be what you are talking about.
I think that there are far more subtle effects of using denormals than you included in your blog, and that an article on floating point should include a strong warning to the effect that the use of floating point in general is not for the faint of heart.
I suggest thorough study, especially if you are going to consider using denormals.
Yes denormal == subnormal
Yes, the recommended text is "what every computer scientist should know about floating point"
Thanks!
Darryl. | https://blogs.oracle.com/d/subnormal-numbers | CC-MAIN-2020-45 | en | refinedweb |
Hi everyone. I'm newbie at unity and having some difficulties. I created an ultra very hiper simple enemy AI which just follows some balls, on right-left or top-down, it changes according to the enemy the script is attached. Well, my problem is: Once a ball gets instatiated in the "arena", all enemies keep following it, till it is destroyed. I would like to set a range or something else to each enemy start following the ball (Sphere or capsule collider maybe?). Other problem I have is, when the last ball is destroyed in the left side, per example, and the next one is instantiated at the right side, the enemy "blink" to the x/z position of the ball and starts following its trail. Hope you understand. Please, help me with codes, not with links or rude answers. Here's my code:
using UnityEngine;
using System.Collections;
public class Enemy_Top : MonoBehaviour {
Rigidbody myBall;
public float minHeight;
public float maxHeight;
// Use this for initialization
void Start () {
}
// Update is called once per frame
void FixedUpdate () {
//GameObject Ball;
//Ball = GameObject.FindGameObjectWithTag ("Bola");
myBall = GameObject.FindWithTag ("Bola").GetComponent<Rigidbody> ();
if(myBall.transform.position.x > transform.position.x){
transform.position = new Vector3 (myBall.transform.position.x + 2.0f, transform.position.y, transform.position.z);
}
else if(myBall.transform.position.x < transform.position.x){
transform.position = new Vector3 (myBall.transform.position.x + 2.0f, transform.position.y, transform.position.z);
}
if (transform.position.x > maxHeight) {
transform.position = new Vector3(maxHeight, transform.position.y, transform.position.z);
}
else if (transform.position.x < minHeight){
transform.position = new Vector3(minHeight, transform.position.y, transform.position.z);
}
}
}
Answer by drudiverse
·
Oct 12, 2014 at 10:41 AM
either way round, the enemies can check objects in distance around them or the player can check distance, and if distnace is smaller, then activate the enemy at the distance. do it most efficient way.
you can also send rays out from teh robots facing direction, and if a ray hits the target the robot has seen it. using raycast.
You have to learn Vector3.Distance() fucntion. we all look the functions up very actively in the unity reference, i personally press f1 on any selected code and it sends me straight to the reference page.
Thank you for the tip about pressing "F1" drudiverse, I never knew that!
Answer by MrSoad
·
Oct 12, 2014 at 11:11 AM
You could use a trigger collider on the enemy.
function OnTriggerEnter (other : Collider) {
if (other.gameObject.tag == "Player") {
//Activate your enemy.
//drudiverse is right that you will have to use Vector3 math to control
//your enemy actions
}
}
Well, thanks.. I got them working, but they are switching the balls too fast. Example: Ball number one comes in the trigger, he will follow it; Ball number two comes in the trigger before the enemy defends the ball number one, so the enemy will "forget" the ball number one and follow the number two, and so it goes... I got a little code for detecting the ball entering the trigger, here it is:
using UnityEngine;
using System.Collections;
public class Trigger_for_AI : MonoBehaviour {
public Rigidbody myBall;
void OnTriggerEnter(Collider other){
if (other.gameObject.tag == "Bola") {
myBall = other.GetComponent<Rigidbody>();
}
}
void OnTriggerExit(Collider other){
if (other.gameObject.tag == "Bola") {
myBall = null;
}
}
}
I was wondering: If my enemies know if the ball they was following alredy hit, so they can start following other ball...A bool in the enemy (OnCollisionEnter(exit)?)... But, I have 3 enemies, do I need to have 3 scripts? One for each enemy, once they are in different positions?
You need to store each ball in a var when it enters the trigger(so three vars). You need to keep track of which ball should be followed, I would use an int set from 1 to 3 to identify which of the three ball object vars is current. When one is defended check which of the other vars is full, if only one is full set that to your next target, if both are full then check which one is closest and set that one to be your new current.
Hope this helps a bit.
Got it.. But in code terms, how it would be? They know if there is a ball into the trigger by this code:
myBall = GameObject.FindWithTag("Capsula_left").GetComponent<Trigger_for_AI>(). myBall;
myBall = GameObject.FindWithTag("Capsula_right").GetComponent<Trigger_for_AI>(). myBall;
myBall = GameObject.FindWithTag("Capsula_top").GetComponent<Trigger_for_AI>(). myBall;
Whenever a ball enters the trigger store the whole object in a var.
If all the vars are currently empty then this object becomes the target(your target int var).
If one leaves the trigger then check against the stored objects to see which one has left and empty that var.
You always store a new ball entry in the first available empty object var.
Have a function called in update which checks if any of the three ball gameobject vars is not empty.
If at least one is not empty then run your reaction code function which includes the sort of stuff above for the object that you want to follow.
When an object is destroyed/dealt with then empty the gameobject var for that object. Then empty your current target var(set to 0).
Now in your next update it will check if any vars are full, if they are it will see if it has a target, if it does not have a current target it will do the distance check to find one..
triggering random animations with gui
1
Answer
Can't click gameobject when over another trigger?
1
Answer
Repeat While Object in Range
1
Answer
How to make Enemies
3
Answers
How can I keep enemies that follow my Player, from becoming a "blob"?
1
Answer | https://answers.unity.com/questions/806956/how-to-get-my-enemies-to-detect-presence.html | CC-MAIN-2020-45 | en | refinedweb |
Asked by:
Custom UserNamePasswordValidator.Validate never hit in Visual Studio
Question
Hi,
I'm trying to use a custom UserNamePasswordValidator to validate credentials when using a WCF service. However, the Validate method is never called in Visual Studio 2017 (.net framework 4.5.2).
Here is my custom UserNamePasswordValidator class :
public class CustomUserNameValidator : UserNamePasswordValidator { public CustomUserNameValidator() { } public override void Validate(string userName, string password) { ILoginDomainProvider loginDomainProvider = NinjectWebCommon.Kernel.Get<ILoginDomainProvider>(); if (loginDomainProvider.IsLoginValid(new Login(userName, password))) return; throw new SecurityTokenException("Account is invalid"); } }
The LoginDomainProvider class just checks the presence of the username password pair in a MySql database.
My web.config :
<system.diagnostics>
<sources>
<source name="System.ServiceModel"
switchValue="Information, ActivityTracing"
propagateActivity="true">
<listeners>
<add name="traceListener"
type="System.Diagnostics.XmlWriterTraceListener"
initializeData= "c:\temp\log\Traces.svclog" />
</listeners>
</source>
</sources>
</system.diagnostics>
<system.serviceModel> <behaviors> <serviceBehaviors> <behavior name="ServiceBehaviorUsernameValidator"> <dataContractSerializer maxItemsInObjectGraph="6553500"/> <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="true" /> ="CurrentUser" /> </serviceCredentials> <!-- <serviceAuthorization principalPermissionMode="UseAspNetRoles" roleProviderName="AspNetSqlRoleProvider" /> --> </behavior> <behavior> <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> <protocolMapping> <add binding="basicHttpBinding" scheme="http" /> <add binding="basicHttpsBinding" scheme="https" /> </protocolMapping> <bindings> <wsHttpBinding> <binding name="WsHttpBindingConfig" maxBufferPoolSize="200000" maxReceivedMessageSize="200000" sendTimeout="00:01:00"> <readerQuotas maxStringContentLength="200000" /> <security mode="Message"> <message clientCredentialType="UserName"/> </security> </binding> </wsHttpBinding> </bindings> <services> <service name="Braille.Services.ExerciseService" behaviorConfiguration="ServiceBehaviorUsernameValidator"> <endpoint address="" binding="wsHttpBinding" bindingName="WsHttpBindingConfig" contract="Braille.Contracts.IExerciseService" /> </service> </services> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" /> </system.serviceModel>
My service :
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)] public class ExerciseService : IExerciseService { private readonly IDomainProviderFactory _domainProviderFactory; public ExerciseService(IDomainProviderFactory domainProviderFactory) { Contract.Requires(domainProviderFactory != null); _domainProviderFactory = domainProviderFactory; } #region IExerciseService public List<LocalizedLabel> GetLabels() { var exerciseDomainProvider = _domainProviderFactory.CreateExerciseDomainProvider(); var labels = exerciseDomainProvider.GetLabels(); return labels; } public string Test() { return "test"; } #endregion }
I've seen multiple posts related to custom UserNamePasswordValidator being ignored, but they involved using security="Transport" in IIS, which is different from my case.
When putting a breakpoint on the parameterless constructor of CustomUserNameValidator, it is actually hit when accessing the ExerciseService service. If I put invalid information (class name or assembly) in the<userNameAuthentication> section, an exception is thrown. So, it seems my class is somehow recognized. However, the breakpoint on the Validate method is never hit when calling methods from my service, and they return whatever results they are supposed to return, even when using WcfTestClient.exe and incorrect authentication data.
I tried using different modes in the <security> section (my end goal for the security mode will be TransportWithMessageCredential) , replacing the code of my Validate method to just throwing a FaultException, or commenting the <serviceCertificate> section to try to shake things up, but nothing changes. My service methods still work with invalid authentication. I'm also unsure as to how (or if) the Traces.svclog can help me in this case.
I'm at a loss here. I'm definitely not an expert when it comes to WCF security, so I'm probably missing something obvious, but I'd be grateful if someone could shed a bit of light in this case.
Thanks!
Kevin
All replies
Hi Kevin,
There is something wrong in your web.config.
You should use “bindingConfiguration” for WsHttpBindingConfig instead of bindingName.
<services> <service name="ExerciseService.Service1" behaviorConfiguration="ServiceBehaviorUsernameValidator"> <endpoint address="" binding="wsHttpBinding" bindingConfiguration="WsHttpBindingConfig" contract="ExerciseService.IService1" /> </service> </services> Edward8520Microsoft contingent staff Monday, September 4, 2017 3:04 AM
Hi Edward,
Thank you for your answer. Apparently, when I tried testing different things to make it work, I made a mistake when rewriting this attribute.
Now that this is corrected, I'm getting a "The user name is not provided. Specify a user name in ClientCredentials" error when calling a method from my service in WcfTestClient.exe, which is at least in the right direction. However, the breakpoint on CustomUserNameValidator's Validate() method is still never called.
Even when all code is removed from inside the Validate() method, I'm still getting the above mentioned error when calling a method from my service. To my understanding, as long as an exception is not thrown inside Validate(), the user should be considered authenticated.
Am I still missing something?
Thank you!
Kevin
Hi Kevin,
>> as long as an exception is not thrown inside Validate(), the user should be considered authenticated
Not all right. Before checking Validate method, the Security Mode from Binding is checked first. Since you specify UserName security, UserName and password is required and you need to provide them at client side.
For checking Validate, you need to consume your service by generating client code and providing UserName and password. It could not be tested in wcftestclient.
Since your previous issue related with wrong web.config has been resolved, I would suggest you mark the helpful reply as answer to close previous issue.
If you have any issue related with Validate method, I would suggest you post a new thread and then we could focus on this specific issue. Kevin B 02 Monday, September 4, 2017 7:26 AM
Hi Edward,
>>Not all right. Before checking Validate method, the Security Mode from Binding is checked first. Since you specify UserName security, UserName and password is required and you need to provide them at client side.
This actually managed to put me in the right track, and after a few trial and errors with my self-signed X509 certificate, I finally got to hit the breakpoint on Validate method and test authentication.
Here is my proxy class on the client side :
public class ExerciseProxy : ClientBase<IExerciseService>, IExerciseService { #region Members private readonly ICredentialsProvider<ClientCredentials> _credentialsProvider; #endregion #region CTors public ExerciseProxy(ICredentialsProvider<ClientCredentials> credentialsProvider) { Contract.Requires(credentialsProvider != null); _credentialsProvider = credentialsProvider; } #endregion protected override IExerciseService CreateChannel() { ChannelFactory<IExerciseService> channelFactory = new ChannelFactory<IExerciseService>("ExerciseService_Endpoint"); var defaultCredentials = channelFactory.Endpoint.Behaviors.Find<ClientCredentials>(); if (defaultCredentials != null) channelFactory.Endpoint.Behaviors.Remove(defaultCredentials); _credentialsProvider.SetCredentials("username", "password"); channelFactory.Endpoint.Behaviors.Add(_credentialsProvider.Credentials); channelFactory.Credentials.ServiceCertificate.Authentication.CertificateValidationMode = System.ServiceModel.Security.X509CertificateValidationMode.None; return channelFactory.CreateChannel(); } #region IExerciseService public List<LocalizedLabel> GetLabels() { return Channel.GetLabels(); } public string Test() { return Channel.Test(); } #endregion
, my app_config on the client side :
<system.serviceModel> <bindings> <wsHttpBinding> <binding name="WsHttpBinding_DefaultBinding"> <security> <message clientCredentialType="UserName" /> </security> </binding> </wsHttpBinding> </bindings> <client> <endpoint name="ExerciseService_Endpoint" address="" binding="wsHttpBinding" bindingConfiguration="WsHttpBinding_DefaultBinding" contract="Braille.Contracts.IExerciseService"> <identity> <dns value="Braille.WcfHost"/> </identity> </endpoint> </client> </system.serviceModel>
, and change the serviceCredentials section on the server's web config with the following :
="LocalMachine" /> </serviceCredentials>
Thank you very much for your help!
Kevin
Hi Kevin,
I am glad your issue has been resolved, and you could suggest you click "Mark as answer" to close this thread.. | https://social.msdn.microsoft.com/Forums/en-US/05ee0cee-9d45-402e-a019-186a5d95deaa/custom-usernamepasswordvalidatorvalidate-never-hit-in-visual-studio?forum=wcf | CC-MAIN-2020-45 | en | refinedweb |
Thanks for the reply :) That was one of my first ideas and in theory it would work. One question though is that to add a varient to a node, I'm physiucally dragging varients over and using the link option when prompted, this updates and moves it automatically (reference here).
Does linking still call the published event even though I don't need to actually hit the publish button?
Changes to Relations/Associations between nodes and entries can be listened via CatalogEventListenerBase . What I think that might work for you is:
- Implement an implemenation of CatalogEventListenerBase
- Override AssociationUpdating and RelationUpdating methods
- In these method, check if the changes satisfy your conditions or not.
Those events are ecf events, they work with DTO (CatalogRelationDto and CatalogAssociationDto).
Regards.
/Q
That looks promsiing, lathoguth my codes not getting hit. How do i register it?
I've got this
public class MyCatalogEventListenerBase : CatalogEventListenerBase { public new void AssociationUpdating(object source, AssociationEventArgs args) { var variant = source as GameVariation; var boo = variant != null; } public new void RelationUpdating(object source, RelationEventArgs args) { var variant = source as GameVariation; var boo = variant != null; } }
and I also added in a structure map config but still nothign, is there somethign i'm missing?
container.For<CatalogEventListenerBase>() .Use(ctx => new MyCatalogEventListenerBase());
Perfect, thanks for your help. That's got it working. Last question though.. I can now do look-ups and prevent extra nodes, is there a recommened way that I can feed this back to the user? I assume I'd write somethign like this :
public override void RelationUpdating(object source, RelationEventArgs args) { var variant = source as GameVariation; var boo = variant != null; if (beep == NotUnderAnotherNode()) base.RelationUpdating(source, args); else // Display Error Somehow }
In the CMS I'd use soemthig like IValidate to return a ValidationError, any suggestions?
Hey, we have a requirement where we can have a selection of nodes, foir a rough example say we call them:
10% Discount
20% Discount
Content editors can drag varients under the node they want the discount to be applied and associate them. they will not live directly under these nodes.
What is the best way to prevent content editors from being able to drag a varient under more than one of these node? E.g. if variant one, has had an association added to it under the 10% discount node, if a content editor tries to additionally create a second association under 20% discount they should get some form of warning.
Thanks!
Jon | https://world.episerver.com/forum/developer-forum/Episerver-Commerce/Thread-Container/2015/9/validation-on-a-node2/ | CC-MAIN-2020-45 | en | refinedweb |
Converting CSV to HTML table in Python
In this post, we are going to see how to convert a CSV file to an HTML table in Python. Here, we will discuss two methods that are available in Python.
2 Methods:
- Using pandas.
- Using PrettyTable.
CSV file:
- Expansion: Comma Separated Value file.
- To exchange data between applications, a CSV file can be used.
- It is a text file that has information that is separated by commas.
- Extension: .csv
Method 1: Using pandas
Among the 2 methods, the simplest one is using pandas. Pandas is very suitable to work with data that is in structural form. It is fast and provides expressive data structures. We are going to show you how we can use the Pandas library to convert a CSV into an HTML table.
Installation:
pip install pandas
Below is the CSV file,
“”
- First, we imported the pandas library.
- Then we read the CSV file using the read_csv() method.
- Syntax: pandas.read_csv(csv_file)
- After that, our CSV file is converted into HTML file using to_html() method.
- Syntax: file.to_html(filename)
Now, we have a look at the program.
import pandas file = pandas.read_csv("Student.csv") file.to_html("StudentTable.html")
After executing the above code, our HTML table will look like below,
“”
Method 2: Using PrettyTable
When there is a need to create quick and simple ASCII tables, PrettyTable library can be used.
Installation:
pip install PrettyTable
Let’s look into our program.
- We have imported the PrettyTable library initially.
- Then we opened the CSV file in reading mode using open() method.
- Syntax: open(filename,mode)
- After that, we read all the lines from the CSV files using readlines() method.
- Syntax: file.readlines()
- We assigned file[0] to the head variable. Because file[0] contains the headings present in the CSV file.
- Then we used the split() method which is used to separate the given string based on the separator given.
- Synatx: string.split(separator)
- We added rows to the table using the add_row() method.
- Syntax: table.add_row(data)
- Then, get_html_string() method is used to return the string representation of HTML table’s version.
- Syntax: table.get_html_string()
- Finally, we wrote the entire data into the final HTML file using the file.write() method
from prettytable import PrettyTable file = open("Student.csv", 'r') file = file.readlines() head = file[0] head = head.split(',') #for headings table = PrettyTable([head[0], head[1],head[2]]) for i in range(1, len(file)) : table.add_row(file[i].split(',')) htmlCode = table.get_html_string() final_htmlFile = open('StudentTable2.html', 'w') final_htmlFile=final_htmlFile.write(htmlCode)
After the execution of the code, our output will look like below.
“”
I hope that this tutorial has taught you something new and useful. | https://www.codespeedy.com/converting-csv-to-html-table-in-python/ | CC-MAIN-2020-45 | en | refinedweb |
Machine Learning, Editorial, Programming, Tutorial
Recommendation System Tutorial with Python using Collaborative Filtering
Building a machine learning recommendation system tutorial using Python and collaborative filtering for a Netflix use case.
Author(s): Saniya Parveez, Roberto Iriondo
Introduction
A recommendation system is a machine learning-based system that predicts which items a particular user is likely to enjoy, given the available data.
According to McKinsey:
75% of what people are watching on Netflix comes from recommendations [1].
Netflix's real-time data, at a glance:
- More than 20,000 movies and shows.
- 2 million users.
Complications
Recommender systems are machine learning-based systems that scan through all possible options and provide a prediction or recommendation. However, building a recommendation system comes with the following complications:
- Users' data is volatile and changes over time.
- The data volume is large and includes a significant list of movies, shows, customers’ profiles and interests, ratings, and other data points.
- New registered customers use to have very limited information.
- Real-time prediction for users.
- Old users can have an overabundance of information.
- It should not show items that are very different or too similar.
- Users can change an item's rating whenever they change their minds.
Types of Recommendation Systems
There are two types of recommendation systems:
- Content filtering recommender systems.
- Collaborative filtering based recommender systems.
Fun fact: Netflix's recommender system filtering architecture is based on collaborative filtering [2] [3].
Content Filtering
Content filtering expects side information about each item, such as its properties (song name, singer name, movie name, language, and so on). Content-based recommenders perform well even when new items are added to the library, as long as the algorithm is given all of the side properties of the library's items. A minimal sketch follows the list below.
Essential aspects of content filtering:
- Expects item information.
- Item information should be in a text document.
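As a minimal illustration of content filtering, the sketch below represents made-up item descriptions as TF-IDF vectors and ranks items by cosine similarity to a query item. The descriptions and item indices are invented for this example; any text metadata (title, genre, synopsis) could stand in for them.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

descriptions = [
    "animated family comedy about toys",   # item 0
    "space opera with galactic battles",   # item 1
    "animated comedy about talking cars",  # item 2
]
tfidf = TfidfVectorizer(stop_words="english")
item_vectors = tfidf.fit_transform(descriptions)  # item x term matrix

# Rank all items by their similarity to item 0
scores = cosine_similarity(item_vectors[0], item_vectors).ravel()
print(scores.argsort()[::-1])  # item 2 ranks closest to item 0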
Collaborative Filtering
The idea behind collaborative filtering is to consider users’ opinions on different videos and recommend the best video to each user based on the user’s previous rankings and the opinion of other similar types of users.
Pros:
- It does not need a movie's side information, such as genres.
- It uses information collected from other users to recommend new items to the current user.
Cons:
- It cannot make recommendations for a new movie or show that has no ratings yet.
- It requires a user community and can suffer from a sparsity problem.
Different techniques of Collaborative filtering:
Non-probabilistic algorithm
- User-based nearest neighbor.
- Item-based nearest neighbor.
- Reducing dimensionality.
Probabilistic algorithm
- Bayesian-network model.
- EM algorithm.
Issues in Collaborative Filtering
There are several challenges for collaborative filtering, as mentioned below:
Sparseness
The Netflix recommendation system's dataset is extensive, and the user-item matrix used by the algorithm is vast and sparse, which creates performance problems.
Sparsity is the ratio of empty records to total records in the user-item matrix:
Sparsity = 1 − |R| / (|I| × |U|)
Where:
|R| = number of ratings
|I| = number of items
|U| = number of users
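As a quick illustration of this formula (on a randomly generated toy matrix rather than the Netflix data), the sparsity of a SciPy sparse user-item matrix can be computed directly:

from scipy import sparse

# Toy user-item matrix: 1,000 users x 500 items, ~2% of cells filled
ratings = sparse.random(1000, 500, density=0.02, format="csr")
n_users, n_items = ratings.shape
sparsity = 1 - ratings.count_nonzero() / (n_users * n_items)
print(f"Sparsity: {sparsity:.2%}")  # roughly 98% of the cells are empty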
Cold Start
This problem occurs when the system has no information to make recommendations for new users. As a result, matrix factorization techniques cannot be applied.
This problem brings two observations:
- How to recommend a new video for users?
- What video to recommend to new users?
Solutions:
- Suggest or ask users to rate videos.
- Default voting for videos.
- Use other techniques like content-based or demographic for the initial phase.
User-based Nearest Neighbor
The basic technique of user-based Nearest Neighbor for the user John:
John is an active Netflix user and has not seen a video “v” yet. Here, the user-based nearest neighbor algorithm will work like below:
- The technique finds a set of users or nearest neighbors who have liked the same items as John in the past and have rated video “v.”
- The algorithm predicts John's rating for video "v" from those neighbors' ratings.
- It repeats this for all the items John has not seen and recommends the best ones.
Essentially, the user-based nearest neighbor algorithm generates a prediction for item i by analyzing the rating for i from users in u’s neighborhood.
Let's calculate user similarity for the prediction (the Pearson correlation between the two users' ratings):
sim(a, b) = Σp∈P (r(a, p) − r̄(a)) × (r(b, p) − r̄(b)) / ( √(Σp∈P (r(a, p) − r̄(a))²) × √(Σp∈P (r(b, p) − r̄(b))²) )
Where:
a, b = users
r(a, p) = rating of user a for item p
r̄(a) = average rating of user a
P = set of items rated by both users a and b
Prediction based on the similarity function:
pred(a, p) = r̄(a) + Σb∈N sim(a, b) × (r(b, p) − r̄(b)) / Σb∈N sim(a, b)
where N is the set of user a's nearest neighbors who have rated item p.
Here, similar users are defined by those that like similar movies or videos.
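The sketch below is a compact, illustrative implementation of both formulas on a small dense toy matrix (0 marks an unrated item). It is meant to build intuition and is not the approach used later in this tutorial:

import numpy as np

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4]], dtype=float)  # toy ratings, 0 = unrated

def predict(a, p, k=2):
    # Mean rating of each user over the items they actually rated
    means = np.array([R[u][R[u] > 0].mean() for u in range(len(R))])
    mask_a = R[a] > 0
    sims = []
    for b in range(len(R)):
        if b == a:
            continue
        co = mask_a & (R[b] > 0)          # items rated by both a and b
        if co.sum() < 2 or R[b, p] == 0:  # need overlap and a rating on p
            continue
        da, db = R[a, co] - means[a], R[b, co] - means[b]
        denom = np.linalg.norm(da) * np.linalg.norm(db)
        if denom > 0:
            sims.append((da @ db / denom, b))  # Pearson-style similarity
    top = sorted(sims, reverse=True)[:k]       # k nearest neighbors
    num = sum(s * (R[b, p] - means[b]) for s, b in top)
    den = sum(abs(s) for s, _ in top)
    return means[a] + num / den if den else means[a]

print(predict(1, 1))  # predicted rating of user 1 for item 1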
Challenges
- For a considerable amount of data, the algorithm encounters severe performance and scaling issues.
- Computational cost can be O(MN) in the worst case, where M is the number of customers and N is the number of items.
- Performance can be increased by applying dimensionality reduction; however, this can reduce the quality of the recommendation system.
Item-based Nearest Neighbor
This technique generates predictions based on similarities between different videos or movies or items.
Prediction for a user u and item i is composed of a weighted sum of the user u’s ratings for items most similar to i.
As shown in figure 8, we look for the videos that are most similar to video5; here, the closest match is video4, so video4's rating carries the most weight in the prediction.
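A minimal sketch of this idea on a toy dense matrix (0 = unrated): the prediction for user u and item i is the similarity-weighted average of u's own ratings, pred(u, i) = Σj sim(i, j) × r(u, j) / Σj sim(i, j), taken over the items j that u has rated.

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

R = np.array([[5, 3, 4, 0],
              [4, 3, 5, 1],
              [1, 5, 1, 5]], dtype=float)  # toy ratings, 0 = unrated

item_sims = cosine_similarity(R.T)  # similarity between item columns

def predict_item_based(u, i):
    rated = R[u] > 0               # items user u has rated
    weights = item_sims[i][rated]  # how similar they are to item i
    return weights @ R[u][rated] / weights.sum()

print(predict_item_based(0, 3))  # predicted rating of user 0 for item 3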
Role of Cosine Similarity in building Recommenders
Cosine similarity is a metric used to measure the similarity between items/products irrespective of their magnitude. It is calculated as the cosine of the angle between two vectors in a multidimensional space, which makes it suitable for high-dimensional data such as large documents.
similarity = cos(θ) = (p · q) / (||p|| × ||q||)
Where:
cos(θ) takes values between -1 and 1, where -1 denotes dissimilar items and 1 denotes a perfect match.
p · q is the dot product between the vectors.
||p|| ||q|| is the product of the vectors' magnitudes.
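For example, computing the cosine similarity of two rating vectors directly from this formula:

import numpy as np

p = np.array([5, 3, 0, 1], dtype=float)
q = np.array([4, 3, 0, 1], dtype=float)

cos_sim = p @ q / (np.linalg.norm(p) * np.linalg.norm(q))
print(cos_sim)  # close to 1.0: the vectors point in almost the same direction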
Why do Baseline Predictors for Recommenders matter?
Baseline predictors are independent of the user's ratings, yet they can provide predictions even for new users. A short illustrative sketch follows the list below.
General Baseline form
bu,i = µ + bu + bi
Where:
µ is the global average rating, and bu and bi are the user and item baseline predictors (their deviations from µ).
Motivation for Baseline
- Imputation of missing values with baseline values.
- Comparing accuracy against more advanced models.
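The sketch below illustrates the baseline form on a toy matrix (it is not part of this tutorial's pipeline); here b(u) and b(i) are simply the user's and item's mean deviations from the global average µ.

import numpy as np

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5]], dtype=float)  # toy ratings, 0 = unrated
mask = R > 0

mu = R[mask].mean()  # global average rating
b_u = np.array([R[u][mask[u]].mean() - mu for u in range(R.shape[0])])
b_i = np.array([R[mask[:, i], i].mean() - mu if mask[:, i].any() else 0.0
                for i in range(R.shape[1])])

def baseline(u, i):
    return mu + b_u[u] + b_i[i]

print(baseline(1, 2))  # usable even though user 1 never rated item 2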
Netflix Movie Recommendation System
Problem Statement
Netflix is a platform that provides online movie and video streaming. Netflix wants to build a recommendation system that predicts a list of movies for each user based on that user's likes and dislikes of other movies. The recommendations are personalized to every user's unique interests.
Netflix Dataset
- combine_data_2.txt: This text file contains movie_id, customer_id, rating, date
- movie_title.csv: This CSV file contains movie_id and movie_title
Load Dataset
from datetime import datetime
import pandas as pd
import numpy as np
import seaborn as sns
import os
import random
import matplotlib
import matplotlib.pyplot as plt
from scipy import sparse
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.metrics import mean_squared_error

import xgboost as xgb
from surprise import Reader, Dataset
from surprise import BaselineOnly
from surprise import KNNBaseline
from surprise import SVD
from surprise import SVDpp
from surprise.model_selection import GridSearchCV

def load_data():
    # The raw file interleaves "movie_id:" header lines with
    # "customer_id,rating,date" rows; flatten it into one CSV.
    netflix_csv_file = open("netflix_rating.csv", mode="w")
    rating_files = ['combined_data_1.txt']
    for file in rating_files:
        with open(file) as f:
            for line in f:
                line = line.strip()
                if line.endswith(":"):
                    movie_id = line.replace(":", "")
                else:
                    row_data = [item for item in line.split(",")]
                    row_data.insert(0, movie_id)
                    netflix_csv_file.write(",".join(row_data))
                    netflix_csv_file.write('\n')
    netflix_csv_file.close()
    df = pd.read_csv('netflix_rating.csv', sep=",",
                     names=["movie_id", "customer_id", "rating", "date"])
    return df

netflix_rating_df = load_data()
netflix_rating_df.head()
Analysis of the Dataset
Find duplicate ratings:
netflix_rating_df.duplicated(["movie_id","customer_id", "rating", "date"]).sum()
Split train and test data:
split_value = int(len(netflix_rating_df) * 0.80)
train_data = netflix_rating_df[:split_value]
test_data = netflix_rating_df[split_value:]
Count number of ratings in the training data set:
plt.figure(figsize = (12, 8))
ax = sns.countplot(x="rating", data=train_data)
ax.set_yticklabels([num for num in ax.get_yticks()])
plt.tick_params(labelsize=15)
plt.title("Count Ratings in train data", fontsize = 20)
plt.xlabel("Ratings", fontsize = 20)
plt.ylabel("Number of Ratings", fontsize = 20)
plt.show()
Find the number of rated movies per user:
no_rated_movies_per_user = train_data.groupby(by = "customer_id")["rating"].count().sort_values(ascending = False)
no_rated_movies_per_user.head()
Find the Rating number per Movie:
no_ratings_per_movie = train_data.groupby(by = "movie_id")["rating"].count().sort_values(ascending = False)
no_ratings_per_movie.head()
Create User-Item Sparse Matrix
In a user-item sparse matrix, items are represented by the columns and users by the rows, with each user's rating stored in the corresponding cell. The matrix is sparse because each user rates only a small fraction of the movies, so most cells are empty (zero).
def get_user_item_sparse_matrix(df):
    sparse_data = sparse.csr_matrix((df.rating, (df.customer_id, df.movie_id)))
    return sparse_data
User-item Train Sparse matrix
train_sparse_data = get_user_item_sparse_matrix(train_data)
User-item test sparse matrix
test_sparse_data = get_user_item_sparse_matrix(test_data)
Global Average Rating
global_average_rating = train_sparse_data.sum()/train_sparse_data.count_nonzero()
print("Global Average Rating: {}".format(global_average_rating))
Check the Cold Start Problem
Calculate the average rating
def get_average_rating(sparse_matrix, is_user):
    ax = 1 if is_user else 0  # axis 1 -> per user (row), axis 0 -> per movie (column)
    sum_of_ratings = sparse_matrix.sum(axis=ax).A1
    no_of_ratings = (sparse_matrix != 0).sum(axis=ax).A1
    rows, cols = sparse_matrix.shape
    average_ratings = {i: sum_of_ratings[i] / no_of_ratings[i]
                       for i in range(rows if is_user else cols)
                       if no_of_ratings[i] != 0}
    return average_ratings
Average Rating User
average_rating_user = get_average_rating(train_sparse_data, True)
Average Rating Movie
avg_rating_movie = get_average_rating(train_sparse_data, False)
Check Cold Start Problem: User
total_users = len(np.unique(netflix_rating_df["customer_id"]))
train_users = len(average_rating_user)
uncommonUsers = total_users - train_users
print("Total no. of Users = {}".format(total_users))
print("No. of Users in train data= {}".format(train_users))
print("No. of Users not present in train data = {}({}%)".format(uncommonUsers, np.round((uncommonUsers/total_users)*100), 2))
Here, 1% of total users are new, and they will have no proper rating available. Therefore, this can bring the issue of the cold start problem.
Check Cold Start Problem: Movie
total_movies = len(np.unique(netflix_rating_df["movie_id"]))
train_movies = len(avg_rating_movie)
uncommonMovies = total_movies - train_movies
print("Total no. of Movies = {}".format(total_movies))
print("No. of Movies in train data= {}".format(train_movies))
print("No. of Movies not present in train data = {}({}%)".format(uncommonMovies, np.round((uncommonMovies/total_movies)*100), 2))
Here, 20% of total movies are new, and their rating might not be available in the dataset. Consequently, this can bring the issue of the cold start problem.
Similarity Matrix
A similarity matrix is essential for measuring the similarity between user profiles and between movies in order to generate recommendations. Fundamentally, this kind of matrix holds the pairwise similarity between data points.
In the matrix shown in figure 17, video2 and video5 are very similar. Computing the similarity matrix is a very expensive job because it requires a lot of computational power.
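As a concrete illustration (the data below is made up), the pairwise cosine similarity between videos can be computed with scikit-learn:

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Toy rating matrix: rows are users, columns are videos; 0 means "not rated".
ratings = np.array([[5, 0, 3, 0, 5],
                    [4, 0, 4, 1, 4],
                    [0, 2, 0, 5, 1]])

# Compare videos pairwise: columns become rows via the transpose.
video_similarity = cosine_similarity(ratings.T)
print(np.round(video_similarity, 2))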
Compute User Similarity Matrix
Compute the user similarities for the first 100 users:
def compute_user_similarity(sparse_matrix, limit=100):
    row_index, col_index = sparse_matrix.nonzero()
    rows = np.unique(row_index)
    # Pre-allocated for this sample: 617 users x 100 similarity scores.
    similar_arr = np.zeros(61700).reshape(617, 100)
    for row in rows[:limit]:
        sim = cosine_similarity(sparse_matrix.getrow(row), train_sparse_data).ravel()
        similar_indices = sim.argsort()[-limit:]
        similar = sim[similar_indices]
        similar_arr[row] = similar
    return similar_arr

similar_user_matrix = compute_user_similarity(train_sparse_data, 100)
Compute Movie Similarity Matrix
Load movies title data set
movie_titles_df = pd.read_csv("movie_titles.csv", sep = ",", header = None,
                              names = ['movie_id', 'year_of_release', 'movie_title'],
                              index_col = "movie_id", encoding = "iso8859_2")
movie_titles_df.head()
Compute similar movies:
def compute_movie_similarity_count(sparse_matrix, movie_titles_df, movie_id):
    similarity = cosine_similarity(sparse_matrix.T, dense_output = False)
    # Return the movie's title together with the number of movies that have a non-zero similarity to it.
    no_of_similar_movies = movie_titles_df.loc[movie_id][1], similarity[movie_id].count_nonzero()
    return no_of_similar_movies
Get a similar movies list:
similar_movies = compute_movie_similarity_count(train_sparse_data, movie_titles_df, 1775)
print("Similar Movies = {}".format(similar_movies))
Building the Machine Learning Model
Create a Sample Sparse Matrix
def get_sample_sparse_matrix(sparseMatrix, n_users, n_movies):
    users, movies, ratings = sparse.find(sparseMatrix)
    uniq_users = np.unique(users)
    uniq_movies = np.unique(movies)
    np.random.seed(15)
    userS = np.random.choice(uniq_users, n_users, replace = False)
    movieS = np.random.choice(uniq_movies, n_movies, replace = False)
    mask = np.logical_and(np.isin(users, userS), np.isin(movies, movieS))
    sparse_sample = sparse.csr_matrix((ratings[mask], (users[mask], movies[mask])),
                                      shape = (max(userS)+1, max(movieS)+1))
    return sparse_sample
Sample Sparse Matrix for the training data:
train_sample_sparse_matrix = get_sample_sparse_matrix(train_sparse_data, 400, 40)
Sample Sparse Matrix for the test data:
test_sparse_matrix_matrix = get_sample_sparse_matrix(test_sparse_data, 200, 20)
Featuring the Data
Feature engineering ("featuring") is the process of creating new features from different aspects of the existing variables. Here, features based on the five most similar users and the five most similar movies will be created. These new features help capture the similarities between different movies and users. The following new features will be added to the dataset:
def create_new_similar_features(sample_sparse_matrix):
    global_avg_users = get_average_rating(sample_sparse_matrix, True)
    global_avg_movies = get_average_rating(sample_sparse_matrix, False)
    sample_train_users, sample_train_movies, sample_train_ratings = sparse.find(sample_sparse_matrix)
    new_features_csv_file = open("/content/netflix_dataset/new_features.csv", mode = "w")
    for user, movie, rating in zip(sample_train_users, sample_train_movies, sample_train_ratings):
        similar_arr = list()
        similar_arr.append(user)
        similar_arr.append(movie)
        similar_arr.append(sample_sparse_matrix.sum()/sample_sparse_matrix.count_nonzero())
        similar_users = cosine_similarity(sample_sparse_matrix[user], sample_sparse_matrix).ravel()
        indices = np.argsort(-similar_users)[1:]
        ratings = sample_sparse_matrix[indices, movie].toarray().ravel()
        top_similar_user_ratings = list(ratings[ratings != 0][:5])
        # Pad with the movie's average rating so there are always five values.
        top_similar_user_ratings.extend([global_avg_movies[movie]] * (5 - len(top_similar_user_ratings)))
        similar_arr.extend(top_similar_user_ratings)
        similar_movies = cosine_similarity(sample_sparse_matrix[:,movie].T, sample_sparse_matrix.T).ravel()
        similar_movies_indices = np.argsort(-similar_movies)[1:]
        similar_movies_ratings = sample_sparse_matrix[user, similar_movies_indices].toarray().ravel()
        top_similar_movie_ratings = list(similar_movies_ratings[similar_movies_ratings != 0][:5])
        # Pad with the user's average rating so there are always five values.
        top_similar_movie_ratings.extend([global_avg_users[user]] * (5 - len(top_similar_movie_ratings)))
        similar_arr.extend(top_similar_movie_ratings)
        similar_arr.append(global_avg_users[user])
        similar_arr.append(global_avg_movies[movie])
        similar_arr.append(rating)
        new_features_csv_file.write(",".join(map(str, similar_arr)))
        new_features_csv_file.write("\n")
    new_features_csv_file.close()
    new_features_df = pd.read_csv('/content/netflix_dataset/new_features.csv', names = ["user_id", "movie_id", "global_average", "similar_user_rating1",
                                  "similar_user_rating2", "similar_user_rating3",
                                  "similar_user_rating4", "similar_user_rating5",
                                  "similar_movie_rating1", "similar_movie_rating2",
                                  "similar_movie_rating3", "similar_movie_rating4",
                                  "similar_movie_rating5", "user_average",
                                  "movie_average", "rating"])
    return new_features_df
Featuring (adding new similar features) for the training data:
train_new_similar_features = create_new_similar_features(train_sample_sparse_matrix)
train_new_similar_features.head()
Featuring (adding new similar features) for the test data:
test_new_similar_features = create_new_similar_features(test_sparse_matrix_matrix)
test_new_similar_features.head()
Training and Prediction of the Model
Divide the train and test data from the similar_features dataset:
x_train = train_new_similar_features.drop(["user_id", "movie_id", "rating"], axis = 1)
x_test = test_new_similar_features.drop(["user_id", "movie_id", "rating"], axis = 1)
y_train = train_new_similar_features["rating"]
y_test = test_new_similar_features["rating"]
Utility method to check accuracy:
def error_metrics(y_true, y_pred):
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))
    return rmse
Fit to XGBRegressor algorithm with 100 estimators:
clf = xgb.XGBRegressor(n_estimators = 100, silent = False, n_jobs = 10)
clf.fit(x_train, y_train)
Predict the result of the test data set:
y_pred_test = clf.predict(x_test)
Check accuracy of predicted data:
rmse_test = error_metrics(y_test, y_pred_test)
print("RMSE = {}".format(rmse_test))
As shown in figure 24, the RMSE (root mean squared error) of the model's predictions on the test dataset is 0.99.
Plot Feature Importance
Feature importance is a technique that assigns a score to each input feature based on how useful it is for predicting the target variable.
def plot_importance(model, clf):
    fig = plt.figure(figsize = (8, 6))
    ax = fig.add_axes([0,0,1,1])
    model.plot_importance(clf, ax = ax, height = 0.3)
    plt.xlabel("F Score", fontsize = 20)
    plt.ylabel("Features", fontsize = 20)
    plt.title("Feature Importance", fontsize = 20)
    plt.tick_params(labelsize = 15)
    plt.show()

plot_importance(xgb, clf)
The plot shown in figure 25 displays the importance of each feature. Here, the user_average rating is the most critical feature; its score is higher than that of the other features. The other features, such as the similar user ratings and similar movie ratings, were created to capture the similarity between different users and movies.
Conclusion
Over the years, machine learning has solved several challenges for companies like Netflix, Amazon, Google, and Facebook. The Netflix recommender system helps users filter through a massive list of movies and shows based on their preferences. A recommender system must interact with users and learn their preferences in order to provide recommendations.
Collaborative filtering (CF) is a very popular recommendation system algorithm that makes predictions and recommendations based on the ratings of other, collaborating users. User-based collaborative filtering was the first automated collaborative filtering mechanism; it is also called k-NN collaborative filtering. The problem of collaborative filtering is to predict how well a user will like an item that they have not rated, given a set of existing choice judgments for a population of users [4].
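As a minimal illustration of that last point (the helper names below are hypothetical, not from the tutorial code), a user-based k-NN prediction can be computed as a similarity-weighted average of the neighbors' ratings:

import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_rating(user_vec, neighbor_vecs, neighbor_ratings):
    # Weight each neighbor's rating by its similarity to the target user.
    sims = np.array([cosine(user_vec, v) for v in neighbor_vecs])
    return float(np.dot(sims, neighbor_ratings) / sims.sum())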
DISCLAIMER: The views expressed in this article are those of the author(s) and do not represent the views of Carnegie Mellon University, nor other companies (directly or indirectly) associated with the author(s). These writings do not intend to be final products, yet rather a reflection of current thinking, along with being a catalyst for discussion and improvement.
Published via Towards AI
Resources:
Google Colab implementation.
References:
Mastering MVVM With Swift
Time to Create a View Model
In this episode, we create a view model for the day view controller. Fire up Xcode and open the starter project of this episode. We start by creating a new group, View Models, in the Weather View Controllers group. I prefer to keep the view models close to the view controllers in which they are used.
Create a new Swift file in the View Models group and name it DayViewModel.swift.
DayViewModel is a struct, a value type. Remember that the view model should keep a reference to the model, which means we need to create a property for it. That is all we need to do to create our first view model.
DayViewModel.swift
import Foundation

struct DayViewModel {

    // MARK: - Properties

    let weatherData: WeatherData

}
Creating the Public Interface
The next step is moving the code located in the
updateWeatherDataContainerView(with:) method of the
DayViewController class to the view model. What we need to focus on are the values we use to populate the user interface.
Date Label
Let's start with the date label. The date label expects a formatted date and it needs to be of type
String. It is the responsibility of the view model to ask the model for the value of its
time property and transform that value to the format the date label expects.
Let's start by creating a computed property in the view model. We name it
date and it should be of type
String.
DayViewModel.swift
var date: String { }
We initialize a
DateFormatter instance to convert the date to a formatted string and set the date formatter's
dateFormat property. We invoke the date formatter's
string(from:) method and return the result. That is it for the date label.
DayViewModel.swift
var date: String {
    // Initialize Date Formatter
    let dateFormatter = DateFormatter()

    // Configure Date Formatter
    dateFormatter.dateFormat = "EEE, MMMM d"

    return dateFormatter.string(from: weatherData.time)
}
Time Label
We can repeat this for the time label. We create a
time computed property of type
String. The implementation is similar. We create a
DateFormatter instance, set its
dateFormat property, and return a formatted string.
DayViewModel.swift
var time: String {
    // Initialize Date Formatter
    let dateFormatter = DateFormatter()

    // Configure Date Formatter
    dateFormatter.dateFormat = ""

    return dateFormatter.string(from: weatherData.time)
}
There is one complication, though. The format of the time depends on the user's preferences. That is easy to solve, though. Navigate to TimeNotation.swift in the Types group. We add a computed property,
dateFormat, to the
TimeNotation enum. The
dateFormat computed property returns the correct date format based on the user's preferences.
TimeNotation.swift
enum TimeNotation: Int {

    // MARK: - Cases

    case twelveHour
    case twentyFourHour

    // MARK: - Properties

    var dateFormat: String {
        switch self {
        case .twelveHour:
            return "hh:mm a"
        case .twentyFourHour:
            return "HH:mm"
        }
    }

}
We can now update the implementation of the
time computed property in DayViewModel.swift.
DayViewModel.swift
var time: String {
    // Initialize Date Formatter
    let dateFormatter = DateFormatter()

    // Configure Date Formatter
    dateFormatter.dateFormat = UserDefaults.timeNotation.dateFormat

    return dateFormatter.string(from: weatherData.time)
}
Let me explain what is happening.
timeNotation is a class computed property of the
UserDefaults class. You can find its implementation in UserDefaults.swift in the Extensions group. It returns a
TimeNotation object.
UserDefaults.swift
// MARK: - Time Notation

class var timeNotation: TimeNotation {
    get {
        let storedValue = UserDefaults.standard.integer(forKey: Keys.timeNotation)
        return TimeNotation(rawValue: storedValue) ?? TimeNotation.twelveHour
    }
    set {
        UserDefaults.standard.set(newValue.rawValue, forKey: Keys.timeNotation)
    }
}
We load the user's preference from the user defaults database and use the value to create a
TimeNotation object. We use the same technique for the user's other preferences.
Description Label
Populating the description label is easy. We define a computed property in the view model,
summary, of type
String and return the value of the
summary property of the model.
DayViewModel.swift
var summary: String { weatherData.summary }
Temperature Label
The value for the temperature label is a bit more complicated because we need to take the user's preferences into account. We start simple. We create another computed property in which we store the temperature in a constant,
temperature.
DayViewModel.swift
var temperature: String {
    let temperature = weatherData.temperature
}
We fetch the user's preference and format the value stored in the
temperature constant based on the user's preference. We need to convert the temperature if the user's preference is set to degrees Celsius.
DayViewModel.swift
var temperature: String {
    let temperature = weatherData.temperature

    switch UserDefaults.temperatureNotation {
    case .fahrenheit:
        return String(format: "%.1f °F", temperature)
    case .celsius:
        return String(format: "%.1f °C", temperature.toCelcius)
    }
}
The implementation of the
temperatureNotation class computed property is very similar to the
timeNotation class computed property we looked at earlier.
UserDefaults.swift
// MARK: - Temperature Notation

class var temperatureNotation: TemperatureNotation {
    get {
        let storedValue = UserDefaults.standard.integer(forKey: Keys.temperatureNotation)
        return TemperatureNotation(rawValue: storedValue) ?? TemperatureNotation.fahrenheit
    }
    set {
        UserDefaults.standard.set(newValue.rawValue, forKey: Keys.temperatureNotation)
    }
}
Wind Speed Label
Populating the wind speed label is very similar. Because the wind speed label expects a string, we create a
windSpeed computed property of type
String. We ask the model for the value of its
windSpeed property and format that value based on the user's preference.
DayViewModel.swift
var windSpeed: String {
    let windSpeed = weatherData.windSpeed

    switch UserDefaults.unitsNotation {
    case .imperial:
        return String(format: "%.f MPH", windSpeed)
    case .metric:
        return String(format: "%.f KPH", windSpeed.toKPH)
    }
}
The implementation of the
unitsNotation class computed property is very similar to the
timeNotation and
temperatureNotation class computed properties we looked at earlier.
UserDefaults.swift
// MARK: - Units Notation

class var unitsNotation: UnitsNotation {
    get {
        let storedValue = UserDefaults.standard.integer(forKey: Keys.unitsNotation)
        return UnitsNotation(rawValue: storedValue) ?? UnitsNotation.imperial
    }
    set {
        UserDefaults.standard.set(newValue.rawValue, forKey: Keys.unitsNotation)
    }
}
Icon Image View
For the icon image view, we need an image. We could put this logic in the view model. However, because we need the same logic later, in the view model of the week view controller, it is better to create an extension for
UIImage in which we put that logic.
Create a new file in the Extensions group and name it UIImage.swift. Create an extension for the
UIImage class and define a class method
imageForIcon(with:).
UIImage.swift
import UIKit

extension UIImage {

    class func imageForIcon(with name: String) -> UIImage? {

    }

}
We simplify the current implementation of the weather view controller. We use the value of the
name argument to instantiate the
UIImage instance in most cases of the
switch statement. I really like how flexible the
switch statement is in Swift. Notice that we also return a
UIImage instance in the
default case of the
switch statement.
UIImage.swift
import UIKit

extension UIImage {

    class func imageForIcon(with name: String) -> UIImage? {
        switch name {
        case "clear-day", "clear-night", "rain", "snow", "sleet":
            return UIImage(named: name)
        case "wind", "cloudy", "partly-cloudy-day", "partly-cloudy-night":
            return UIImage(named: "cloudy")
        default:
            return UIImage(named: "clear-day")
        }
    }

}
With this method in place, it is easy to populate the icon image view. We create a computed property of type
UIImage? in the view model and name it
image. In the body of the computed property, we invoke the class method we just created, passing in the value of the model's
icon property.
DayViewModel.swift
var image: UIImage? { UIImage.imageForIcon(with: weatherData.icon) }
Because
UIImage is defined in the UIKit framework, we need to replace the import statement for Foundation with an import statement for UIKit.
DayViewModel.swift
import UIKit

struct DayViewModel {

    ...

}
This is a code smell. Whenever you import UIKit in a view model, a warning bell should go off. The view model shouldn't need to know anything about views or the user interface. In this example, however, we have no other option. Since we want to return a
UIImage instance, we need to import UIKit. If you don't like this, you can also return the name of the image and have the view controller be in charge of creating the
UIImage instance. That is up to you.
I want to make two small improvements. The
DateFormatter instances shouldn't be created in the computed properties. Every time the
date and
time computed properties are accessed, a
DateFormatter instance is created. We can make the implementation of the
DayViewModel struct more efficient by creating a property with name
dateFormatter. We create and assign a
DateFormatter instance to the
dateFormatter property.
DayViewModel.swift
import UIKit

struct DayViewModel {

    // MARK: - Properties

    let weatherData: WeatherData

    // MARK: -

    private let dateFormatter = DateFormatter()

    // MARK: - Public API

    var date: String {
        // Configure Date Formatter
        dateFormatter.dateFormat = "EEE, MMMM d"

        return dateFormatter.string(from: weatherData.time)
    }

    var time: String {
        // Configure Date Formatter
        dateFormatter.dateFormat = UserDefaults.timeNotation.dateFormat

        return dateFormatter.string(from: weatherData.time)
    }

    ...

}
In the
date and
time computed properties, the
DateFormatter instance is configured by setting its
dateFormat property. This implementation is more efficient. It is a small improvement but nonetheless an improvement.
What's Next?
You have created your very first view model. In the next episode, we put it to use in the day view controller.
From: Ed Brey (brey_at_[hidden])
Date: 2001-04-24 09:18:03
From: "Paul A. Bristow" <pbristow_at_[hidden]>
> // math_constants.hpp <<< math constants header file - the interface.
> namespace boost
> {
> namespace math_constants
> {
> extern const long double pi;
> } // namespace math_constants
> } // namespace boost
Having only a long double defined makes for more typing by users who are
using less precision. For example, someone working with floating point
would have to write "a = pi * r * r" as "a = float(pi) * r * r". The
solution should make it easy to get the precision desired. One approach
is "float pi = float(math_constants::pi);", which is fine by itself, but
doesn't scale well when working with many constants (see the later point
on multiple constants).
How is constant folding accomplished, given that the definition appears
to be out-of-line?
How would generic algorithms that do not know the desired type at coding
time be written?
> // math_constants.h <<< the definition file
> // Contains macro definitions BOOST_PI, BOOST_E ... as long doubles
> #define BOOST_PI 3.14159265358979323846264338327950288L /* pi */
What is the purpose of the macro? How is it invisioned to be used?
> cout << "pi is " << boost::math_constants::pi << endl;
> using boost::math_constants::pi; // Needed for all constants used.
> // recommended as useful documentation! whereas:
> // using namespace boost::math_constants; // exposes ALL names in
> math_constants.
> // that could cause some name collisions!
Pulling all math constants into the global namespace is indeed asking
for trouble in general, although it would be nice to allow it as it can
be practical within a function. However, it is also less than
desirable to have to perform a using directive for every constant in
use. It's a lot of code that isn't directly related to getting the job
done, and it creates a maintenance problem because there is no easy way to
garbage collect as constants go out of use.
Fortunately, namespace renaming solves this problem well (although it
doesn't solve the problem with having the constants be the right types
described above).
Warehouse Apps: 421 apps found (category: Warehouse, price: Paid)
Import Reordering Rules From CSV Module, Import Reordering Rules From Excel App, import Reordering Rules From XLS, Reordering Rules From XLSX Odoo
Import Reordering Rules from CSV/Excel file
Picking Order By Email Module, Email Picking Order Detail,Warehouse product Details In Email, Product Detail Email,Picking Order Information Email, Product Stock mail Odoo
Picking Order Details
Picking Order By Email Module, Email Picking Order Detail,Warehouse product Details In Email, Product Detail Email,Picking Order Information Email, Product Stock mail Odoo
Picking Send By Email
scan barcode product app odoo, scan product internal ref no, scan barcode inventory module, scan barcode stock, barcode product reference no, scan stock adjustment barcode, scan inventory adjustment
Stock Adjustment Barcode Scanner
Create Stock HTML Note, Make Delivery Order HTML Notes Module,Generate Warehouse HTML Note With Image, Create Picking Operation HTML Note With Attachment,Print Stock HTML Note In Report,Make HTML NotesWith Terms & Condition Odoo
Stock HTML Notes
Inventory Whatsapp Integration,Stock Whatsapp Integration,Inventory Send Customer Whatsapp Module, Customer Whatsapp Stock Send,Client Whatsapp Send Incoming Order, Delivery Order Send Whatsapp, Send Internal Transfer Client Whatsup app
Inventory Whatsapp Integrations
Make Checklist App, Warehouse Checklist, Stock List Of Items Required, Reminder Checklist For Stock, Remember Of Important Things, Stock Checklist,Inventory Checklist, Stock Own Checklist Odoo
Stock Custom Checklist
odoo app will add Delivery product line list view with filter and group by,delivery order line menu,delivery product line list view with filter,delivery product line list view,Delivery Order Lines by Picking,delivery list, delivery filters, delivery moves lines filter
Delivery Product Line List View, Filter & group by
odoo app will allow Product Move to Scrap Location,Product Move to Scrap, move product to scrap locatation, product scrap move line, scrap product, scrap locatation,move product scrap,manage scrap products
Product Move to Scrap
This module allow employees/users to create Purchase Requisitions.
Product/Material Purchase Requisitions by Employees/Users
This module allow employees/users to create Internal Requisitions.
Product/Material Internal Requisitions by Employees/Users
import internal transfer app, internal transfer from csv, internal transfer from excel, internal transfer from xls, internal transfer xlsx module, internal transfer odoo
Import Internal Transfer from CSV/Excel file
import stock serial no from csv, inventory lot no from excel, stock serial number from xls, inventory lot number from xlsx app, import stock lot module, import warehouse odoo
Import Stock Inventory With Lot/Serial Number from CSV/Excel file
Make Checklist App, Stock List Of Items Required, Reminder Checklist For stock, Remember Of Important Things, Stock Checklist Odoo
Stock Checklist
Sale Order Invoice Status Module, Track Delivery Order Invoice Status, Invoice Partial Payment Status, Invoice Full Payment Status App, Print Invoice Status, DO Status,Invoice Payment Status Odoo
Delivery Order Invoice Status
Set Manual Weight Module, Manage Manual Product Weight, Add Manually Shipping Weight, Add Manually Manufacturing Weight, Set Custom Product Weight, Add Product Price By Self Odoo
Manually Weight
Stock Images, Multiple Stock Image, Warehouse Multiple Images, Inventory Multiple Images, Incoming Order Multiple Images, Picking Operatio Multiple Images, Warehouse Multi Images Odoo | https://apps.odoo.com/apps/modules/category/Warehouse/browse?price=Paid&%3Brepo_maintainer_id=115472&order=Newest | CC-MAIN-2020-45 | en | refinedweb |
Hello,
What is the formula to get the hourly time series with API in Python ?
Thanks a lot.
Hi,
You just have to specify the interval as below:
req = ek.get_timeseries(["MSFT.O"], interval="hour")
Possible values: 'tick', 'minute', 'hour', 'daily', 'weekly', 'monthly', 'quarterly', 'yearly' (Default 'daily')
Thanks for your answer.
It works very well. Now can you please tell me how to set the time zone? It seems that everything is using the GMT timezone...
Thanks Gurpeet,
However I dont see how to adapt my script to get the output (df) in my local time zone:
year=2019
print(year)
first_date = datetime(year,1,1)
last_date = datetime.today() + timedelta(days =1)
df =pd.DataFrame(ek.get_timeseries(ric, start_date=first_date,end_date = last_date,interval="hour"))
Thanks in advance.
Hi
last_date is incorrect because datetime.today() + timedelta(days =1) is in the future
I suppose you want to get timeseries from 1st of January, 2019. If I'm right, you don't have to set end_date (because default end_date is today())
This should help you:
import eikon as ek
from datetime import datetime
from dateutil.tz import tzlocal

year = 2019
print(year)
first_date = datetime(year, 1, 1, tzinfo=tzlocal())
df = ek.get_timeseries("AAPL.O", start_date=first_date, interval="hour")
print(df)

                         HIGH       LOW        OPEN     CLOSE   COUNT    VOLUME
Date
2019-09-09 09:00:00   53.4500   53.2725   53.425000   53.4025      40      9948
2019-09-09 10:00:00   53.5175   53.4050   53.417500   53.4200      33      3328
2019-09-09 11:00:00   53.5750   53.4375   53.500000   53.5325      39     12700
2019-09-09 12:00:00   53.6400   53.5000   53.502500   53.5850     336    171268
2019-09-09 13:00:00   53.6900   53.5000   53.549925   53.6500     547    314024
...                       ...       ...         ...       ...     ...       ...
2020-09-04 20:00:00  127.9900  116.6476  121.480000  120.9000  392570  42388645
2020-09-04 21:00:00  453.2631  117.8411  120.905000  120.1000   17844  16822104
2020-09-04 22:00:00  129.1490  119.6200  120.100000  120.1500    8557   1934439
2020-09-04 23:00:00  120.2500  119.7700  120.150000  119.8200    2101    165211
2020-09-05 00:00:00  120.0600  119.3500  119.830000  120.0000    2630    209387

[4026 rows x 6 columns]
(ek.get_timeseries() already returns a DataFrame)
Depending on the interval, the size of the timeseries is limited. In this example, the hourly timeseries starts from 2019-09-09.
And you can see that the current date range is too large, because the timeseries stops on 2020-09-05.
You'll have to reduce the date range or update the interval to get a result that corresponds to [start_date, end_date].
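If you also need the returned index itself in your local time zone, one option (a sketch that assumes the index comes back as naive GMT timestamps, as observed in the question) is to convert it with pandas:

from dateutil.tz import tzlocal

# Assumption: the DatetimeIndex returned by ek.get_timeseries() is naive GMT/UTC.
df.index = df.index.tz_localize("GMT").tz_convert(tzlocal())
print(df.head())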
ASP.NET MVC Model: Make a BaseViewModel for your Layouts
If you aren't using ViewModels, start now by creating base classes and realize the flexibility of every view, including layout pages.
One day while I was working with MVC, I had an issue with layouts. I kept getting model issues on my layout pages.
The problem was I had a separate ViewModel for each page and I was using a different ViewModel for my layout page.
So how do you pass a model that is required by your layout pages to every page?
Time to Refactor
This tip is extremely quick, easy, and I guarantee you will remember it when creating a hierarchy of layout pages for a new site. You'll be able to nest any amount of layout pages to any depth and keep your sanity with your ViewModels.
I will use my example from ASP.NET MVC Controllers: Push the envelope and build off of that.
First, create a new class and call it BaseViewModel.
Models/BaseViewModel.cs
using System;

namespace ThinController.Models
{
    public class BaseViewModel
    {
        public String PageTitle { get; set; }
        public String MetaKeywords { get; set; }
        public String MetaDescription { get; set; }
    }
}
Of course, this will contain all of your common page data. As you can see, we want each page to contain a Title, Keywords, and a Description to assist Google, or Bing, or whoever is your favorite crawler.
You could add a list of menu items, ad objects, or other classes as well if you have additional properties that are common on every single page.
Next, place the BaseViewModel as your model in your top Layout view.
Views/Shared/_Layout.cshtml (Snippet)
@model ThinController.Models.BaseViewModel
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>@Model.PageTitle</title>
    @Styles.Render("~/Content/css")
    @Scripts.Render("~/bundles/modernizr")
</head>
.
.
The key takeaway here is the top line with the model and the <title> tag.
Now, we have our Faq Page that has a list of FAQ items. Since we already have our FaqViewModel from before, all you need to do is inherit from the BaseViewModel.
Models/FaqViewModel.cs
using System.Collections.Generic;

namespace ThinController.Models
{
    public class FaqViewModel : BaseViewModel
    {
        public IEnumerable<Faq> FaqList { get; set; }
    }
}
Lastly, place the FaqViewModel as your model on the Faq Index.cshtml page (which should already be there based on the example).
You shouldn't have to do anything else. Your code should compile and work just as before, but now, you have a common ViewModel for all of your pages.
What exactly is happening?
Since your layout has a ViewModel (using the @model) when rendering, it looks at that first and uses the Razor engine to fill the template as much as possible before rendering the children.
However, the ViewModel that you are passing to the view is a FaqViewModel. The fact is that since the FaqViewModel inherits from the BaseViewModel, all of the properties are available to the Layout view, but the additional properties will be available to the Index.cshtml page when it renders as well.
Pretty slick!
This means you can create pages of master layouts similar to WebForms and not lose any ViewModel goodness. :-)
I hope this helps in your master page layouts on your website.
Does this make sense? Post your comments and questions below. | https://www.danylkoweb.com/Blog/aspnet-mvc-model-make-a-baseviewmodel-for-your-layouts-PX | CC-MAIN-2020-45 | en | refinedweb |
Frameworkless JavaScript

By Paweł Zagrobelny
This article was peer reviewed by Stephan Max and Nilson Jacques. Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be!
JavaScript frameworks offer many functionality and it’s not surprising at all that they’re getting more and more popular. They’re powerful and not so hard to master. Generally, we use them for large and complex applications, but in some cases also for smaller ones. After having learned how to use a framework, you’re tempted to use it for every app you want to develop, but you forget that sometimes using just old good JavaScript might be sufficient.
In this article I’ll discuss about the pros and cons of using a framework and what you should consider before starting your project.
Frameworks Are Powerful
Frameworks have their advantages. First of all, you don't have to bother about namespaces, cross-browser compatibility, writing several utility functions, and so on. You work on well-organized code, made by some of the best developers in the industry. If you know the framework well, your development speed can be incredibly fast. Moreover, if you have problems with any of the features, it's easy to find the documentation of the framework, tons of free tutorials, and a big community happy to help. What if you need more manpower? There's no hassle with hiring. If you know how a given framework works, no matter what the project is about, you'll feel at home. And the code of the framework itself evolves every day, to be even better, stronger, and more secure. So, you can just focus on what matters: your work.
In conclusion, frameworks are very powerful and offer a lot of features such as templating, routing, and controllers. But the question is: do you really need them for you project?
Often frameworks are a good choice, but this isn't true for every situation. A framework has a lot of useful functions which in turn increase its weight. Unfortunately, in some cases this weight isn't justified because smaller projects use only a tiny part of the framework. In such situations, raw JavaScript (sometimes referred to as Vanilla JavaScript) can be the solution to all your problems.
By using raw JavaScript your code will be lighter, and easier for you to develop and expand. You also don’t have to spend your time learning one or more frameworks to use. Every framework works in a different manner, so even if you already know what feature to create (maybe because you did it already in the past) you’ll implement it differently based on the framework you’ve chosen to employ. It’s true that the more familiar you are with JavaScript frameworks the faster you learn a new one, but you always have to spend some time deepening the topic (more or less depending on your skills). Moreover, there is always a possibility that the framework you’ve chosen won’t gain popularity and be abandoned. On the contrary, with your own code there is no such possibility and you don’t have to bother about updates and breaking changes of newer versions.
Frameworks sometimes are an overkill and they overcomplicate the structure of small projects. If you need only a few of their features you can develop them by your own.
For instance, one of the most popular features of modern JavaScript frameworks is two-way binding. If you need it, you can write the code that implements it by yourself. Here’s an example of two-way binding in only 100 lines of JavaScript. One hundred lines, no intricacy, effect similar to frameworks’ solutions and, above all, lack of unneeded functionality. To implement this feature, there’s also a more modern approach. Have you ever heard about Object.observe()? Here’s an example of two-way binding feature using this solution. It may seem too futuristic because not every browser supports it, but it’s still interesting to take a look at it. If you’d like to see another solution, you could also check bind.js. Similar functionality, but without
Object.observe().
Cons of Not Using Frameworks
Ignoring JavaScript frameworks could be a good choice sometimes, but you have to remember about the cons of this approach.
Firstly, without a framework you don’t have a solid basic structure. You have to do a lot of work before you can really start developing the features of your product. The more features you want to add, the more time you need. If you’re developing for a client, it could be a really important issue because deadlines are rarely friendly.
Secondly, code quality matters. Obviously, this factor depends on the developers’ skills. If you’re very experienced, the quality will be good. But not everybody has really mastered JavaScript and you don’t want the source of the project to be messy. Poorly written frameworks are not going to live too long while well written ones maintain a high quality for both personal and commercial projects.
Since we’re talking about code, we can’t forget bugs. Every serious framework is made by more than one or two people. With the contribution of thousands of people it’s very hard for bugs to be unnoticed. If you decide to avoid using a framework, your app will be checked by you and your team only. If you go on with in-depth tests, it’d take you even more time that you might not have!
The same point is valid for security. It could be a lot worse than in frameworks for the same reasons I mentioned before. If there are several people who work on the same project, there are more chances that a security issue is noticed. We could say that it isn’t hard to develop an application and that the hard part is to make it good and secure. If you don’t fell like an expert or you’re worried about the security, frameworks or libraries will help you a lot.
There’s also the cross-browser compatibility issue. With a good framework you can forget about this point. If you work with raw JavaScript, you have to handle it by yourself or just ignore it (which isn’t a recommended approach).
I also want to mention a problem with hiring developers. It could be a real problem, especially if you want to do this in a later stage of development. Unless they have good experience, you have to explain the project source to them in detail before they can start working on it and, again, that costs time. Even if you teach them all they need to know, often there is no technical documentation for the code of a project. If your new employee has a problem, it's your responsibility to help. Of course, you can write the documentation yourself, but it costs time and effort.
To Use or Not to Use, Frameworks? This is the Question.
Based on the points discussed so far, when should you use a framework? You have to take into account several aspects.
Let’s start with what’s probably the most important one: the time. If your clients give you tight deadlines, not using frameworks is not an option. That’s a situation where you have to start developing quickly and with the confidence that you have a solid base. If you’re experienced, frameworks with their ready solutions are perfect for the job.
Another interesting case are large applications. If you’re building something really big, making use of a good framework is the best choice you can make. They have all the features you might need and they provide secure and performant code out of the box. Writing everything yourself would be like re-inventing the wheel for most features and it’s also time consuming.
If you build complex apps without a framework, you'll probably meet all the cons of frameworkless JavaScript. One of them is possible bugs. If your app has to be reliable and you aren't an expert, frameworks are a good choice. Even if you're an expert, doing in-depth tests of a complex application could take you a lot of time. If you have it, and your client doesn't mind, go ahead with your own code. But usually there's no such comfort.
In some cases official documentation is quite poor, but if a given framework is popular enough, you'll easily find the answers you need. For beginners, developing with a framework seems simpler because they don't have to deal with a structure to develop by themselves and they can simply "follow the rules" of the framework.
Finally, if you’re not alone and you have a big team, which constantly changes, frameworks are like a godsend. If your app is written with AngularJS, for instance, and you hire a developer who knows it, he/she will offer a great support to your project. If you work with my-company-framework.js, things can be a lot harder.
If you don’t know JavaScript very well, writing code by yourself can only bring harm. Your code can be buggy, insecure and not efficient enough. But if you know what you’re doing, the code written for a specific app can work better. It can be simpler for you to extend and you’ll avoid to load tons of unused features. So if you have time and experience, it could be a good choice to not employ a framework.
This is even more true for big apps that have a lot of bespoke features. The fact that your application targets a great number of users doesn't mean that the source must be very complicated. If your app is big but simple, using unneeded features of massive frameworks can cost you a lot. Big apps are the place where you can hit the walls of the framework and have to start using inefficient workarounds. If your app is quite specific, a different approach should be preferred. Frameworks are quite flexible but can't predict all the scenarios. You are the only person who knows what's needed.
Sometimes the decision to use a framework or not is all about personal preferences. If your app isn’t very complicated, you could set your own workspace. It’s always better to create a specific workspace for every project, but it’s not always possible. You need to be highly skilled to do it.
Let’s Meet in the Middle of the Road
Now that I’ve discussed the pros and cons of frameworks, let’s talk about another possibility. Let’s say that you have a small project, you don’t want to use large frameworks, but you have a tight deadline. What do you do?
You don’t have to roll up your sleeves and work 12 hours per day to meet it. When you think about framework, you probably think to a big set of features, but it’s not always like that. There are many small and lightweight frameworks and libraries for less demanding apps. They could be the best choice sometimes.
There are a lot of minimalist JavaScript frameworks you could adopt. For example, you can give a chance to Sammy which is only 16kB and 5.2K compressed and gzipped. Sammy is built on a system of plugins and adapters and it only include the code you need. It’s also easy to extract your own code into reusable plugins. It’s an awesome resource for small projects.
As an alternative you could use the super tiny Min.js, a JavaScript library useful to execute simple DOM querying and hooking event listeners. Thanks to its jQuery-like style, it feels very intuitive and simple to use. Its goal is to return the raw DOM node, which then can be manipulated using
element.classList,
element.innerHTML, and other methods. The following is a small example of how to use it:
$('p:first-child a').on('click', function (event) { event.preventDefault(); // do something else });
Obviously, it has some limits. For instance, you can’t turn off events.
Do you need yet another alternative? In this case I can suggest you Riot.js (1 kB). Riot.js is a library that has a lot of innovative ideas, some of which taken from React. However, it tries to be very small and more condensed.
Let’s get Custom tags for example. You can have it with React if you use Polymer. It allows you to write human-readable code, which is then converted to JavaScript. In Riot.js you can have it without any external libraries.
Here’s an example from the official website which shows how the code looks before it’s converted:
<body> <h1>Acme community</h1> <forum-header/> <forum-content> <forum-threads/> <forum-sidebar/> </forum-content> <forum-footer/> <script>riot.mount('*', { api: forum_api })</script> </body>
This is only one of the features the framework is proud of. You can check the website to find out more about this project.
There’s also Microjs, that I simply adore. It’s “a micro-site for micro-frameworks” that provides you a set of minified and fast JavaScript frameworks and libraries. Each of them does one thing and does it well. You can choose as many of these frameworks as you need. There are tons of solutions for templating, Ajax, HTML5 features to choose from. Microjs helps you to get rid of the frameworks full of the unused features and comes with another advantages. The frameworks and libraries provided are really small and simple. It’s rare even to find files bigger than 3-4Kb!
Returning to the previously-mentioned example of two-way binding without big frameworks, what do you think we would need to do in order to use this feature in Microjs? We would have to visit its website and search for a solution ready to be integrated. And guess what? It’s there! One of these solutions is a micro-library called dual-emitter whose size is just 3.7kB.
Now, let’s say we want a simple templating system. Type “templating” in the search box and you’ll find a long list where you can choose whatever you want. You can also combine one micro-library with many others, creating a specific workspace for your specific project. You don’t have to prepare it by yourself and you don’t have to deal with unneeded features.
There are a lot of possibilities to choose from, some better than others. You have to carefully select them and choose the most proper one.
Finally, I want to mention another great project out there called TodoMVC. If you're confused and don't know what to employ in your project, that's the tool for you. The list of well-made JavaScript frameworks is growing every day and it's hard to check the features of each of them. TodoMVC does the job for you. It's a project which offers the same Todo application implemented using MV* concepts in most of the popular JavaScript MV* frameworks of today.
Conclusions
In conclusion, should you use frameworks or not? The decision is up to you. Before you start developing you have to consider what you really need, then measure all the pros and cons of each approach.
If you choose a framework, then search for the one which best suits your needs. If not, search for ready solutions hidden in the micro-frameworks or micro-libraries. If there’s nothing good for you and you want to develop it by yourself. There’s no ready recipe. You’re the one who knows your needs and skills. There’s just one advice: stay focused on your goals and you’ll find the right solution.
What about you? Have you ever tried one of these solutions? Which one? Feel free to share your comments in the section below.
What I was using was a class "Item", and then I made static variables for each Item. For example:
// Basic implementation of Item class
public class Item
{
    public string Name { get; protected set; }

    public Item(string name)
    {
        Name = name;
    }
}

// In "ItemList.cs"
public static class ItemList
{
    public static Item ScrapMetal
    {
        get { return new Item("Scrap Metal"); }
    }
}
You are probably asking why I returned a new Item, instead of just using one. When I came to writing up Weapons, which I treat as Items (inheritance), I found that due to references, everything in the game had exactly the same stats; everyone died when I only hit one guy, for example.
So, my question is: What is a good way to make a weapon/item/defence system? Of course, I'm not asking for the best way, as there will likely be many methods which are all useful in different situations. What I need is a system whereby I can basically say "use weapon X", and a new, non-referenced weapon is used. However, I also need to have, in effect, a list of all possible items; so as well as having the above way of accessing them, I can loop through them all.
I appreciate this post may be a bit confusing; I can't quite phrase this request as I can think it, so please reply with any and all questions that will probably arise after reading.
Hopefully,
Hnefatl | http://www.dreamincode.net/forums/topic/312430-item-system-in-a-game/ | CC-MAIN-2017-13 | en | refinedweb |
The best practice is to start the sign-in flow right after application start.
All you need to do is to call Connect function:
UM_GameServiceManager.instance.Connect();
The following code snippet shows how you can subscribe to the connect/disconnect actions:
UM_GameServiceManager.OnPlayerConnected += OnPlayerConnected;
UM_GameServiceManager.OnPlayerDisconnected += OnPlayerDisconnected;

private void OnPlayerConnected() {
    Debug.Log("Player Connected");
}

private void OnPlayerDisconnected() {
    Debug.Log("Player Disconnected");
}
You can always find out the current connection state with the ConnectionSate property.
UM_GameServiceManager.instance.ConnectionSate
You may also use the Disconnect method if you want to disconnect from the Game Service.
UM_GameServiceManager.instance.Disconnect();
Note: On the iOS platform this method does nothing, due to Game Center API guidelines.
The code snippet below shows how to check if the player is currently connected to the game service.
if(UM_GameServiceManager.instance.ConnectionSate == UM_ConnectionState.CONNECTED)
Available connection states:
public enum UM_ConnectionState { UNDEFINED, CONNECTING, CONNECTED, DISCONNECTED }
After the player has successfully connected, you may use the player properties to retrieve information about the current player. The code snippet below shows how to display the player's name, id, and avatar with Unity GUI:
if(UM_GameServiceManager.instance.player != null) {
    GUI.Label(new Rect(100, 10, Screen.width, 40), "ID: " + UM_GameServiceManager.instance.player.PlayerId);
    GUI.Label(new Rect(100, 20, Screen.width, 40), "Name: " + UM_GameServiceManager.instance.player.Name);

    if(UM_GameServiceManager.instance.player.Avatar != null) {
        GUI.DrawTexture(new Rect(10, 10, 75, 75), UM_GameServiceManager.instance.player.Avatar);
    }
}
The player property is represented as GameServicePlayerTemplate object.
public class GameServicePlayerTemplate {
    public string PlayerId { get; }
    public string Name { get; }
    public Texture2D Avatar { get; }
    public GameCenterPlayerTemplate GameCenterPlayer { get; }
    public GooglePlayerTemplate GooglePlayPlayer { get; }
}

public class GameCenterPlayerTemplate {
    public string playerId { get; }
    public string alias { get; }
    public string displayName { get; }
    public Texture2D avatar { get; }
}

public class GooglePlayerTemplate {
    public string playerId { get; }
    public string name { get; }
    public bool hasIconImage { get; }
    public bool hasHiResImage { get; }
    public string iconImageUrl { get; }
    public string hiResImageUrl { get; }
    public Texture2D icon { get; }
    public Texture2D image { get; }
}
The full API use example can be founded under the UM_GameServiceBasics example scene. | https://unionassets.com/ultimate-mobile/sing-in-251 | CC-MAIN-2017-13 | en | refinedweb |
ZS: If your application is really bad, then you're going to have a lot of events and saving them will take a lot of time. But typically, when you have an application that doesn't generate too many events, the overhead is pretty negligible – it's close to 1 or 2 per cent. But that depends on the number of events generated.
LXF: Would you say that Zend is edging into a space that's traditionally been dominated by Java application servers?
ZS: I think so, to some degree. And in some ways it's already happened. PHP can be found in a lot of business-critical applications and a lot of very large deployments – Wikipedia, YouTube and Flickr all come to mind, but there are tons of others. It's definitely a trend that's growing. We think it makes perfect sense and we do want to support it with Zend Server.
LXF: At the other end of the scale, as Zend Server takes PHP more towards the enterprise, would you say that PHP is perhaps losing touch with its original community?
ZS: I don't think it's losing touch, but I would say that PHP is between 12 and 13 years old, so it's not the cool new kid on the block. That said, I think the community that's been working on PHP is still developing it and is still very much in touch with the community that uses it. The PHP community is very healthy – it's strong and still growing rapidly.
The key strength of PHP is that it's a mature solution, it's been proven. There's a lot less knowledge about how to deploy websites using Ruby or Python – they're good solutions, and I have nothing bad to say about either of them, it's just that the communities are smaller. There's room in the web server industry for more than one player – I don't expect PHP to be used on 100% of websites at one time.
LXF: Would you say that the community's open source work is influencing what goes into the freeware version of Zend Server? For example, I believe PHP 6 is going to include an op-code cache as standard. So has your response to that been, "Well, we'll give away our version too"?
ZS: Well, definitely it's one of the things that went into the decision, but I don't think it's the only thing. We went into this business back in 2001, and we thought that in this day and age it makes sense for us to provide a free – both cost-free and hassle-free – solution for acceleration.
APC is going to become a standard part of the PHP distribution, but the inclusion of it isn't such a huge difference from the status quo. It's already in PECL and you can install it very easily, and if you look at PHP 6, the plan isn't to have it enabled by default. If people really like APC, they can disable the Zend Optimizer Plus and use APC, and it'll work the same except for a few small UI parts that are Zend-specific.
LXF: PHP 6 seems to be spending an awfully long time under construction. Is that some sort of Curse of the Number 6, as with Perl 6, or is it all part of the plan?
ZS: It could be, but I think we'll have PHP 6 before Python 6, though! PHP 6 is a much more difficult project than both PHP 4 and 5 were, for two main reasons. One is the amount of PHP code that's already out there... it's so huge. [The other is] every tiny compatibility breakage you introduce becomes a horrible headache for a lot of people. And combined with the main thing we want to do with PHP 6, which is the introduction of native Unicode support, actually it's pretty much impossible to obtain without introducing a significant amount of compatibility breakage into the language. I don't know how it's going to turn out – I'm being completely honest about that.
LXF: How easy will it be to move from PHP 5 to 6, as compared with moving from PHP 4 to 5?
ZS: The migration from 4 to 5 was fairly successful. It took a few years, but today PHP 5 is already more popular than 4 ever was. We've decided not to rush [the transition], so we're concentrating on PHP 5.3 at this point.
We made the decisions to add some of the features that originally were planned for PHP 6 – such as namespaces – into PHP 5.3, so that we don't have to rush PHP 6. It's probably going to take quite some time until PHP 6 is out. | http://www.techradar.com/news/software/zend-server-talks-java-says-no-rush-for-php-6-607616/2 | CC-MAIN-2017-13 | en | refinedweb |
User Tag List
Results 1 to 6 of 6
Thread: check_box selection
Threaded View
- Join Date
- May 2001
- 193
- Mentioned
- 0 Post(s)
- Tagged
- 0 Thread(s)
check_box selection
I'm trying to let users assign an article to one or more collections using checkboxes, but I'm not sure how to set up the check_box helpers. Here's what I have:
Code:
def new
@article = Article.new
@collections = Collection.find(:all)
end
Code:
<% for collection in @collections %>
<tr><td align="right"><%= check_box 'collection', 'id' %></td><td><%= collection.name %></td></tr>
<% end %>
Opened (main) project should be pre-scanned by the fast indexer to provide some data before the root is scanned by JavaCustomIndexer. The pre-scan indexer will provide only info about top-level public classes.
Integrated into 'main-golden', will be available in build *201202250400* on (upload may still be in progress)
Changeset:
User: Svata Dedic <[email protected]>
Log: #206069: Fast scanner and Type Provider implementation implemented | https://netbeans.org/bugzilla/show_bug.cgi?id=206069 | CC-MAIN-2017-13 | en | refinedweb |