text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
On 05/30/2013 01:39 AM, Michal Privoznik wrote: > On 30.05.2013 05:04, Eric Blake wrote: >> A cygwin build of the qemu driver fails with: >> >> qemu/qemu_process.c: In function 'qemuPrepareCpumap': >> qemu/qemu_process.c:1803:31: error: 'CPU_SETSIZE' undeclared (first use in this function) >> >> CPU_SETSIZE is a Linux extension in <sched.h>; a bit more portable >> is using sysconf if _SC_NPROCESSORS_CONF is defined (several platforms >> have it, including Cygwin). Ultimately, I would have preferred to >> use gnulib's 'nproc' module, but it is currently under an incompatible >> license. I'm still chasing that down (), but at this point, I'd feel safer delaying a gnulib bump until after 1.0.6 is out, since it has missed rc2. >> >> I'll wait for a review on this one, particularly since I'm still >> trying to solve another qemu failure on cygwin: >> >> qemu/qemu_monitor.c:418:9: error: passing argument 2 of 'sendmsg' from incompatible pointer type >> /usr/include/sys/socket.h:42:11: note: expected 'const struct msghdr *' but argument is of type 'struct msghdr *' As far as I can tell, this seems like a bug in cygwin's gcc (4.5.3); still no idea how I will work around that yet. >> -# define QEMUD_CPUMASK_LEN CPU_SETSIZE >> +# ifdef CPU_SETSIZE /* Linux */ >> +# define QEMUD_CPUMASK_LEN CPU_SETSIZE >> +# elif defined(_SC_NPROCESSORS_CONF) /* Cygwin */ >> +# define QEMUD_CPUMASK_LEN (sysconf(_SC_NPROCESSORS_CONF)) >> +# else >> +# error "Port me" >> +# endif >> >> typedef struct _virQEMUCloseCallbacks virQEMUCloseCallbacks; >> typedef virQEMUCloseCallbacks *virQEMUCloseCallbacksPtr; >> > > ACK Thanks; I've pushed this patch. -- Eric Blake eblake redhat com +1-919-301-3266 Libvirt virtualization library Attachment: signature.asc Description: OpenPGP digital signature
https://www.redhat.com/archives/libvir-list/2013-May/msg01987.html
CC-MAIN-2015-14
en
refinedweb
Rob Camick wrote:Don't know if its any better than the API, but the Transforming Shapes, Text, and Images tutorial might help. Campbell Ritchie wrote:Suggest you try copying the Graphics object with its create() method. You might have to cast it to Graphics2D. Then use its scale, rotate, translate and shear methods. The API for AffineTransform tells you which matrix operations they use for the different transforms. Craig Wood wrote:not sure whether this method is suppose to take a matrix or if it's suppose to take a set of arbitrary points AffineTransform contains the matrix and does the matrix math in the background. The transform method takes an arry of arbitrary points (in any space) and transforms them into the space defined by the AffineTransform. import java.awt.*; import java.awt.geom.*; import javax.swing.*; public class TransformingPoints extends JPanel { Rectangle2D.Double rect = new Rectangle2D.Double(105, 132, 175, 100); Point2D.Double[] origVertices; Point2D.Double[] destVertices; int n = 4; public TransformingPoints() { origVertices = new Point2D.Double[n]; origVertices[0] = new Point2D.Double(rect.x, rect.y); origVertices[1] = new Point2D.Double(rect.getMaxX(), rect.y); origVertices[2] = new Point2D.Double(rect.getMaxX(), rect.getMaxY()); origVertices[3] = new Point2D.Double(rect.x, rect.getMaxY()); destVertices = new Point2D.Double[n]; for(int i = 0; i < destVertices.length; i++) { destVertices[i] = new Point2D.Double(); } }(rect); markPoints(origVertices, g2); // Rotate. double x = rect.getCenterX(); double y = rect.getCenterY(); AffineTransform at = AffineTransform.getRotateInstance(Math.PI/4, x, y); g2.setPaint(Color.green.darker()); g2.draw(at.createTransformedShape(rect)); at.transform(origVertices, 0, destVertices, 0, n); markPoints(destVertices, g2); } private void markPoints(Point2D.Double[] pts, Graphics2D g2) { double r = 4.0; for(int i = 0; i < pts.length; i++) { Point2D.Double p = pts[i]; Path2D.Double path = new Path2D.Double(); path.moveTo(p.x, p.y-r); path.lineTo(p.x+r, p.y); path.lineTo(p.x, p.y+r); path.lineTo(p.x-r, p.y); path.closePath(); Color color = g2.getColor(); g2.setPaint(Color.black); g2.draw(path); g2.setPaint(color); } } public static void main(String[] args) { JFrame f = new JFrame(); f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); f.add(new TransformingPoints()); f.setSize(400, 400); f.setLocation(100,100); f.setVisible(true); } } Arron Ferguson wrote:But then I'm faced with the entire graphics context having the transforms/translations applied to it rather than local transforms/translations. Campbell Ritchie wrote: Arron Ferguson wrote:But then I'm faced with the entire graphics context having the transforms/translations applied to it rather than local transforms/translations. No, you only apply the transforms to the copy object. You need a copy object regardless; you can transform and un-transform a Graphics object, but the floating-point arithmetic is never quite precise, so your Graphics will be slightly skewed afterwards. That seems to be more of a problem with rotation and shearing than scaling or translation. And sorry for not replying earlier. And Craig Wood's code always works well, doesn't it
http://www.coderanch.com/t/451200/GUI/java/AffineTransform-transform-method
CC-MAIN-2015-14
en
refinedweb
On Tue, Jan 06, 2009 at 12:48:00PM +0000, Daniel P. Berrange wrote: > There are a number of problems breaking the windows / mingw > build currently. > > - Use of 'close' without importing unistd.h > - Use of non-existant localtime_r oops :-) > - ERROR macro from logging.h clashes with a symbol imported > from windows.h oh, crap, okay we need a namespaced one > So this patch does > > - Adds the missing unistd.h include > - Uses localtime() if localtime_r() is missing (as checked from > configure) > - Adds a VIR_ prefix onto all logging macros, keeping > DEBUG() around because its just used in sooooo many > places > > The use of localtime() on Windows is OK, because the MicroSoft > implementation of this uses thread-local storage: > > > > And all other OS we care about have localtime_r > > Finally I fix a few compile warnings, so that Mingw can now be > built with --enable-compile-warnings=error to detect these > problems more quickly in future. Patch looks fine. But I spotted we still use strerror, I though this had to be discarded because it wasn't thread-safe ? +1 Daniel -- Daniel Veillard | libxml Gnome XML XSLT toolkit daniel veillard com | Rpmfind RPM search engine | virtualization library
https://www.redhat.com/archives/libvir-list/2009-January/msg00053.html
CC-MAIN-2015-14
en
refinedweb
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo You can subscribe to this list here. Showing 11 results of 11 The code: ********************HeaderTemplateClasses.hpp BEGIN************************** template<unsigned n> class Class_A{ private: template<unsigned m> struct Class_B{ unsigned A[m]; Class_B(void){for(unsigned I=0;I<m;I++) A[I]=I;} }; static const Class_B<n> StDt; public: static unsigned GNum(unsigned index); }; template<unsigned l> unsigned Class_A<l>::GNum(unsigned i){ if(i < l)return StDt.A[i]; else return 0;} template<unsigned p> const Class_A<p>::Class_B<p> Class_A<p>::StDt; ********************HeaderTemplateClasses.hpp END************************** ********************HeaderTemplateClasses.cpp BEGIN************************** #include "HeaderTemplateClasses.hpp" ********************HeaderTemplateClasses.cpp END************************** ********************MAIN_FILE.cpp BEGIN************************** #include <iostream> #include <cstdlib> #include "HeaderTemplateClasses.hpp" using namespace std; int main(void) { unsigned index=0; do { cout << "Dame el indice que quieres conocer : "; cin >> index; cout << Class_A<3>::GNum(index) << " " << Class_A<5>::GNum(index) << " " << Class_A<7>::GNum(index) << endl; }while(index!=11 && cin); return 0; } ********************MAIN_FILE.cpp END************************** It is a copy&paste from the project for console in Dev-C++ 5 with MinGW 2.95.7 special. Are there any sugestions to make better the code or are there any problem in the code?. Thanks. Julián. I didn't know these lines: >// This is the static template data member definition. It should be in >// the same header where 'MyTemplateClass' is defined. > >template <unsigned num> >const mi_vector<num> MyTemplateClass<num>::MyStaticDataMember; Please perdon for the unchecked code and I'll try this lines in the code. Thanks. -- Oscar _______________________________________________ MinGW-users mailing list MinGW-users@... You may change your MinGW Account Options or unsubscribe at: Juli=E1n Calder=F3n Almendros <julian_calderon@...> writes: > How Do I Initialize this static data member: >=20 > **********File_1.hpp begin************** > template<unsigned number> struct mi_vector{ > unsigned[number] my_array; unsigned my_array[number];=20 > mi_vector(void){for(int I=3D0;I<number;I++) my_array[I]=3DI;} The above member function should be: void mi_vector(void) {=20 for(unsigned I=3D0; I<number; I++) my_array[I]=3DI; } Note the 'void' return type. In C++ you should declare the type returned by every function or 'void' if it does not return nothing. Also note that is a bad practice to compare signed integers with unsigned integers. 'for(int' changed to 'for(unsigned' > }; > template<unsigned num> class MyTemplateClass{ > static const mi_vector<num> MyStaticDataMember; > }; // This is the static template data member definition. It should be in // the same header where 'MyTemplateClass' is defined. template <unsigned num>=20 const mi_vector<num> MyTemplateClass<num>::MyStaticDataMember; > **********File_1.hpp end**************** > **********File_1.cpp begin************** > #include "File_1.hpp" > const my_vector<num> MyStaticDataMember(num);//=BF=BF?? Nope. See above. The definition of the template data member must be visible for all the units that specialize the template. And, over all, don't cofuse template parameters with function/constructor parameters. > **********File_1.cpp end**************** > **********File_Main.cpp begin*********** > #include "File_1.hpp" > void main(){ 'main' _must_ return 'int' although you can omit the final 'return'. In that case 'main' returns 0. So replace the above with int main() { > MyTempClass<9> WishedObject(); MyTemplateClass<9> WishedObject; For parameter-less constructors the parens should be avoided. In this case the compiler thinks you are _declaring_ a function called WishedObject who returns a MyTempClass<9>. So the above is not a constructor at all. Remove the parens. > }; That semi-colon at the end of a function definition is against the Standard too :-) Some compilers will report an error. > **********File_Main.cpp end*********** Next time please check the code before posting and remove the trivial errors. Use copy&paste to put the code in the msg. --=20 Oscar Just redefining this programming question for what it is, a programming question. Static variables are used in the Mingw runtime, but this does not appear to be Mingw related. On 7 Apr 2002 at 19:06, Julián Calderón Almendros wrote: >*********** > > > _______________________________________________ > MinGW-users mailing list > MinGW-users@... > > You may change your MinGW Account Options or unsubscribe at: > Hi folks, On 7 Apr 2002 at 4:31, Oscar Fuentes wrote: > . What Oscar said is correct. Isidro, you need to tell us what a .h file is to you first. To most people who program in either c or c++, .h files are specific files for specific uses. The .h on a file typically means .h(eader) file. Any good C/C++ programming documentation will point out to you what an .h file really is, as well as what it is and can (or can not) be used for. Finally, the question is a programming question that doesn't really have a bearing on the Mingw runtime or how Mingw runs/functions. In the future it might be a good idea, in order to maintain clarity and order on the list, to entitle these sorts of posts as either a) OT or b) Semi-OT. OT means "Off Topic". Thanks for your patience. Paul G. > > -- > Oscar > > > _______________________________________________ > MinGW-users mailing list > MinGW-users@... > > You may change your MinGW Account Options or unsubscribe at: >*********** Introduction ============ existence and development is impossible without community attention and contribution. Please submit bugreports via this link: (if you are registered SourceForge user, please submit when you are logged in). Submit patches for MinGW runtime/tools, corrections and additions for the web pages via . MinGW maintainers, greetings. sorry, i know this topic came up before, but i still can't seem to find the info i require. would like to know where i might find the headers for sgi's opengl 1.2 implementation. or are they already in the distribution packages? or should i use the headers that come with sgi's opengl 1.2? (those seem to be for msvc and borland though, i'm not sure if the msvc version could be used and have not tried... sorry...) i tried this site: but it seems to be down? i would assume i can use the .lib files supplied with sgi's opengl implementation without problems.. this part and below are somewhat off topic... (sorry) btw, which is the newer or better implementation of opengl? i would guess that sgi's implementation is superior. but some sites seem to say otherwise... weird... they claim that microsoft's implementation of opengl has replaced sgi's implementation... and therefore, sgi is also no longer distributing their implementation of opengl on win32. (i had a hard time looking for the binaries actually). yet, some sites claim that sgi made the implementation to counter directx and to show ms's implementation of opengl was sucky, and to correct the publics perception of opengl. can anyone shed some light? anyway, i would assume that sgi's implementation is free from certain bugs that haunt the ms implementation too?;en-us;Q272222 thanks in advance. rgds, kh I cannot know if anyone is still interested in this topic, but it's friday and I feel a holiday coming on and feel in the mood to expound out of compassion for others (nothing to with Easter necessarily, I am not a Xian but trying to be a Buddhist...). So here goes again... On Fri, 22 Mar 2002 11:34:50 -0600 (CST), "M Joshua Ryan" <josh@...> said: > On Fri, 22 Mar 2002, Soren Andersen wrote: > > > On Fri, 22 Mar 2002 09:33:15 -0600 (CST), "M Joshua Ryan" > > <josh@...> said: > > > > > then you probably need to look at CreatePipe, DuplicateHandle, > > > and CreateProcess in win32. for cygwin, you can stick with > > > pipe(2). > > > > Thanks, Joshua, but there's still some kind of breakdown in > > comprehension here. I think it might stem from lack of familiarity > > with UNI*-style tool pipelining, in shell scripts, on your end? In > > what i am doing here, the *SHELL* is going to be creating a pipe > > (sometimes, according to user decree, NOT programmer choice!!) and > > supplying all the context for what's going on here. This is a > > cross-platform stand-alone console application, not a Win32-only > > complex something-or-other. The real crux of my initial query lies > > in that porting applications to Windows from standard UNI* -style > > code is complexified because you cannot treat stdin and stdout on > > Win32 like any other filehandle, as in the UNI* credo "everything > > is a file". > as i understand it, normal windows batchfiles cannot create pipes. > it's part of their inheritance from DOS. i don't think there's a > terrible lack of understanding on my part. > unix shells create pipes between programs using the pipe(2) system > call to setup the proper connections between filehandles before > calling fork(2). since DOS didn't have a fork() (because it didn't > have multiple processes), the command shell implemented "piping" by > redirecting the output of the first program to a temporary file, > running it to completion, and then redirecting the input of the > second program to the temporary file. -- The makefiles which eventually result from using 'automake' 1.5x are monstrosities. Sheer hellish madness. Several dozen targets, named obscene things like "am_remake_your_mother"; utterly counter-intuitive, buried in 4 or 5 levels of indirection, swamped in a thousand lines of baffling, migraine-inducing auto-generated superfluity. [These] Makefiles ought to be taken out and bled to death slowly, shot, burned, staked through the heart, generally Buffy-ated to the maximum possible extent. -- Soren Andersen (me) in <> . -- Oscar Hi I would like to know the correspondence between the .h of MFC and = the .h of Mingw. could someone help me. Tks Isidro
http://sourceforge.net/p/mingw/mailman/mingw-users/?viewmonth=200204&viewday=7
CC-MAIN-2015-14
en
refinedweb
29 April 2011 20:36 [Source: ICIS news] HOUSTON (ICIS)--BASF is seeking a price increase on ?xml:namespace> The price increase nomination was for 4 cents/lb ($88/tonne, €59/tonne) effective 16 May, or as contracts allowed, according to a buyer. The initiative brings BASF in line with other SABIC Innovative Plastics announced a 5 cent/lb increase for its extruded pipe and sheet market effective 2 May, or as contracts allow. US domestic ABS prices were assessed by ICIS at 148-168 cents/lb DEL (delivered) for extrusion material and 139-158 cents/lb for injection material. One buyer said he expects the price nominations will go through, based on rising costs for acrylonitrile and butadiene. "Feedstocks are so tight, and the availability as far as production is concerned is limited," the buyer said. "Until things start to ease up a little bit, they can increase their prices." INEOS ABS also announced a price increase nomination of 5 cents/lb effective 1 June, on top of the 4 cents/lb they are seeking in May, according to a customer letter obtained by ICIS. Major producers of ABS are BASF, INEOS ABS, SABIC Innovative Plastics and Styron. ($1 = €0.67) For more on acrylonitrile-butadiene
http://www.icis.com/Articles/2011/04/29/9456128/basf-seeks-4-centlb-price-hike-for-may-us-abs.html
CC-MAIN-2015-14
en
refinedweb
Technical Support On-Line Manuals Compiler Reference Guide --using_std --no_using_std This option enables or disables implicit use of the std namespace when standard header files are included in C++. std This option is provided only as a migration aid for legacy source code that does not conform to the C++ standard. Its use is not recommended. This option is effective only if the source language is C++. The default is --no_using_std. Namespaces.
http://www.keil.com/support/man/docs/armccref/armccref_CHDECCJF.htm
CC-MAIN-2015-14
en
refinedweb
IRC log of svg on 2012-09-17 Timestamps are in UTC. 07:01:47 [RRSAgent] RRSAgent has joined #svg 07:01:47 [RRSAgent] logging to 07:01:49 [trackbot] RRSAgent, make logs public 07:01:49 [Zakim] Zakim has joined #svg 07:01:51 [trackbot] Zakim, this will be GA_SVGWG 07:01:51 [Zakim] I do not see a conference matching that name scheduled within the next hour, trackbot 07:01:52 [trackbot] Meeting: SVG Working Group Teleconference 07:01:52 [trackbot] Date: 17 September 2012 07:03:04 [heycam] Meeting: SVG WG Switzerland F2F Day 1 07:03:07 [heycam] Chair: Cameron 07:03:45 [birtles] birtles has joined #svg 07:04:22 [heycam] Agenda: 07:06:06 [krit] krit has joined #svg 07:06:31 [krit] Zakim, krit is me :D 07:06:31 [Zakim] I don't understand 'krit is me :D', krit 07:09:22 [Tav] Tav has joined #svg 07:10:39 [nikos] nikos has joined #svg 07:11:11 [nikos] scribenick: nikos 07:11:49 [nikos] Topic: Pass Path object to SVG path 07:11:49 [stakagi] stakagi has joined #svg 07:12:22 [nikos] krit: It would be better to discuss this when Tab is here 07:12:26 [nikos] Topic moved 07:13:12 [nikos] Topic: Linking to external style sheets — should we have <link>? 07:13:38 [nikos] heycam: I was wondering, in the spec at the moment (1.1), it says if you want to reference external stylesheets you should use xml processing instruction 07:13:42 [nikos] ... we should have another way as well 07:13:47 [nikos] ... we at least need to say @import works 07:13:55 [nikos] ... referencing css should get that for free 07:14:23 [nikos] krit: Is it not possible currently? 07:14:27 [nikos] heycam: You can 07:15:32 [heycam] s/can/can't/ 07:16:18 [nikos] heycam: you can't use the xml stylesheet processing instruction in html so we should decide what is the prefered way to link to external stylesheets 07:16:27 [nikos] ... one option is to have link in svg, just like in html 07:16:35 [nikos] birtles: yes 07:16:57 [nikos] ed: What would happen if you copy pasted svg with link element into html5 07:17:30 [nikos] ... link is current an html element so if you pasted svg that is using it into html would it go back to html? 07:17:40 [nikos] ... you can create it with the dom or put it in a foreignObejct 07:17:51 [nikos] s/foreignObejct/foreignObject 07:17:58 [nikos] heycam: it seems to work in firefox 07:18:57 [nikos] heycam: I was testing whether the parser breaks out into html 07:19:01 [heycam](document.getElementsByTagName(%27link%27 )%5B0%5D.namespaceURI)%3C%2Fscript%3E 07:19:13 [nikos] shepazu: Isn't there a white listing of svg elements? or is that just for changing case? 07:19:22 [nikos] heycam: Yes, just changing case 07:19:47 [nikos] krit: what happens if you move the node in the DOM? 07:20:02 [nikos] heycam: I think the link element in the html namespace doesn't have any effect at the moment 07:20:29 [nikos] ... if you try and reference some stylesheet would it work? 07:20:48 [nikos] ... it looks like the way it behaves is - you can have it anywhere in the document 07:21:00 [nikos] ... if we want it to work we specify it the same way except it's in the svg namespace 07:21:29 [andreas] andreas has joined #svg 07:21:40 [victor] victor has joined #svg 07:21:42 [nikos] ... I think we are eventually going to have a bunch of these elements that either share or that can work in both svg and html namespaces 07:21:57 [nikos] krit: I'm just testing safari, if you put a link elements it's in the svg namespace 07:22:55 [nikos] heycam: what do you think of the idea dirk? 07:23:11 [nikos] krit: I like it but I'm wondering what happens when you move nodes in the DOM 07:23:41 [nikos] ... I'm a bit afraid the namespace won't change if you move it in the DOM 07:23:48 [nikos] heycam: nothing changes automatically when you move it 07:23:52 [nikos] krit: and is that ok? 07:24:11 [nikos] shepazu: henry will complain about changes to the parser - are there any? 07:24:18 [nikos] heycam: not for this because it's already in the svg namespace 07:24:25 [nikos] ... maybe it would be a problem if it broke out 07:24:30 [birtles] s/henry/henri/ 07:24:32 [nikos] krit: I wish to have the link element 07:24:40 [nikos] heycam: it gets put in the html namespace 07:24:59 [nikos] ... you can't declare namespaces explicitly but the dom nodes get created in the namespace 07:25:20 [nikos] shepazu: I think the reason for doing it this way is to allow authors to do it without thinking about it - they already know how to do it in html 07:25:28 [nikos] heycam: I think it would work nice and obviously 07:26:07 [nikos] cabanier: it works in Chrome 07:27:06 [nikos] shepazu: if that's true, we should match that user agent 07:28:01 [nikos] heycam: I couldn't get it work in Chrome 07:28:48 [nikos] cabanier: I know it loads the sheet (can see it in the debugger) but I don't know if it applies it 07:28:56 [nikos] ... so it's like half implemented 07:30:04 [ed] 07:30:09 [nikos] heycam: anyone think it's bad idea 07:30:12 [nikos] all: no 07:30:20 [nikos] shepazu: we should ask the html working group 07:30:28 [nikos] heycam: we should ask them if there is anything we haven't thought of 07:30:54 [nikos] Resolution: We will add a link element to SVG that behaves in the same way as HTML 07:32:00 [nikos] Action: Cameron to email the HTML and WhatWG working groups to ask if there any problems related to adding the HTML link element into SVG 07:32:00 [trackbot] Created ACTION-3351 - Email the HTML and WhatWG working groups to ask if there any problems related to adding the HTML link element into SVG [on Cameron McCormack - due 2012-09-24]. 07:32:21 [konno] konno has joined #svg 07:33:23 [nikos] Topic: enable-background naming in relation to compositing and blending 07:33:35 [nikos] cabanier: The name is a bit confusing 07:33:44 [nikos] ... it doesn't match what users are familiar 07:33:50 [nikos] ... in the compositing spec it was replaced with isolation 07:34:16 [nikos] ... so we have the existing enable-background keyword, we can't get rid of it or it will break content 07:34:22 [nikos] ... we were thinking of having it shadow 07:34:26 [nikos] ... like an alias 07:34:35 [nikos] krit: css wg tries to define shadowing properly 07:34:47 [nikos] ... they have the same problem for some text properties 07:35:37 [andreas] andreas has joined #svg 07:36:01 [nikos] nikos: we are thinking that the property should apply to compositing and blending and filter effects 07:36:08 [nikos] ... you shouldn't be able to specify different modes for each 07:36:28 [nikos] heycam: so there's 2 issues really, having the property apply to both and the naming 07:36:37 [nikos] heycam: do you have an example of css shadowing? 07:36:56 [nikos] krit: not yet, we don't have a specification, but the css wg have expressed an interest in the idea 07:37:05 [nikos] heycam: what about if you use the css om to look up property values? 07:37:18 [nikos] ... like there's a single variable underneath but there's 2 properties 07:37:30 [nikos] krit: the question is which takes effect if both are specified 07:37:34 [nikos] cabanier: I think the last one wins 07:39:24 [nikos] ... what happens currently if you say 'opacity=1 opacity=0.5'? 07:39:51 [nikos] heycam: what happens currently for prefixed and not when you support them both? 07:40:03 [nikos] krit: they are both in the style declaration as far as I know 07:40:08 [nikos] heycam: with one underlying variable? 07:40:17 [nikos] krit: I'll have to check 07:40:30 [nikos] heycam: I assume the css working group wants to define how this works and not us 07:40:37 [nikos] cabanier: so are we ok combining them? 07:40:47 [nikos] heycam: I think so - as long as enable-background works for existing things 07:40:59 [nikos] ... so enable-background has numbers when you specify new? or did we get rid of that 07:41:03 [nikos] krit: we got rid of that 07:41:35 [nikos] krit: looking at the source code - we treat them as different properties 07:42:09 [nikos] ... for box-shadow and webkit-box-shadow we have two different style declarations 07:42:23 [nikos] ... you can set box-shadow and webkit-box-shadow and they can be different 07:42:36 [nikos] ... when rendering one will win but I'm not sure which 07:42:47 [nikos] heycam: I think that as long as it is worked out then I'm ok with it 07:43:01 [nikos] cabanier: do we know who is working on the shadowing? 07:43:08 [nikos] krit: no, we'll have to ask 07:44:07 [nikos] Resolution: We want isolation property to shadow enable-background and we will ask the CSS working group about the details 07:44:19 [nikos] Action: Rik to ask CSS WG about how shadowing will work for enable-background 07:44:19 [trackbot] Created ACTION-3352 - Ask CSS WG about how shadowing will work for enable-background [on Rik Cabanier - due 2012-09-24]. 07:45:02 [nikos] krit: I think we will put both properties in Filter Effects and mark one as preferable 07:45:24 [nikos] nikos: So is it ok for one property to control both filter-effects and compositing and blending? 07:45:28 [nikos] all: yes that's ok 07:46:03 [nikos] Resolution: The enable-background/isolation will apply to both Filter Effects and Compositing and Blending 07:46:34 [nikos] Topic: Filter Effects - keep new fe*elements that lack description ATM? 07:47:04 [krit] 07:47:17 [krit] 07:47:35 [nikos] krit: We have different filter effects that lack definition and I would like to know if we want to keep them and add description or is it not neccesary? 07:47:54 [nikos] ... I'm just talking about filter primitives and not the shorthands 07:48:15 [nikos] cabanier: does anybody implement these? 07:48:17 [nikos] krit: no 07:48:28 [nikos] krit: we are about to implement feCustom 07:48:33 [nikos] heycam: who wanted them ? 07:48:48 [nikos] ed: diffuse specular was meant to be an optimisation to give better performance 07:48:58 [nikos] ... I'm not sure if it's really needed as the filter chain does the optimisation 07:49:08 [nikos] heycam: would it make it better for authors? 07:49:11 [nikos] ed: probably not 07:49:17 [nikos] heycam: I say get rid of them then 07:49:31 [nikos] ed: I don't mind removing them if they don't seem to be useful 07:50:37 [nikos] ed: you could do feUnsharpMaskElement with scripting and other filter effects 07:50:41 [nikos] ... but we don't have a shorthand for it 07:50:46 [nikos] Resolution: Remove feUnsharp and feDiffuseSpecular from Filter Effects specification for now - may be added again in future 07:50:54 [krit] 07:51:05 [nikos] Topic: Filter Functions 07:51:19 [nikos] are other shorthands needed? 07:51:36 [nikos] krit: The question is do we put in a bug report on Safari for functions that are not implemented? 07:52:07 [ed] s/filter chain does the optimisation/implementation can analyze the filter chain and do the same optimisation 07:52:25 [nikos] krit: I had suggestions for other filter functions that we could have, but I have not had any feedback 07:52:31 [nikos] ... so I'm wondering if we can remove the issue 07:53:02 [nikos] heycam: it doesn't hurt to keep it in, but if you're wanting to finalise and go to last call then you can remove it 07:53:10 [nikos] krit: there's no problem adding new shorthands in the later version 07:53:17 [nikos] ... the question is do we freeze what we have now? 07:53:28 [nikos] heycam: I think the set that is there currently is a reasonable set 07:54:20 [nikos] Resolution: Remove filter function suggestions (issue 5) from Filter Effects spec 07:55:03 [nikos] Action: Dirk to file a bug and track filter functions (issue 5) removed from Filter Effects spec 07:55:04 [trackbot] Created ACTION-3353 - File a bug and track filter functions (issue 5) removed from Filter Effects spec [on Dirk Schulze - due 2012-09-24]. 07:55:53 [nikos] Topic: Removing pixelUnitToMillimeter{X,Y} and screenPixelToMillimeter{X,Y} 07:56:06 [nikos] heycam: I didn't know this existed and I'm not sure anyone would use these API functions 07:56:09 [nikos] krit: Is anyone using them? 07:56:14 [nikos] heycam: I'm not sure, I can check. 07:56:30 [nikos] andreas: is there an alternative way to get at the pixel size? 07:56:48 [nikos] heycam: I think not but I think it's been discussed in the CSS WG whether you should access the real DPI and do something based on that 07:56:57 [nikos] cabanier: you mean the HD stuff? 07:56:59 [nikos] heycam: I'm not sure 07:57:03 [nikos] krit: so why do you want to remove it? 07:57:17 [nikos] heycam: our implementation returns a constant value assuming 96 dpi. 07:57:22 [nikos] cabanier: what does webkit do? 07:57:25 [nikos] krit: the same thing 07:57:58 [nikos] cabanier: it could be implemented to not be constant in the future - for HD devices 07:58:26 [nikos] krit: the idea is that we want to know how many mm it is on the screen 07:58:36 [shepazu] shepazu has joined #svg 07:58:40 [nikos] heycam: I remember people objecting to that in some other context 07:59:00 [nikos] cabanier: how do you know what the screen size is? 07:59:44 [nikos] krit: Firefox used to expose that information in an API but removed it. 07:59:54 [nikos] heycam: now everyone has converged on CSS units being a fixed number of pixels 08:00:08 [nikos] krit: the problem is that some platforms don't give you the exact DPI, so it could not be implemented on some platforms 08:01:00 [nikos] heycam: I think one of the problems with physical units is you want to know it for different reasons - like touch events (finger size) and font size (but how far away are they from the screen) 08:01:16 [nikos] krit: I don't know how they would be used 08:01:45 [nikos] shepazu: you'd be surprised - some people implement for one browser and if it's implemented in one browser it gets used 08:02:04 [nikos] ed: Opera is hard coded also 08:02:28 [nikos] ... I think it's the same value as other implementations 08:03:04 [nikos] krit: the css working group is looking into ways to get screen pixels per CSS pixel. 08:03:15 [nikos] heycam: but it doesn't tell you the physical size 08:03:35 [nikos] cabanier: are you going to remove pixels per inch as well? 08:03:43 [nikos] heycam: yep there's not 4 methods 08:03:59 [nikos] cabanier: I think somebody somewhere is using them 08:04:05 [nikos] ed: I'd be surprised to see them used 08:05:09 [nikos] heycam: I just googled screenPixelToMillimeterX and got no hits 08:05:14 [nikos] ... with svg file types 08:05:34 [nikos] cabanier: I wonder if they will become more useful 08:05:39 [nikos] ... in future 08:08:01 [nikos] heycam: I would like to ask the CSS WG what they think about units 08:08:23 [nikos] krit: the question is how to get the physical size, but I don't think the CSS WG is working on that 08:08:39 [nikos] heycam: the topic about mm not being real mm anymore comes up often and I'd like to know the details 08:08:55 [nikos] ... is it because it's probably not what authors want 08:09:15 [nikos] krit: if I say 2cm but then project it on the wall I don't want it to be 2 cm 08:10:59 [nikos] heycam: I'll ask Chris 08:12:51 [nikos] Topic: How to internationalize "title"? 08:13:05 [nikos] Tav: This came up at SVG open. Someone was asking whether it could be done. 08:13:21 [nikos] heycam: My first suggestion would be to allow system language on the title element. 08:13:25 [nikos] ed: that's already done isn't it ? 08:13:30 [nikos] heycam: I'll look 08:13:42 [nikos] ... The question is how to internationalise the title 08:13:50 [nikos] shepazu: how about we use switch? 08:14:33 [nikos] Tav: how would it work? 08:14:44 [nikos] shepazu: you would have the switch element with different copies of the titles 08:15:02 [nikos] heycam: what is switch a child of ? 08:15:13 [nikos] shepazu: how about we change switch to allow text content ( if it doesn't already) 08:15:36 [nikos] ... I think switch only works at an element level now and not at a text level 08:15:41 [nikos] ... I think we should change it to text 08:15:51 [nikos] ... then you would switch on the actual text content rather than on the element 08:15:57 [nikos] ... title would be a child element 08:16:27 [nikos] krit: can you change paragraphs in HTML to change content based on the language? 08:16:31 [nikos] shepazu: there is no mechanism for doing that 08:16:57 [nikos] ... since we are changing switch anyway, we could add something like a span that is basically meaningless? We have tspan - it's not a child of title 08:17:10 [nikos] ... we have a few options 08:17:18 [nikos] ... we can change switch to have text content, but what is the child element of the switch 08:17:33 [nikos] ... we can change title apply to the parent of the switch 08:17:50 [nikos] ... if title is encased in a switch element it jumps up one level - the switch is transparent in regards to the title 08:18:11 [nikos] Tav: what about the desc element? 08:18:19 [nikos] shepazu: or tooltip, or whatever 08:18:27 [nikos] ... they'd be the same 08:18:45 [nikos] heycam: on switch you can have all the group attributes. 08:18:56 [nikos] shepazu: people shouldn't use switch for that - you can but you shouldn't 08:19:01 [nikos] ... switch should be transparent 08:19:12 [nikos] heycam: I think you should be able to do multiple title elements 08:19:37 [nikos] ... like <rect><title systemLanguage="..." /> 08:19:52 [nikos] ... or have language tags which are subsets 08:20:11 [nikos] ... we could resolve that the first title element that matches the conditional processing attributes is used 08:20:20 [nikos] ... if you want a fallback you have a title without conditional attributes 08:20:25 [ed] just playing a bit with the systemLanguage: 08:20:31 [nikos] ... and resolve that only one title applies to an element 08:20:46 [nikos] shepazu: that's good. So we'd be getting rid of the switch element for this case 08:21:02 [nikos] ... we should tell people, for legacy reasons, always put the default first 08:21:15 [nikos] krit: and instead of systemLanguage can we just use lang? 08:21:30 [nikos] shepazu: it might not be the system language it might just be the language of preference, so that sounds like a good idea 08:21:37 [nikos] ... we can change it in switch as well 08:21:52 [nikos] ... anything camel cased needs to die, die, die, die, die! 08:22:14 [nikos] ... I agree with something short but we'd be overloading lang then 08:22:18 [nikos] ... but that's actually useful 08:22:35 [nikos] ... the problem is, what happens if you do 08:22:57 [nikos] <text><tspan lang="de">Hallo</tspan><tspan lang="en">he said.... 08:23:11 [nikos] heycam: I would be fine with allowing lang on title as a special thing 08:23:14 [nikos] shepazu: it seems strange 08:23:27 [nikos] krit: in general you can just display one title 08:23:38 [nikos] shepazu: are we going to generalise this and get rid of the switch element in other circumstances? 08:23:45 [nikos] heycam: I don't think it would work in other cases 08:24:05 [nikos] shepazu: ok I'm fine with it for any meta element (description also) 08:24:28 [nikos] shepazu: we are giving special characteristics to title with respect to what the language is 08:24:31 [nikos] heycam: I'm ok with that 08:24:54 [nikos] ... I think systemLanguage makes more sense with the current functionality but lang looks nicer 08:25:18 [nikos] shepazu: lets leave lang as a special meta data thing that is a special case of switch 08:25:52 [nikos] ed: for title it works not sure about desc 08:26:01 [nikos] Tav: how is title as an attribute different from title as an element? 08:26:14 [nikos] ... do the browsers treat it differnet? 08:26:20 [nikos] s/differnet/differently 08:26:25 [nikos] shepazu: We just hint what you should do 08:26:45 [nikos] heycam: one of the arguments of having title as an element is good for languages which require disambiguation of direction 08:27:02 [nikos] ... svg doesn't have the elements to control that so I think it's a moot issue for us 08:27:42 [nikos] heycam: what about if we had title as a property? 08:27:59 [nikos] shepazu: that would promote poor practice for internationalisation 08:28:11 [nikos] heycam: I don't know if you'd want to download a big file with lots of languages 08:28:36 [nikos] shepazu: I quite like using many languages in SVG. it wasn't designed to be text heavy. 08:29:09 [nikos] ... one thing we didn't talk about is - a common workflow for skins is to point to an external resource for the languages 08:29:27 [nikos] ... it looks up the value and inserts it. We could enable something like that for SVG 08:29:37 [nikos] ... people could point off to their text file that contains different languages 08:29:45 [nikos] heycam: I think you could already do that for text other than title 08:29:52 [nikos] shepazu: we should define how that happens 08:29:56 [nikos] heycam: it's defined. 08:30:13 [nikos] shepazu: We should give examples of how it works to promote best practices 08:30:43 [nikos] ... it could be an href like tref and if it's defined you go and retrieve it and thats where all the switching stuff is done. 08:30:57 [nikos] ... if it doesn't get the file then it could use the default of what you put in the <text> 08:31:22 [nikos] ... I think it would allow people to customise these things more easily - it's just a proposal 08:31:36 [nikos] Tav: allowing lang on title solves the use case that was put to be at SVG open 08:33:04 [nikos] krit: so the first title doesn't need a lang but the rest do? 08:33:27 [nikos] heycam: current implementations look at the first title only, so if you are happy having that as a fallback that will always work you need to put it first 08:33:38 [nikos] and then you could still put lang on it if that's what you want for the default 08:35:15 [nikos] heycam: the issue is, I think, if you have the first title and it has an obscure language, you don't want that to be the default in the new system if other languages are provided 08:35:19 [nikos] krit: I don't agree, 08:35:50 [nikos] shepazu: for implementations that support this it will check every language and display the first match 08:35:54 [nikos] ... so it has the behaviour you want 08:36:01 [nikos] ... it's just for legacy purposes the default is the first 08:36:06 [nikos] krit: that's ok 08:36:20 [nikos] heycam: the only downside is the order checking is different from switch 08:36:30 [TabAtkins] TabAtkins has joined #svg 08:36:31 [nikos] shepazu: it's not switch though it's something different 08:36:44 [nikos] Resolution: Add lang to title and desc 08:36:53 [TabAtkins] Ugh, I slept straight through my alarm, wow. Where are we? Like, physically? 08:37:27 [TabAtkins] kk 08:38:19 [nikos] Action: Tav to add lang to tile and desc 08:38:20 [trackbot] Created ACTION-3354 - Add lang to tile and desc [on Tavmjong Bah - due 2012-09-24]. 08:41:09 [nikos] Topic: Removing pixelUnitToMillimeter{X,Y} and screenPixelToMillimeter{X,Y} 08:41:18 [nikos] andreas: I did some reasearch 08:41:43 [nikos] ... there's a property you can create in Javascript called window.devicePixelRatio 08:41:57 [nikos] ... Opera supports it but it has a different value than WebKit 08:42:04 [nikos] ... that's all I've really found that does that 08:42:17 [nikos] Tab: It's a proprietary webkit thing right now 08:42:28 [andreas] 08:42:56 [nikos] Tab: I think if they're useful remove them but they could be useful since we have the possibility of non-square pixels 08:43:23 [nikos] s/I think if they're useful remove them but they could be useful since we have the possibility of non-square pixels/I think if they're use-less remove them but they could be useful since we have the possibility of non-square pixels 08:43:59 [nikos] nikos: would it be more useful to just get the ratio and not worry about the size? 08:44:10 [TabAtkins] TabAtkins has joined #svg 08:44:24 [nikos] Tab: I think getting the X width is the most useful and then Y could either be an actual height or a ratio 08:45:03 [nikos] heycam: this seems to be a bit different since the SVG functions relate to a specific size on the screen rather than a ratio 09:12:29 [ed] scribeNick: ed 09:12:42 [ed] topic: passing path object to svg path 09:12:44 [krit] 09:12:52 [nikos] nikos has joined #svg 09:13:01 [cabanier] cabanier has joined #svg 09:13:06 [ed] DS: in html canvas we have an interface to build a path object 09:13:18 [ed] ... would be good if we could get the path and pass it to the svgdom 09:13:30 [ed] ... to append to the svg path 09:13:42 [ed] CM: can it serialize to the svg pathstring? 09:13:55 [ed] DS: no, there are some differences 09:14:03 [ed] ... arcto for example 09:14:21 [ed] ... suggest we leave it up to the browser to normalize the path 09:14:34 [ed] ... and not specify exactly how 09:14:47 [ed] CM: i think it should be specified 09:15:58 [ed] RC: if you draw a circle it stores a bunch of bezier curves 09:16:08 [ed] ... so if you read the path out it looks ok, but it's no longer a circle 09:16:35 [ed] doug: so if I do a circle in skia and a circle in cairo i would get the same result? 09:16:52 [ed] DS: might be different, but should look the same 09:17:10 [ed] doug: is this exposed to the user? 09:17:30 [ed] TA: the UA would need to remember the original commands, doesn't do this atm 09:17:54 [ed] DS: i'd prefer the first version to not specify the normalization 09:17:59 [ed] CM: i don't like it 09:18:14 [ed] TA: if you're doing it in script you're going to have to do it anyway 09:18:43 [ed] BB: scripts handle one segments produces in one browser, that's how most authors work 09:19:28 [ed] Doug: experienced that with freaky mouth example, didn't work in opera due to how path normalization worked there 09:19:37 [ed] CM: we could require one or more beziers 09:19:43 [ed] ... not knowing how many you get 09:19:59 [ed] Doug: doesn't help the author at all 09:20:19 [ed] CM: the predictable thing is that you'll get a number of beziers 09:20:34 [ed] doug: think about animation 09:20:58 [ed] DS: we could specify how arc is normalized, as quadratic curves 09:21:36 [ed] TA: all implementations support cubic beziers 09:22:00 [ed] CM: why not have a configurable thing to store the original path commands? 09:22:36 [ed] TA: we should specify how many beziers an arc gets turned into 09:23:24 [ed] BB: is there a concrete usecase for this? or is this just about the procedural api for creating the path? 09:23:32 [ed] TA: i think the api is a use-case 09:23:51 [ed] BB: we could just support the canvas path api in svg 09:25:14 [ed] TA: the path is a toplevel global api, so you can create the path object without having a canvas 09:26:34 [ed] ... will break paper.js, but that's fine, will just shadow the native interface 09:27:06 [ed] CM: there's an interface CanvasPathMethods that we could inherit 09:27:27 [ed] ... to have them directly on the <path> element 09:27:56 [ed] DS: but then you can't use this to create a canvas path, and reuse the path object 09:28:23 [ed] RC: you can contruct it with an svg path string 09:29:13 [ed] TA: we could also make it serialize out to something that works in svg path@d attribute 09:29:33 [ed] DS: like that idea 09:29:56 [ed] ...but it doesn't make sense that it has to serialize and then reparsed by svg 09:30:00 [ed] ED: agree 09:30:31 [ed] Doug: element.addPath(obj) to append that path 09:30:49 [ed] ... you might want to animate and reuse and so on 09:31:16 [ed] CM: how would you get arcs? 09:31:30 [ed] ... with the procedural api 09:31:44 [ed] DS: you can't, you'd get cubics back 09:32:10 [ed] Doug: everybody hates the arc syntax in svg 09:32:50 [ed] ... with catmull-rom we thought to translate that into beziers anyway 09:33:05 [ed] ... you should be able to find out what it converted to 09:33:35 [shepazu] shepazu: and you could chain commandas 09:33:46 [shepazu] s/commandas/commands/ 09:33:50 [ed] CM: would like the native api to be able to create the native path commands, like arc, catmull-rom etc 09:34:58 [ed] ... so if I create an arc with the API and put it into an svg <path> element that the arc is still an arc 09:35:12 [ed] DS: but then the mapping isn't 1:1 09:36:30 [ed] CM: you can't tweak everything afterwards if it's normalized 09:37:36 [ed] ... but the arc syntax might be different betweent the api and the svg commands 09:39:20 [ed] doug: there should be a way to get the non-normalized path out, you should be able to get both that and the normalized variant 09:40:44 [ed] TA: we want to normalize (which needs to be specified) lineto,moveto, cubic beziers and close path 09:41:09 [ed] CM: i want to be able to say .arc and have that turn into arc command 09:41:20 [ed] TA: why? the syntax is different 09:42:44 [ed] CM: if we have the nicer path syntax in svg then we could have a direct mapping 09:43:31 [ed] TA: so taking the path commands and adding it to svg, plus extending the api? 09:43:47 [andreas] andreas has joined #svg 09:44:12 [ed] CM: as long as it's possible procedural things and get the actual path commands 09:45:50 [ed] (discussion on spec stability) 09:47:53 [ed] TA: the path object stringifies to a normalized path string 09:48:21 [ed] CM: how many beziers does an arc get turned into? 09:48:26 [ed] TA: needs to be specified 09:49:52 [ed] RESOLUTION: we want to add a stringifier on the path object that returns a string using normalized svg path syntax 09:52:25 [ed] RESOLUTION: svg path elements gains a addPath method that appends path to the path 09:52:53 [ed] s/appends path to the path/appends path object to the path 09:53:07 [ed] CM: does canvas paths need to start with a moveto? 09:53:22 [ed] TA: not required, starts at 0,0 09:53:37 [ed] ... some things start implicit subpaths 09:53:48 [ed] ... e.g the rect command 09:55:09 [heycam] String(new Path("M … A …")) 09:55:29 [heycam] normalizes the path back to an SVG path segment list 09:56:58 [ed] RESOLUTION: it will be possible to normalize explicitly and stringify the result 09:57:07 [TabAtkins] (new Path().addPath("M... A...")+'' 09:57:13 [TabAtkins] (new Path()).addPath("M... A...")+'' 09:58:03 [ed] RESOLUTION: there will be a method that normalizes any svg shape into a path object 09:59:26 [ed] BB: so adding the canvas api methods to the svgpath elements 09:59:43 [ed] TA: if we put it on the path element id' expect it to be normalized 09:59:43 [birtles] s/svgpath elements/svgpathseglist/ 10:00:02 [ed] ... but if we put it on the svgpathseglist then i'd expect non-normalized 10:00:28 [ed] CM: where you put the interface is just baout how much you want to help the authors 10:00:54 [ed] TA: so seglists could be smarter 10:00:59 [ed] CM: seems confusing 10:01:34 [ed] BB: i do want a simple api, but avoiding duplication is maybe better 10:01:53 [ed] CM: e.d.arcTo(...) 10:02:02 [ed] CM: e.arcTo(...) 10:02:26 [ed] ... think the second one is not necessary 10:02:42 [ed] RC: if you do addPath it works 10:03:09 [ed] CM: i'd like to be able to set the svg path from a string directly 10:03:45 [ed] BB: think it should be on the seglist interface 10:04:26 [ed] CM: so adding addPath to the svgpathseglist 10:04:40 [ed] DS: i think it should be on the path element 10:05:08 [ed] CM: think it makes sense to have all path manipulations on the svgpathseglist interface 10:06:08 [ed] DS: what's on the svgpathseglist api now? 10:06:14 [ed] TA: path manip methods 10:06:21 [ed] DS: doesn't quite fit tehre IMO 10:06:50 [ed] CM: BB thinks it'd be more useful to have retained pathseglists 10:07:03 [ed] ... without having it attached to a path element 10:07:05 [ChrisL] ChrisL has joined #svg 10:08:02 [ed] ... so the worry is that we'll have two such path object representations (svgpathseglist and the path object) 10:09:20 [ed] BB: who's going to revise the path apis in svg? 10:09:28 [ed] CM: i think i have an action to do that 10:09:51 [ed] ... new SVGPathSegList(canvaspath) 10:10:34 [ed] Andreas: does addPath start a new subpath? 10:10:59 [ed] Doug: you can use clear to clear out the path so that you can have a blank canvas to draw on 10:11:56 [ed] ... addPath will add a new moveto, but what if you want to add commands to that? 10:12:08 [ed] ... to a existing path 10:12:25 [ed] CM: you could stringify and strip out the movetos 10:12:58 [ed] Doug: couldn't we just have addSegment to just append/continue the path? 10:13:56 [ed] ... could we make addSegment strip out the implicit path API moveto? 10:14:58 [ed] ... usecase: to append segments to an existing path 10:15:31 [ed] TA: you don't know if the moveto was explicit or implicit 10:15:45 [ed] ...due to normalization 10:16:41 [ed] TA: i want addPath and extendPath 10:16:59 [ed] ... extendPath cuts off the first moveto 10:18:43 [ed] RESOLUTION: add extendPath - which acts as addPath but trims off the initial moveto 10:19:16 [ed] doug: so are we adding arcTo? 10:20:04 [ed] ... and what do we do with catmull-rom? 10:21:07 [ed] CM: these new commands should be on the canvas path api so both can use them? 10:21:14 [ed] doug: yes, sounds useful for both 10:22:36 [ed] RESOLUTION: add new procedural methods for catmull-rom and add canvas-like arc commands in svg path syntax 10:23:10 [ed] CM: there are arc and arcto, and ellipse 10:23:47 [ed] TA: arc and ellipse use startangle,endangle 10:25:45 [ed] RESOLUTION: add a 'd' property to the svg path element for accessing the pathseglist 10:26:26 [ed] BB: allow svgpathseglists to be created independent of the svg path element 10:27:06 [ed] ... allow assigning a pathseglist object ot the path element (pathelm.d = seglist) 10:27:52 [ed] doug: should have a json serialization of the path, in addition to the stringification 10:28:25 [ed] CM: someone has requested toJSON for passing objects around, e.g for web workers 10:28:40 [ed] BB: to be able to set and get an array of floats 10:28:57 [ed] ... you already have normalization for moveto, lineto, closepath 10:29:46 [ed] ... faster with float arrays than to use pathseglist 10:31:02 [ed] CM: so, stringifier, jsonifier, and pointifier? 10:31:16 [ed] BB: there are a number of ways you could do this 10:31:39 [ed] ... there are libs that work on arrays of points directly 10:33:49 [ed] ... you're really working on arrays of arrays 10:34:14 [ed] CM: for subpaths you could flag it somehow 10:34:57 [ed] TA: boolean at the end? 10:35:24 [ed] BB: needs to be there when you set it back again 10:35:51 [ed] CM: alternative is to have a function isSubpathClosed, and pass in the subpath 10:36:20 [ed] BB: that's a bit less flexible 10:37:11 [ed] ... want to just read out the point array, manipulate and set it back 10:37:54 [ed] TA: so you have an array of points per subpath 10:38:16 [ed] BB: and you have array of subpaths, which each have an array of points 10:39:07 [ed] TA: so defer until we get some feedback on this, on the list? 10:39:44 [ed] CM: the json thing then... 10:40:08 [ed] TA: not quite ready to resolve on that yet, but similar to the array one, should probably be discussed on the list 10:40:25 [ed] -- 1h break for lunch -- 11:58:01 [stakagi] stakagi has joined #svg 11:59:17 [TabAtkins] TabAtkins has joined #svg 11:59:57 [birtles] birtles has joined #svg 12:00:12 [birtles] scribenick: birtles 12:01:04 [birtles] topic: new arc command 12:01:32 [birtles] heycam: problem is the existing arc is unintuitive and you often want to animate the angles of arc rather than the endpoints 12:01:49 [birtles] ... it's really hard with declarative animation 12:01:54 [nikos] nikos has joined #svg 12:01:55 [birtles] ... you have to do all the trig yourself 12:02:11 [birtles] ... which of the canvas arc commands let you specify the angle? 12:02:53 [birtles] TabAtkins: arc and ellipse are basically the same... 12:03:30 [birtles] ... arc(x, y, radius, startAngle, endAngle, acw) 12:03:42 [birtles] ... coordinates are absolute 12:03:56 [birtles] ... ellipse is the same but the radius has x and y and a rotation argument 12:04:07 [birtles] ... arc2 is a separate command that takes... 12:04:16 [birtles] s/arc2/arcTo/ 12:04:51 [birtles] heycam: is the implicit start point on the circle? 12:05:40 [birtles] cabanier: no, it automatically draws a line to the start of the arc 12:05:51 [birtles] TabAtkins: arcTo is a little more interesting... 12:06:09 [birtles] ... it does a straight line segment but it provides C1 continuity 12:06:33 [birtles] ... arcTo(x1, y2, x2, y2, radius) 12:07:25 [birtles] ... (draws a diagram explaining how arcTo works) 12:07:49 [birtles] shepazu: I'd like to add a rounding algorithm to SVG based on this mechanism 12:08:17 [birtles] TabAtkins: there is no rounded rect, you just use four arcTo commands 12:08:27 [birtles] ... there are the two arc commands in canvas 12:08:56 [birtles] krit: but are we running over the letters in the alphabet in the SVG path syntax? 12:09:10 [birtles] TabAtkins: but we could allow identifiers that are more than a single letter 12:09:51 [birtles] shepazu: we could just say "arc" 12:10:21 [birtles] heycam: is it useful to have both? 12:10:27 [birtles] everyone: yes 12:10:46 [birtles] TabAtkins: we could just have "arc" and "arcTo"? 12:10:55 [birtles] heycam: are you sure we don't want to use single letters? 12:11:05 [birtles] ... what about "e"? 12:11:06 [glenn] glenn has joined #svg 12:11:14 [birtles] krit: that conflicts with scientific notation 12:11:52 [birtles] (which is now in CSS by the way) 12:12:04 [birtles] TabAtkins: I don't think we can continue with single letters 12:12:16 [birtles] heycam: what about leading punctuation? 12:13:02 [birtles] ChrisL: we've talked about having longhand before 12:13:10 [birtles] shepazu: it's also better for compression 12:13:47 [birtles] (talking about using expanded elements for path commands) 12:14:21 [birtles] heycam: so use "arc" and "arcTo" as the commands? 12:15:10 [birtles] TabAtkins: I'd be inclined to call them "circle", "ellipse" and "arcTo" 12:15:38 [birtles] ... so that we can use "arc" later as a longform for the existing arc command 12:16:02 [birtles] heycam: it might be more important to match the command name with the method 12:16:12 [birtles] ... rather than accommodating the existing arc command 12:16:19 [birtles] ... we could give it some other name in the future 12:16:36 [birtles] ChrisL: do we want to deprecate the existing arc command? 12:16:42 [birtles] TabAtkins: no, it's sometimes useful 12:16:50 [birtles] ChrisL: ok 12:17:14 [andreas] andreas has joined #svg 12:17:40 [birtles] TabAtkins: ok, let's keep consistency with the canvas method names for now 12:17:59 [birtles] ChrisL: we should have a resolution about whether we want the longhand form or not 12:18:21 [birtles] TabAtkins: need to decide priorities (when you have both) 12:18:44 [cyril] cyril has joined #svg 12:18:57 [birtles] heycam: it seems like a lot of duplication when sometimes it's probably easier just to separate out paths of the path string, rather than having 10 new elements 12:19:08 [birtles] ed: like having a fragment of the path as a separate element 12:19:17 [birtles] ChrisL: I think we'll probably have that too 12:19:33 [birtles] ... but we've often been asked for the verbose form 12:21:27 [birtles] ... the two big advantages are (a) better compression, (b) adding IDs / event handlers to specific parts 12:22:04 [birtles] (discussion about whether we could stroke sections differently) 12:22:37 [birtles] krit: what about adding length (units) to the path data 12:23:06 [birtles] ChrisL: when we originally considered that there was feedback that said that was really difficult 12:23:40 [birtles] heycam: what about percentages? 12:23:49 [birtles] krit: percentages are difficult, just lengths would be enough 12:23:54 [birtles] heycam: percentages would be useful 12:24:12 [birtles] TabAtkins: what measure do you resolve against for a diagonal line 12:24:22 [birtles] heycam: depends which coordinate of the command 12:24:40 [birtles] ... if it's the y coord it is resolved against the height 12:25:06 [birtles] ... since unit identifiers are currently more than one letter there shouldn't be clashes with the path commands? 12:25:24 [birtles] shepazu: using units in paths has often been requested 12:25:38 [birtles] ... sometimes this is because people don't understand how to set units at the root 12:25:46 [birtles] ... but percentages are often valid 12:26:54 [birtles] krit: when are percentages are resolved? 12:27:09 [birtles] TabAtkins: if it's created in JS, what do you resolve the percentages against? 12:27:20 [birtles] heycam: it's similar to creating SVGLength objects 12:27:29 [birtles] ... what do they get resolved against? 12:27:41 [birtles] ... using createSVGLength 12:28:05 [birtles] ... we never really resolved how that should work 12:30:25 [birtles] RESOLUTION: We will add arc, ellipse, and two forms of arcTo to the SVG path syntax based on the canvas methods of the same names 12:31:17 [birtles] TabAtkins: the only remaining command on canvas that's not available with the d attribute is "rect" 12:32:01 [birtles] ChrisL: what do we need to spec out with regards to that resolution 12:32:08 [birtles] heycam: DOM API etc. 12:32:49 [birtles] TabAtkins: it would be nice to make the "d" attribute of the SVGPathElement return a sane version of SVGPathSegList 12:33:09 [birtles] ... (and not just an alias of pathSegList) 12:33:34 [birtles] krit: does this work for repeated commands? 12:33:50 [birtles] ... since the number of arguments to arcTo varies 12:34:01 [birtles] TabAtkins: arcTo comes in a 5 arg and a 7 arg version 12:34:12 [birtles] ChrisL: we could solve it by giving them different names 12:34:24 [birtles] ... or just say you have to repeat the command 12:35:25 [birtles] TabAtkins: what if we have ellipseTo 12:35:40 [birtles] ChrisL: and just document what ellipseTo equates to in canvas terms 12:35:55 [birtles] ... then you wouldn't have to have an exception where these commands can't be repeated 12:36:48 [birtles] heycam: we should look into adding ellipseTo to canvas (as an alias to the equivalent arcTo command) 12:36:56 [birtles] TabAtkins: I'll propose it 12:37:45 [birtles] ACTION: Tab to propose to the HTML WG that the 7 argument form of arcTo be replaced with an ellipseTo method 12:37:45 [trackbot] Created ACTION-3355 - Propose to the HTML WG that the 7 argument form of arcTo be replaced with an ellipseTo method [on Tab Atkins Jr. - due 2012-09-24]. 12:38:13 [birtles] (this is because the elliptical version of arcTo is not implemented anywhere so we are still free to change the name) 12:38:51 [birtles] heycam: I'll try to write this up in the spec since my name is down next to this item in the requirements list 12:39:10 [birtles] ChrisL: what about lengths and percentages in paths? 12:39:29 [birtles] krit: I'm still not sure about how you resolve percentages 12:40:38 [birtles] TabAtkins: let's defer percentages to further discussion on the list and just stick to lengths for now 12:40:44 [birtles] heycam: em units still need a context 12:40:56 [birtles] TabAtkins: you can resolve against a default font size 12:41:01 [birtles] heycam: what about calc? 12:41:09 [birtles] TabAtkins: it's not problematic by itself 12:41:27 [birtles] ... only if one of its components is a length we can't resolve 12:41:45 [birtles] heycam: more discussion is needed on lengths 12:42:05 [birtles] ... particularly for what to do when you don't have a context for resolving against 12:42:10 [birtles] ... what about points on a polyline? 12:42:17 [birtles] krit: yes, since we have that for shapes in CSS 12:43:05 [birtles] ACTION: Dirk to tell the CSS WG that we changed the SVG path syntax 12:43:06 [trackbot] Created ACTION-3356 - Tell the CSS WG that we changed the SVG path syntax [on Dirk Schulze - due 2012-09-24]. 12:43:43 [birtles] ACTION: Dirk to prepare a proposal for supporting lengths/percentages in paths and polylines 12:43:43 [trackbot] Created ACTION-3357 - Prepare a proposal for supporting lengths/percentages in paths and polylines [on Dirk Schulze - due 2012-09-24]. 12:43:56 [birtles] ChrisL: what about element syntax for paths? 12:44:14 [birtles] TabAtkins: I think it's useful 12:46:06 [birtles] ACTION: Chris to produce a proposal for expanded element syntax for paths (including finding the results of testing improved compression ratios with the expanded syntax) 12:46:07 [trackbot] Created ACTION-3358 - Produce a proposal for expanded element syntax for paths (including finding the results of testing improved compression ratios with the expanded syntax) [on Chris Lilley - due 2012-09-24]. 12:47:12 [birtles] cyril: I think it's good to be able to break paths into fragments but not necessarily an element per point 12:47:23 [birtles] ChrisL: it wouldn't be per point but per command 12:48:05 [birtles] heycam: sometimes you don't want to go down to individual commands but just fragments 12:48:16 [birtles] ... it would be nice to use the same mechanism for that 12:48:52 [birtles] shepazu: it would be nice to refer to segments and include them reversed etc. 12:51:34 [birtles] ChrisL: if you have multiple paths and you want to combine those to make one path you sometimes need to reverse a segment 12:51:39 [birtles] shepazu: it would be nice to be able to get the geometry of the reversed version 12:54:21 [birtles] cyril: if we have elements for each command, you'd end up with a lot of DOM nodes 12:54:29 [birtles] ... is it worth having the parse collapse them? 12:54:50 [birtles] TabAtkins: no, you don't want the parser doing that kind of special magic 12:54:54 [birtles] everyone: no magic 12:55:18 [birtles] andreas: what about polar coordinates? 12:55:36 [birtles] heycam: we rejected the request for polar coordinates in transforms 12:56:51 [birtles] ... in place of that we have proposed the turtle graphics which solves many of the cases but not all 12:57:17 [birtles] ChrisL: it reminds me of the syntax used in Raphael which provides a SVG path-like syntax for describing transforms 12:58:30 [birtles] krit: polar coordinates are definitely useful... 12:58:44 [birtles] ... but then the whole coordinate system should be in polar coordinates 12:58:50 [birtles] ... otherwise you have to map them 13:00:27 [birtles] andreas: sometimes you have a series of survey results that are best described using polar coordinates, like a cave 13:00:34 [birtles] ... everything is relative to the last position 13:01:12 [birtles] ... it's typically a series of straight line segments and rotations 13:01:29 [birtles] heycam: that should be possible using the turtle graphics command 13:02:08 [birtles] ... but polar coordinates in general are difficult and we rejected that requirement 13:02:43 [birtles] (description of turtle graphics proposal, the 'r' and 'R' commands) 13:04:15 [birtles] (now talking about Catmull-Rom curves) 13:04:47 [birtles] shepazu: adding a segment affects the previous segment 13:05:06 [birtles] ... and you need two endpoints 13:05:33 [birtles] ... so if you start with a P command (the Catmull-Rom command) you can't draw the curve until you get a second point 13:06:51 [birtles] ChrisL: with regards to the issue of segments affecting the previous segment, if you duplicate the points of the last segment (i.e. a zero-length segment which degenerates to a straight line segment) it won't affect the previous segment 13:07:16 [cabanier] cabanier has joined #svg 13:07:47 [birtles] topic: <image> as paint server for SVG (already resolved to have <gradient>) 13:08:07 [birtles] krit: we already allow <gradient> 13:08:15 [birtles] ... and we resolved how it applies to path elements 13:08:25 [birtles] ... and it's not limited to gradient box (as defined in CSS) 13:09:11 [birtles] TabAtkins: in CSS gradients are infinite 13:09:16 [birtles] ... but they get chopped to their box 13:09:26 [birtles] ... but SVG can define gradients so that extend to infinity 13:09:36 [birtles] ... but other image types can't be trivially extended in the same way 13:10:01 [birtles] ChrisL: in SVG 2 we could also have a painted bounding box (aka decorated bounding box) 13:10:09 [birtles] ... we could use that instead 13:10:30 [birtles] shepazu: i.e. use that instead of the gradient bounding box 13:10:56 [birtles] cyril: but with an image you can say that it only fills 50% of the box 13:11:01 [birtles] ... how do you fill the rest of the object? 13:11:06 [birtles] krit: not at all 13:11:11 [birtles] ChrisL: transparent black 13:11:36 [birtles] TabAtkins: CSS lets you specify repetition etc. 13:11:51 [birtles] ... if we wanted to do that in SVG we'd have to make fill etc. a shorthand 13:12:20 [birtles] ... or we could specify those properties when specifying a paint server (e.g. a pattern) 13:12:33 [birtles] ChrisL: there's another consequence of this painted bounding box 13:12:37 [glenn] glenn has joined #svg 13:12:59 [birtles] ... previously if you had a gradient on a horizontal line you'd end up with nothing unless you special case it 13:13:09 [birtles] ... but if you use the painted bounding box you can get around that 13:13:32 [birtles] krit: what if you have a rectangle that you stroke 13:13:42 [birtles] ... you stroke it with an image 13:13:57 [birtles] ... if you use the geometric bounding box only half the stroke gets painted 13:14:50 [birtles] TabAtkins: what about backwards compatibility about using the painted bounding box 13:15:28 [birtles] shepazu: you'd use the geometric bounding box for existing SVG gradient elements, and the painted bounding box for CSS gradients 13:16:10 [birtles] (discussion about providing a new keyword like objectBoundingBox when defining an SVG gradient so it can use the painted bounding box) 13:16:47 [birtles] TabAtkins: that's important since we also want to be able to use SVG gradients in CSS 13:17:09 [birtles] (discussion about the definition of the decorated bounding box) 13:17:56 [birtles] TabAtkins: when using a CSS gradient function in SVG, do we want stroke to use the painted bounding box and the fill to use the geometric? 13:18:05 [birtles] ChrisL: I'd like to have the flexibility to use both 13:18:21 [birtles] TabAtkins: I think fill should default to geometric and stroke to painted 13:18:48 [birtles] TabAtkins: in summary, we want to expose the ability to switch between geometric and painted bounding boxes 13:19:08 [birtles] ... we will add an optional keyword to the fill/stroke properties to allow switching between the two 13:19:23 [birtles] ... with appropriate defaults for each (specifically fill = geometric, stroke = painted) 13:19:31 [birtles] heycam: what about markers? 13:21:44 [birtles] ... are they included in the calculation of the decorated bounding box (as they are in the DOM method)? it might be less useful when stroking to include markers 13:21:52 [birtles] ... it needs more thought 13:23:52 [birtles] TabAtkins: for stroke, the default is the painted bounding box 13:24:23 [birtles] ... but when referring to an SVG gradient element it will defer to an attribute on the gradient specifying which bounding box to use 13:24:35 [birtles] ... and *that* will default to the geometric bounding box for backwards compatibility 13:24:40 [birtles] ... just like we're doing with maskign 13:24:46 [birtles] s/maskign/masking/ 13:25:13 [birtles] ... for the more general issue of the CSS <image> type... 13:25:38 [birtles] ... if you draw a gradient using the geometric bounding box 13:25:56 [birtles] ... it will size itself using the inner bounding box but it still can extend beyond that box 13:26:13 [birtles] ... the gradients are defined such that they extend infinitely 13:26:18 [birtles] ... then CSS just clips that result 13:26:26 [birtles] ... but SVG might not do that 13:26:59 [birtles] ... what do we do outside the box for other CSS image types 13:27:06 [birtles] ... do we just paint them as transparent black? 13:27:38 [birtles] (other CSS image types being all those other than gradient functions) 13:27:49 [birtles] ... if you want it to repeat, put it in a pattern and use that 13:29:15 [birtles] ... CSS just defines that outside the box you're transparent black unless you use background properties to repeat the image 13:30:08 [birtles] heycam: the alternative is to introduce background-repeat etc. into SVG 13:30:17 [birtles] ... and I'd rather not do that 13:30:28 [birtles] TabAtkins: me too, use patterns for repeating 13:31:54 [glenn] glenn has joined #svg 13:32:45 [birtles] RESOLUTION: We will add a control to fill and stroke to determine which bounding box (geometric or decorated) to use for sizing paint servers 13:33:13 [cyril] for sizing and positioning too 13:33:23 [birtles] RESOLUTION: We will add attribute to the existing paint servers in SVG defaulting to the geometric bounding box (like the maskType attribute) 13:34:09 [birtles] RESOLUTION: When using CSS image types that are finite in extent are expanded to infinity by using transparent black (not by repeating the result) 13:34:31 [birtles] s/extent are/extent, they are/ 13:35:13 [birtles] ACTION: Tab to amend the definition of fill,stroke in SVG to allow the CSS <image> type 13:35:13 [trackbot] Created ACTION-3359 - Amend the definition of fill,stroke in SVG to allow the CSS <image> type [on Tab Atkins Jr. - due 2012-09-24]. 13:36:23 [birtles] heycam: is there a clash since fill,stroke already take a URL but so does the <image> type 13:36:42 [birtles] ... are the mechanics for how you interpret it different? 13:38:00 [birtles] TabAtkins: it should be ok 13:38:06 [birtles] birtles: it's the same for masks 13:38:24 [birtles] ... you have one behavior when you refer to a whole document (a.svg) vs an element within a document (a.svg#mask) 14:03:41 [cabanier] cabanier has joined #svg 14:03:45 [TabAtkins__] TabAtkins__ has joined #svg 14:03:51 [andreas] andreas has joined #svg 14:03:59 [TabAtkins__] ScribeNick: TabAtkins__ 14:04:09 [TabAtkins__] Topic: color interpolation filters for shorthands 14:04:27 [TabAtkins__] krit: Currently we have the color-interpolation property for all filters. 14:04:32 [TabAtkins__] krit: How do we apply it to the shorthand functions? 14:04:56 [TabAtkins__] heycam: Would you normally want to apply it to specific filter primitives? 14:05:33 [TabAtkins__] chrisl: In general you want to do linear, but there are cases where you want to stay in the sRGB as you pass values between, to avoid loss. 14:05:58 [TabAtkins__] krit: For shorthands, we're okay with just using the default colorspace. If you want different shorthands, use the SVG <filter> element. 14:06:31 [nikos] 14:06:34 [krit] 14:07:19 [ChrisL] ChrisL has joined #svg 14:07:24 [TabAtkins__] krit: [lists the shorthand functions] 14:09:20 [TabAtkins__] krit: So, first question, are we okay with saying that the filter shorthands just use the default colorspace? 14:10:11 [TabAtkins__] RESOLVED: Filter shorthands only use the default colorspace (*not* the current value of 'color-interpolation-filters' on the element, either). 14:11:05 [TabAtkins__] krit: Note, obviously, that you can use an SVG <filter> (with color-interpolation set on the <filter> element or the subfilters) and then reference it with url(). 14:11:38 [TabAtkins__] Topic: Perlin noise 14:12:05 [TabAtkins__] krit: Do we want to add a new type of noise function that is easier to hardware-accelerate? 14:12:18 [TabAtkins__] ChrisL: In addition to existing turbulence, or replacement? 14:12:47 [TabAtkins__] TabAtkins__: Addition - too much content already relying on it. 14:12:55 [ChrisL] ok good 14:12:58 [TabAtkins__] ChrisL: Is there a definition for the algorithm that's free? 14:13:21 [TabAtkins__] krit: We need to decide on the new algorithm, with discussion or further research. 14:13:28 [TabAtkins__] ed: I think we discussed Simplex noise before. 14:14:03 [ChrisL] 14:14:29 [ChrisL] Noise Hardware - Ken Perlin 14:14:47 [TabAtkins__] RESOLVED: Add a new type of noise algorithm to the filters spec, for easier hardware acceleration. (Further research for what type, possibly Simplex noise.) 14:16:11 [TabAtkins__] Topic: Need a new filter shorthand for noise? 14:16:31 [TabAtkins__] krit: Often the noise functions are not used standalone - they're used with other filter primitives. 14:17:21 [ChrisL] glsl implementation of Perlin and simplex noise 14:20:33 [ed] here's an example using a bit of turbulence:öm.net/svg/presentations/svgopen2012/presentation/introimage-static-grain-toothpaste.svg 14:20:34 [TabAtkins__]. 14:22:13 [TabAtkins__] ChrisL: I've come across an impl of both classic and simplex noise in glsl, and it says explicitly that both algorithms can be done fine even on low-end (e.g. phone) GPUs. 14:22:55 [TabAtkins__] ChrisL: So why is this a problem that we need to solve? 14:23:24 [TabAtkins__] krit: Based on list discussion, Simplex is a smaller O() complexity, which is of concern with the often-large images that will be used in CSS. 14:23:25 [konno_] konno_ has joined #svg 14:23:33 [ChrisL] ok 14:24:25 [ChrisL] demo 14:25:10 [heycam] ScribeNick: heycam 14:25:39 [ed] the demo chrisl pasted demos 3d-perlin noise I believe 14:25:39 [heycam] TabAtkins__: the filter function in filter effects is of type CSS <image> 14:25:48 [heycam] s/TabAtkins__/krit/ 14:25:56 [TabAtkins__] s/filter function/filter() function/ 14:26:21 [heycam] TabAtkins__: so you can introduce a noise filter shorthand and then use it in the filter function 14:26:40 [TabAtkins__] s/TabAtkins__/krit/ 14:26:54 [heycam] TabAtkins__: my argument is that the filter function as written still takes an <image> as an argument 14:26:56 [heycam] krit: that would change 14:27:01 [heycam] TabAtkins__: if you make that optional it's less troublesome 14:27:07 [heycam] ... I think it's weird that we have no-input filters 14:27:09 [cabanier] filter function: 14:27:12 [heycam] … I think that was a hack in the original SVG 14:27:29 [heycam] … and instead just said "these are not paint servers but some filter primitives" 14:27:45 [heycam] … I am somewhat opposed to extending that confusion into the CSS version of the syntax 14:27:52 [heycam] … if we are producing an image, I would want to produce an <image> directly 14:28:02 [heycam] krit: then you can put this <image> as input to the filter function 14:28:13 [heycam] ed: if you want the same number of parameters that the SVG turbulence has? 14:28:18 [heycam] TabAtkins__: I don't know what's needed 14:28:53 [heycam] TabAtkins__: you would have "filter: noise(…) ..." 14:28:59 [heycam] krit: wouldn't the parser get confused? 14:29:03 [heycam] TabAtkins__: no 14:29:11 [heycam] … the filter function takes the first argument is an image, the next is filter list 14:29:40 [heycam] s/filter: noise(…) .../background-image: filter(noise(…), …)/ 14:29:55 [heycam] krit: you could use this as an input to custom filter primitives too 14:30:03 [heycam] TabAtkins__: so this seems fine to me 14:30:07 [heycam] krit: do we still want to have a shorthand noise function? 14:30:15 [heycam] … and is that in Images or Filter Effects? 14:30:28 [heycam] TabAtkins__: I don't think we do, because it's only a generation 14:30:34 [heycam] … and we can't do tree-based filter chains in the filter property 14:30:51 [heycam] … if we do do that, we can allow <image>s as well as inputs to compositing filters for example 14:31:40 [heycam] … I think making feTurbulence a filter primtiive and not a paint server was a mistake 14:31:56 [heycam] krit: in the end it probably doesn't matter, but yes where do we put it logically 14:32:05 [heycam] TabAtkins__: we should only allow pass through filters in the filter property 14:32:11 [heycam] krit: I agree 14:32:37 [heycam] ed: would we have a corresponding element in SVG filters? 14:32:42 [heycam] … we have feTurbulence 14:32:46 [heycam] krit: deprecate but not drop it 14:32:55 [heycam] TabAtkins__: for filters do we want to correct this historical mistake? 14:33:03 [heycam] … you can't just take a paint server directly, you need to specify bounds to fill 14:33:13 [heycam] krit: we could use feImage taking CSS <image> as well 14:33:18 [heycam] ed: I think it would be handy to have in SVG filters too 14:33:59 [heycam] ed: I don't mind if it's a new primitive or an 'image' referenced from <feImage> 14:34:10 [heycam] ed: a new in="" value? 14:34:18 [heycam] TabAtkins__: you just need to combine a paint server with a rect 14:34:36 [heycam] krit: in general, we need to redefine in and in2 14:34:45 [heycam] … to have CSS <image>s as input in there is fine 14:35:58 [heycam] TabAtkins__: you can't use colors in there 14:36:04 [heycam] .. because "blue" might be a name of a result 14:36:22 [heycam] … the <image> function in CSS can produce solid colors 14:36:27 [heycam] … image(blue) 14:36:44 [Cyril] Cyril has joined #svg 14:36:57 [heycam] krit: feImage still has subregion clipping 14:37:49 [heycam] … you could use media fragments for selecting the region, but there's no way to do preserveAspectRatio 14:38:01 [heycam] krit: a question about the noise function, how do you define the size of it? 14:38:05 [heycam] TabAtkins__: I would do the same as gradients 14:38:13 [heycam] … it's sized into the box it's drawn into 14:38:19 [heycam] … it has no intrinsic size 14:38:27 [heycam] … in SVG's case, it can still do an extension out to infinite case 14:38:40 [heycam] ed: one complication is having tiling 14:38:50 [heycam] … if you tile the noise function without stitchTiles you can see the tile edges 14:39:40 [heycam] TabAtkins__: any primitive tiling mechanisms will cause discontinuities at the edges of tiling 14:39:47 [heycam] … unless you tell the noise algorithm to stitch the edges 14:39:56 [heycam] ChrisL: we shouldn't be padding noise out, just which region we really want to cover 14:40:05 [ed] 14:41:12 [heycam] krit: primitives are always clipped to the filter region 14:41:19 [heycam] … if you have feOffset you can reach the edge of the noise function 14:41:42 [heycam] TabAtkins__: we should just generate noise at whichever pixels are going to be touched 14:42:48 [heycam] ACTION: Dirk to look into allowing CSS <image> values in in="" and in2="" 14:42:48 [trackbot] Created ACTION-3360 - Look into allowing CSS <image> values in in="" and in2="" [on Dirk Schulze - due 2012-09-24]. 14:43:00 [heycam] ACTION: Tab to work with Dirk to spec out a noise() <image> value 14:43:00 [trackbot] Created ACTION-3361 - Work with Dirk to spec out a noise() <image> value [on Tab Atkins Jr. - due 2012-09-24]. 14:43:37 [heycam] TabAtkins__: some things need the size you're drawing into 14:43:50 [heycam] … CSS gradients need to know how big their box is before drawing 14:44:05 [heycam] … media fragments don't apply to most CSS <image>s, just url() images 14:45:13 [heycam] ed: for SVG I would like to have 3D noise and to be able to connect the time domain to the third dimension 14:45:18 [heycam] … so you can have continuous effects 14:45:29 [heycam] … with feTurbulence it can move things in x and y, but you can't really have a fire/plasma effect 14:45:34 [heycam] krit: I'd suggest using shaders for that 14:45:38 [heycam] ed: you couldn't implement it that way 14:45:45 [heycam] …but it would be nice to have filter primitives for that 14:45:52 [heycam] krit: do we really want 3d noise? maybe, don't know. 14:45:57 [heycam] … is simpled 2d or 3d? 14:46:05 [heycam] ed: you can extend it to how many ever dimensions 14:46:13 [heycam] … I think that would be really nice to have 14:46:24 [heycam] … I want to allow that when we go with <image> generation, so we can animate the timing 14:46:57 [heycam] TabAtkins__: the CSS <image> type is now animatable, in Images 4 14:47:02 [heycam] … there's a generic animation type that does a cross-fade 14:47:10 [heycam] … but cross-fade and gradients animate argument-wise 14:47:16 [heycam] … so that's fine to have animation of noise() 14:47:34 [heycam] ed: Chris' link earlier shows the 3d noise animation 14:49:34 [heycam] shepazu: when we're talking about stitching, does this also apply to Tiling & Layering? 14:49:54 [heycam] TabAtkins__: just if you're using a noise function of a certain noise, or inside a pattern element that ends up being repeated, you need to tell the algorithm to make sure the edges tile well 14:51:09 [heycam] Topic: Masking issues 14:51:16 [heycam] krit: we said we wanted to have attribute maskType for <mask> 14:51:31 [heycam] … that says you specifically want to use this mask with alpha or luminance 14:51:35 [heycam] … do we want to make this a presentation attribute 14:51:55 [heycam] ed: why would you want to? 14:52:06 [heycam] krit: we already define mask-type for CSS masking 14:52:44 [heycam] heycam: so this would just use the same attribute/property on both the <mask> and the thing being masked 14:52:45 [krit] 14:52:45 [heycam] krit: yes 14:53:05 [heycam] krit: second question is if you use maskType="" as we currently defined it, it's camel case 14:53:15 [heycam] … which HTML editors don't want, because they need to special case it for the parser 14:53:21 [heycam] … I have no problems if they need to special case it 14:53:52 [heycam] ChrisL: the reason we had camel case was we originally had hyphenation, and the DOM people requested camel casing 14:54:42 [heycam] shepazu: if I'm converting from a property name to a DOM name, removing the hyphen is trivial, adding the camel-cased letter is kind of trivial with regex, but... 14:54:44 [heycam] krit: still trivial 14:54:50 [heycam] … the only problem is HTML is not case sensitive 14:55:02 [heycam] shepazu: could we go to all lower case? 14:55:56 [heycam] TabAtkins__: you would need to define what happens if you accept both camel case and lower case 14:56:05 [heycam] TabAtkins__: I'm fine with all lower case or keeping camel case in attributes 14:56:09 [heycam] … updating the list isn't a big deal 14:56:37 [heycam] … given every implementation just has a list of attribtues with cases, it's easy to update 14:57:42 [heycam] heycam: I'm slightly concerned that the case in the DOM changes after the parser is updated 14:58:16 [heycam] TabAtkins__: that's only a problem if people try to use the feature before implementations add the feature 14:58:32 [heycam] … you could define that with conflicting case names, just use the last one … that's what the HTML parser does 14:59:46 [heycam] TabAtkins__: maybe we should keep camel case, because when you are using the property with CSS, you use camel case in the DOM 15:00:08 [heycam] heycam: have the presentation attribute be camel case for a hyphenated property name? 15:01:22 [heycam] TabAtkins__: dashes are hard for the DOM, but we could accept hyphens too 15:02:09 [heycam] krit: it's easy in WebKit to have the dashed version of the presentation attribute 15:02:13 [heycam] …we just pass that to the CSS engine 15:02:31 [heycam] heycam: if it's going to become a presentation attribute, it should be mask-type not maskType 15:02:40 [heycam] krit: so do we want it to become a presentation attribute / property? 15:02:47 [heycam] TabAtkins__: I think we do 15:03:17 [heycam] birtles: just wondering when you'd want to set mask-type on <mask> with a style sheet 15:03:27 [heycam] TabAtkins__: you could set all <mask> elements to alpha with a style rule 15:03:28 [heycam] birtles: ok 15:04:36 [heycam] heycam: is it confusing that mask-type means slightly different thinks applied to <mask> and masked elements? 15:08:40 [heycam] TabAtkins__: we could merge in the "alpha" or "luminance" back into mask-image 15:08:44 [heycam] … instead of the separate mask-type 15:08:51 [heycam] … and just use mask-type to apply to <mask> 15:09:45 [heycam] birtles: I'd rather this 15:09:53 [heycam] TabAtkins__: and I think it's fine to think of the alpha-ness of luminance-ness as part of the image 15:11:21 [heycam] [discussion about changes to serizliation and computed values for -webkit-mask] 15:12:10 [heycam] RESOLUTION: mask-type now only applies to <mask>, and the [ alpha | luminance | auto ] goes in the mask-image value 15:12:31 [heycam] krit: next problem is related 15:12:38 [heycam] … mask-image has syntax like background-image 15:12:40 [heycam] … so it clips to a region 15:12:57 [heycam] … we have a predefined clipping regions border-box, content-box and padding-box 15:13:01 [heycam] s/have a/have/ 15:13:15 [heycam] … I would suggest defining for SVG that border-box means painted rectangle 15:13:28 [heycam] heycam: what's the default? 15:13:46 [heycam] TabAtkins__: border-box 15:14:23 [heycam] ed: if you ahve an SVG inline in HTML, you might want the actual border-box of the outer <svg> 15:14:28 [heycam] TabAtkins__: yes it should mean that on the outer <svg> 15:14:30 [heycam] s/ahve/have/ 15:16:14 [heycam] TabAtkins__: padding-box should map to normal bounding box 15:16:18 [heycam] krit: that's what it is currently 15:17:05 [heycam] RESOLUTION: {border-box, content-box, padding-box} should map to {painted bbox, normal bbox, normal bbox} for mask-clip 15:17:17 [heycam] ed: would you ever want to have a box that includes the filters? markers? 15:17:19 [heycam] krit: we need to discuss this 15:17:28 [heycam] … I'd rather no, because we would do this masking on the gpu 15:17:43 [heycam] heycam: markers are already included in border-box 15:21:59 [heycam] TabAtkins__: if you have a drop shadow applied to an element, then using mask-clip will clip that away 15:22:59 [heycam] krit: the old mask property still works the same way 15:23:16 [heycam] … the url() references <mask> and can still have x/y/width/height on it 15:26:18 [shepazu] 15:26:49 [ed] RRSAgent, make minutes 15:26:49 [RRSAgent] I have made the request to generate ed 15:37:26 [cabanier] cabanier has joined #svg 15:42:17 [cabanier] cabanier has joined #svg 15:44:43 [konno_] konno_ has joined #svg 16:04:39 [shepazu] shepazu has joined #svg 16:09:10 [krit] krit has joined #svg 16:15:02 [jet] jet has joined #svg 16:30:31 [jarek] jarek has joined #svg 16:31:18 [Zakim] Zakim has left #svg 16:56:09 [konno] konno has joined #svg 17:32:35 [cabanier] cabanier has joined #svg 18:21:10 [thorton] thorton has joined #svg 19:00:45 [victor] victor has joined #svg 20:56:12 [shepazu] shepazu has joined #svg 21:50:20 [cabanier] cabanier has joined #svg 21:52:27 [krit] krit has joined #svg 23:39:41 [glenn] glenn has joined #svg
http://www.w3.org/2012/09/17-svg-irc
CC-MAIN-2015-14
en
refinedweb
(For more resources related to this topic, see here.) In this modern world of JavaScript, Ext JS is the best JavaScript framework that includes a vast collection of cross-browser utilities, UI widgets, charts, data object stores, and much more. When developing an application, we mostly look for the best functionality support and components that offer it to the framework. But we usually face situations wherein the framework lacks the specific functionality or component that we need. Fortunately, Ext JS has a powerful class system that makes it easy to extend an existing functionality or component, or build new ones altogether. What is a plugin? An Ext JS plugin is a class that is used to provide additional functionalities to an existing component. Plugins must implement a method named init, which is called by the component and is passed as the parameter at the initialization time, at the beginning of the component's lifecycle. The destroy method is invoked by the owning component of the plugin, at the time of the component's destruction. We don't need to instantiate a plugin class. Plugins are inserted in to a component using the plugin's configuration option for that component. Plugins are used not only by components to which they are attached, but also by all the subclasses derived from that component. We can also use multiple plugins in a single component, but we need to be aware that using multiple plugins in a single component should not let the plugins conflict with each other. What is an extension? An Ext JS extension is a derived class or a subclass of an existing Ext JS class, which is designed to allow the inclusion of additional features. An Ext JS extension is mostly used to add custom functionalities or modify the behavior of an existing Ext JS class. An Ext JS extension can be as basic as the preconfigured Ext JS classes, which basically supply a set of default values to an existing class configuration. This type of extension is really helpful in situations where the required functionality is repeated at several places. Let us assume we have an application where several Ext JS windows have the same help button at the bottom bar. So we can create an extension of the Ext JS window, where we can add this help button and can use this extension window without providing the repeated code for the button. The advantage is that we can easily maintain the code for the help button in one place and can get the change in all places. Differences between an extension and a plugin The Ext JS extensions and plugins are used for the same purpose; they add extended functionality to Ext JS classes. But they mainly differ in terms of how they are written and the reason for which they are used. Ext JS extensions are extension classes or subclasses of Ext JS classes. To use these extensions, we need to instantiate these extensions by creating an object. We can provide additional properties, functions, and can even override any parent member to change its behavior. The extensions are very tightly coupled to the classes from which they are derived. The Ext JS extensions are mainly used when we need to modify the behavior of an existing class or component, or we need to create a fully new class or component. Ext JS plugins are also Ext JS classes, but they include the init function. To use the plugins we don't need to directly instantiate these classes; instead, we need to register the plugins in the plugins' configuration option within the component. After adding, the options and functions will become available to the component itself. The plugins are loosely coupled with the components they are plugged in, and they can be easily detachable and interoperable with multiple components and derived components. Plugins are used when we need to add features to a component. As plugins must be attached to an existing component, creating a fully new component, as done in the extensions, is not useful. Choosing the best option When we need to enhance or change the functionality of an existing Ext JS component, we have several ways to do that, each of which has both advantages and disadvantages. Let us assume we need to develop an SMS text field having a simple functionality of changing the text color to red whenever the text length exceeds the allocated length for a message; this way the user can see that they are typing more than one message. Now, this functionality can be implemented in three different ways in Ext JS, which is discussed in the following sections. By configuring an existing class We can choose to apply configuration to the existing classes. For example, we can create a text field by providing the required SMS functionality as a configuration within the listener's configuration, or we can provide event handlers after the text field is instantiated with the on method. This is the easiest option when the same functionality is used only at a few places. But as soon as the functionality is repeated at several places or in several situations, code duplication may arise. By creating a subclass or an extension By creating an extension, we can easily solve the problem as discussed in the previous section. So, if we create an extension for the SMS text field by extending the Ext JS text field, we can use this extension at as many places as we need, and can also create other extensions by using this extension. So, the code is centralized for this extension, and changing one place can reflect in all the places where this extension is used. But there is a problem: when the same functionality is needed for SMS in other subclasses of Ext JS text fields such as Ext JS text area field, we can't use the developed SMS text field extension to take advantage of the SMS functionality. Also, assume a situation where there are two subclasses of a base class, each of which provides their own facility, and we want to use both the features on a single class, then it is not possible in this implementation. By creating a plugin By creating a plugin, we can gain the maximum re-use of a code. As a plugin for one class, it is usable by the subclasses of that class, and also, we have the flexibility to use multiple plugins in a single component. This is the reason why if we create a plugin for the SMS functionality we can use the SMS plugin both in the text field and in the text area field. Also, we can use other plugins, including this SMS plugin, in the class. Building an Ext JS plugin Let us start developing an Ext JS plugin. In this section we will develop a simple SMS plugin, targeting the Ext JS textareafield component. The feature we wish to provide for the SMS functionality is that it should show the number of characters and the number of messages on the bottom of the containing field. Also, the color of the text of the message should change in order to notify the users whenever they exceed the allowed length for a message. Here, in the following code, the SMS plugin class has been created within the Examples namespace of an Ext JS application: Ext.define('Examples.plugin.Sms', { alias : 'plugin.sms', config : { perMessageLength : 160, defaultColor : '#000000', warningColor : '#ff0000' }, constructor : function(cfg) { Ext.apply(this, cfg); this.callParent(arguments); }, init : function(textField) { this.textField = textField; if (!textField.rendered) { textField.on('afterrender', this.handleAfterRender, this); } else { this.handleAfterRender(); } }, handleAfterRender : function() { this.textField.on({ scope : this, change : this.handleChange }); var dom = Ext.get(this.textField.bodyEl.dom); Ext.DomHelper.append(dom, { tag : 'div', cls : 'plugin.textField.el.select('.plugin-sms'); } }); In the preceding plugin class, you can see that within this class we have defined a "must implemented" function called init. Within the init function, we check whether the component, on which this plugin is attached, has rendered or not, and then call the handleAfterRender function whenever the rendering is. Within this function, a code is provided, such that when the change event fires off the textareafield component, the handleChange function of this class should get executed; simultaneously, create an HTML <div> element within the handleAfterRender function, where we want to show the message information regarding the characters and message counter. And the handleChange function is the handler that calculates the message length in order to show the colored, warning text, and call the updateMessageInfo function to update the message information text for the characters length and the number of messages. Now we can easily add the following plugin to the component: { xtype : 'textareafield', plugins : ['sms'] } Also, we can supply configuration options when we are inserting the plugin within the plugins configuration option to override the default values, as follows: plugins : [Ext.create('Examples.plugin.Sms', { perMessageLength : 20, defaultColor : '#0000ff', warningColor : "#00ff00" })] Building an Ext JS extension Let us start developing an Ext JS extension. In this section we will develop an SMS extension that exactly satisfies the same requirements as the earlier-developed SMS plugin. We already know that an Ext JS extension is a derived class of existing Ext JS class, we are going to extend the Ext JS's textarea field that facilitates for typing multiline text and provides several event handling, rendering and other functionalities. Here is the following code where we have created the Extension class under the SMS view within the Examples namespace of an Ext JS application: Ext.define('Examples.view.sms.Extension', { extend : 'Ext.form.field.TextArea', alias : 'widget.sms', config : { perMessageLength : 160, defaultColor : '#000000', warningColor : '#ff0000' }, constructor : function(cfg) { Ext.apply(this, cfg); this.callParent(arguments); }, afterRender : function() { this.on({ scope : this, change : this.handleChange }); var dom = Ext.get(this.bodyEl.dom); Ext.DomHelper.append(dom, { tag : 'div', cls : 'extension.el.select('.extension-sms'); } }); As seen in the preceding code, the extend keyword is used as a class property to extend the Ext.form.field.TextArea class in order to create the extension class. Within the afterRender event handler, we provide a code so that when the change event fires off the textarea field, we can execute the handleChange function of this class and also create an Html <div> element within this afterRender event handler where we want to show the message information regarding the characters counter and message counter. And from this section, the logic to show the warning, message character counter, and message counter is the same as we used in the SMS plugin. Now we can easily create an instance of this extension: Ext.create('Examples.view.sms.Extension'); Also, we can supply configuration options when we are creating the instance of this class to override the default values: Ext.create('Examples.view.sms.Extension', { perMessageLength : 20, defaultColor : '#0000ff', warningColor : "#00ff00" }); The following is the screenshot where we've used the SMS plugin and extension: In the preceding screenshot we have created an Ext JS window and incorporated the SMS extension and SMS plugin. As we have already discussed on the benefit of writing a plugin, we can not only use the SMS plugin with text area field, but we can also use it with text field. Summary We have learned from this article what a plugin and an extension are, the differences between the two, the facilities they offer, how to use them, and take decisions on choosing either an extension or a plugin for the needed functionality. In this article we've also developed a simple SMS plugin and an SMS extension. Resources for Article: Further resources on this subject: - So, what is Ext JS? [Article] - Ext JS 4: Working with the Grid Component [Article] - Custom Data Readers in Ext JS [Article]
https://www.packtpub.com/books/content/plugins-and-extensions
CC-MAIN-2015-14
en
refinedweb
CreateActions_ptCreateDAO_ptSpringControllers_pt This is version 1. It is not the current version, and thus it cannot be edited. [Back to current version] [Restore this version] This is version 1. It is not the current version, and thus it cannot be edited. [Back to current version] [Restore this version] In the context of AppFuse, this is called a Manager class. It's main responsibility to act as a bridge between the persistence (DAO) layer and the web layer. The Business Delegate pattern from Sun says that these objects are useful for de-coupling your presentation layer from your database layer (i.e. for Swing apps). Managers should also be where you put any business logic for your application. Let's get started by creating a new ManagerTest and Manager in AppFuse's architecture. This class should extend BaseManagerTestCase, which already exists in the service package. The parent class (BaseManagerTestCase) serves the same functionality as the BaseDaoTestCase - to load a properties file that has the same name as your *Test.class, as well as to initialize Spring's ApplicationContext. The code below is what we need for a basic JUnit test of our Managers. The code below simply creates and destroys the PersonManager. The "ctx" object is initialized in the BaseManagerTestCase class. package org.appfuse.service; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.appfuse.model.Person; public class PersonManagerTest extends BaseManagerTestCase { private Person person; private PersonManager mgr = null; private Log log = LogFactory.getLog(PersonManagerTest.class); protected void setUp() { mgr = (PersonManager) ctx.getBean("personManager"); } protected void tearDown() { mgr = null; } public static void main(String[] args) { junit.textui.TestRunner.run(PersonManagerTest.class); } } Now that we have the JUnit framework down for this class, let's add the meat: the test methods to make sure everything works in our Manager. Here's a snippet from the DAO Tutorial tutorial to help you understand what we're about to do. Add the following methods to your PersonManagerTest.java file: public void testGetPerson() throws Exception { person = (Person) mgr.getPerson("1"); assertTrue("person.firstName not null", person.getFirstName() != null); } public void testSavePerson() throws Exception { person = (Person) mgr.getPerson("1"); String name = person.getFirstName(); person.setFirstName("test"); person = (Person) mgr.savePerson(person); assertTrue("name updated", person.getFirstName().equals("test")); person.setFirstName(name); mgr.savePerson(person); } public void testAddAndRemovePerson() throws Exception { person = new Person(); person = (Person) populate(person); person = (Person) mgr.savePerson(person); assertTrue(person.getFirstName().equals("Bill")); assertTrue(person.getId() != null); if (log.isDebugEnabled()) { log.debug("removing person, personId: " + person.getId()); } mgr.removePerson(person.getId().toString()); assertNull(mgr.getPerson(person.getId().toString())); } This class won't compile at this point because we have not created our PersonManager interface. First off, create a PersonManager.java interface in the src/service/**/service directory and specify the basic CRUD methods for any implementation classes. I've eliminated the JavaDocs in the class below for display purposes. package org.appfuse.service; import org.appfuse.model.Person; import java.util.List; public interface PersonManager { public List getPeople(Person person); public Person getPerson(String id); public Person savePerson(Object person); public void removePerson(String id); } Now let's create a PersonManagerImpl class that implements the methods in PersonManager. To do this, create a new class in src/service/**/service and name it PersonManagerImpl.java. It should extend BaseManager and implement PersonManager. package org.appfuse.service; import org.apache.commons.logging.Log; import org.apache.commons.logging.LogFactory; import org.appfuse.model.Person; import org.appfuse.dao.PersonDao; import java.util.List; /** * @author mraible * @version $Revision: $ $Date: May 25, 2004 11:46:54 PM $ */ public class PersonManagerImpl extends BaseManager implements PersonManager { private static Log log = LogFactory.getLog(PersonManagerImpl.class); private PersonDao dao; public void setPersonDao(PersonDao dao) { this.dao = dao; } public List getPeople(Person person) { return dao.getPeople(person); } public Person getPerson(String id) { return dao.getPerson(Long.valueOf(id)); } public Person savePerson(Object person) { dao.savePerson(person); return (Person) person; } public void removePerson(String id) { dao.removePerson(Long.valueOf(id)); } } One thing to note is the setPersonDao method. This is used by Spring to bind the PersonDao to this Manager. This is configured in the applicationContext-service.xml file. We'll get to configuring that in Step 3[3]. You should be able to compile everything now using "ant compile-service"... Finally, we need to create the PersonManagerTest.properties file in test/service/**/service so that person = (Person) populate(person); will work in our test. firstName=Bill lastName=Joy Now we need to edit Spring's config file for our services layer so it will know about this new Manager. To notify Spring of this our PersonManager interface and it's implementation, open the src/service/**/service/applicationContext-service.xml file. In here, you will see an existing configuration for the UserManager. You should be able to copy that and change a few things to get the XML fragment below. Add the following to the bottom of this file. <!-- Person Manager --> <bean id="personManagerTarget" class="org.appfuse.service.PersonManagerImpl" singleton="false"> <property name="personDao"><ref bean="personDAO"/></property> </bean> <!-- Transaction declarations for business services. To apply a generic transaction proxy to all managers, you might look into using the BeanNameAutoProxyCreator --> <bean id="personManager" class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean"> <property name="transactionManager"><ref bean="transactionManager"/></property> <property name="target"><ref local="personManagerTarget"/></property> <property name="transactionAttributeSource"><ref local="defaultTxAttributes"/></property> </bean> Note: I had SAX throw an error because defaultTxAttributes was not defined so I replaced the transactionAttributeSource with this instead, which allowed my unit test to run successfully. <property name="transactionAttributes"> <props> <prop key="*">PROPAGATION_REQUIRED</prop> </props> </property> Save all your edited files and try running "ant test-service -Dtestcase=PersonManager" one more time. Yeah Baby, Yeah: BUILD SUCCESSFUL Total time: 9 seconds Next Up: Part III: Creating Actions and JSPs - A HowTo for creating Actions and JSPs in the AppFuse architecture.
http://raibledesigns.com/wiki/Wiki.jsp?page=CreateManager_pt&version=1
CC-MAIN-2015-14
en
refinedweb
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo You can subscribe to this list here. Showing 1 results of 1 >>> Thomas Maier <T.Maier@...> seems to think that: >Ok, couldn't resist :). At least the namespace problem seemed like an [ ... ] Hi Thomas, I have finally accounted for all the patches you sent. Many I just adapted as is, but I used funny lexical tricks to handle include files names. If you would like to try them out, you can get the new sources from CVS: Have fun Eric -- Eric Ludlam: zappo@..., eric@... Home: Siege: Emacs: GNU:
http://sourceforge.net/p/cedet/mailman/cedet-semantic/?viewmonth=200206&viewday=13
CC-MAIN-2015-14
en
refinedweb
In this article, I will implement a simple Master-Detail scenario using WPF technology and the MVVM paradigm. For the UI, I will use two ListView elements. The first ListView will enable sorting and filtering. The application was built with the aim to provide an overview of the many modern best practices used in .NET programming. ListView You should have a basic understanding of how WPF applications work. Also, a basic understanding behind the Model View ViewModel (MVVM) paradigm should be useful (MSDN or). For this example, I used the MVVM Light Toolkit, which you can find here. For unit testing, I used NUnit, which you can find here. In the unit tests, I used Moq to mock objects; Moq can be found here. As an Inversion of Control container, I used Castle Windsor, which can be found here. Although you can download each library separately, in the downloadable projects, the necessary DLL files are all included. So, what is this all about? It's a very useful Design Pattern for WPF applications because it allows the separation of concerns by decoupling the implementation details of an application from its presentation layer. The three words that describe this pattern each represent a separate layer of abstraction: the Model layer contains all the entities (in MasterDetailDemo, such classes are Person and Car) that are used by the application and all the functionality that is needed to retrieve these entities from an external location (in MasterDetailDemo, such classes are XmlPersonsRepository and XmlCarsRepository); the View layer contains UI components that give the application a visible interface; the ViewModel layer contains the actual implementation of the application, and serves as a mediator between the View and Model, basically, it prepares the data in such a format that the View can understand. Person Car XmlPersonsRepository XmlCarsRepository There are many reasons why you should use MVVM, but I won't go into more details right now because there are many presentations about it on the web. So, Google it up, or take a look at the links provided in the Background section. The application we're going to build will be as simple as possible. We will have a list of persons, and each person owns some cars. On the main window, we'll create two tables: one for the persons, and one for the cars owned by the selected person. To complicate things, we want the table of persons to be sortable and filterable. Clicking on the header of each column will invoke the sorting. On the header of each column, we'll have a kind of toggle button that, if activated, will bring up a filter list filled with check boxes. Checking/unchecking the check boxes will invoke the filtering. Now, let's get down to business, shall we? If you open up the VS solution of MasterDetailDemo, you'll see four projects. I will describe each project in detail now. This one will be easy. All we need for our application is two kinds of entities: one for the persons and one for the cars. The Person class has some properties with attributes attached to them (I used the DescriptionAttribute to store the description of the property that will be shown on the UI, and the BindableAttribute is used to know which properties should be sortable and filterable; more on this in the MasterDetailDemo.ViewModel section). DescriptionAttribute BindableAttribute In the Repositories folder, you will find interfaces and concrete classes for repositories for the persons and for the cars. This way, the retrieval of persons and cars from an external location is cleanly separated. Concrete repositories are created for fakes and for XML files. In the case of XmlPersonsRepository and XmlCarsRepository, the actual data is provided by the Persons.xml file. The fake repositories have hard-coded values for the entities. This kind of design is needed for testing purposes; it will make sense to you when we'll be writing unit tests. Things should get interesting here. The ViewModelLocator class was created by the MVVM Light Toolkit. This class handles the creation of the MainViewModel object which is then used as the binding source for the MainWindow class (in the MasterDetailDemo.View project, more on this later.) I modified the CreateMain () method just to enable the instantiation of the IPersonsRepository object by using the Castle Windsor framework. ViewModelLocator MainViewModel MainWindow CreateMain () IPersonsRepository if (MainViewModel.IsInDesignModeStatic) personsRepository = new FakePersonsRepository (); else { var container = new WindsorContainer ( new XmlInterpreter (new ConfigResource ("castle"))); personsRepository = container.Resolve (typeof (IPersonsRepository)) as IPersonsRepository; } _main = new MainViewModel (personsRepository, new MainViewModelEvents (), new ColumnHeaderFactory ()); All this code does is that it checks first whether we're in Design mode. If we are, then it creates a FakeRepository object; otherwise, it creates WindsorContainer and instantiates the concrete IPersonsRepository object run-time depending on the settings in the App.config file. The App.config file is located in the MasterDetailDemo.View project. It is configured in the following way: FakeRepository WindsorContainer <component id="personRepository" service="MasterDetailDemo.Model.Repositories.Abstract.IPersonsRepository, MasterDetailDemo.Model" type="MasterDetailDemo.Model.Repositories.Concrete.XmlPersonsRepository, MasterDetailDemo.Model"> <parameters> <filePath>c:\Users\aszalos\Documents\Visual Studio 2010\ Projects\WPF projects\MasterDetailDemo\ MasterDetailDemo.Model\bin\Debug\Persons.xml</filePath> </parameters> </component> This config tells WindsorContainer to create an XmlPersonsRepository object whenever an IPersonsRepository object is requested. Note that you need to provide the correct path where your Persons.xml file is located. The constructor of XmlPersonsRepository has a filePath parameter which should be specified in the config file. It is the path where Persons.xml is located. Using the Castle Windsor IoC container makes it easy to setup the dependency on the MainViewModel by modifying the configuration file, so the code will instantiate the actual IPersonsRepository implementation which will be used by the ViewModel, thus the ViewModelLocator class' definition doesn't need to be changed in the future. The MainViewModel class is also created automatically by the MVVM Light Toolkit template. This class represents the ViewModel for the MainWindow (part of the MasterDetailDemo.View project); this is the class that gets created by the ViewModelLocator. Before we go into the details of the implementation, I would like to summarize what the whole MasterDetailDemo.ViewModel project does. Basically, we want to display the persons and their cars, therefore we need to store a reference to an IPersonsRepository object. The cars of a person are directly available by calling Person.CarsOwned, therefore we don't need to store cars in a separate object. (Actually, to maintain the Master-Detail relation between persons and cars won't need any extra code in the ViewModel; the relation will be maintained in the View, and I leave the explanation of that to the next part.) Person.CarsOwned The tricky part of the whole application is to realize the sorting and filtering of the persons. As I said before, we want the user to be able to set filtering conditions on each column of the persons table. Each column needs to know what kind of values it contains (for example, the "First nam" column needs to know the values "Nancy", "Andrew", "Janet", and "Margaret"), therefore we need a VM class that implements this. ColumnHeaderViewModel is the VM class that will be the binding source for each column of the persons table. Besides storing the list of the values in the respective column, it needs to know whether the popup with the check box list is currently shown or not. Why do we need that? Well, when the user clicks on another toggle button in a different column, we want to hide the currently shown popup (if it is activated), and this way, only one popup will be shown at a time. ColumnHeaderViewModel In the popup, we'll have the filter texts together with the check boxes, therefore we need a VM class that will store information about a filter. FilterViewModel will store the filter text that will be displayed on the UI and a boolean value that indicates whether it is active or not. FilterViewModel The following diagram illustrates how the communication between the main VM classes will occur. Let's see how we can realize this communication between these three classes. The main idea was that I didn't want to have a hard reference to the MainViewModel in the FilterViewModel and ColumnHeaderViewModel classes. For this reason, I used the IMainViewModelEvents interface which has two events that the MainViewModel object can subscribe to and two methods that the other VM classes can use to notify the subscriber (the MainViewModel object) that a specific action occurred. IMainViewModelEvents This way, all the three VM classes will have a reference to the same IMainViewModelEvents class (the reason why I used an interface for this will be clear when we get to unit testing, but we can state right away that we need to test whether the interaction has occurred or not). The basic structure of the VM classes would like this: Before we go into the details of the implementation of VM classes, I want to specify what the MainViewModel object should do when the above mentioned events get fired. When the ColumnHeaderFiltersChanged event is fired in the IMainViewModelEvents class, we need to do the filtering. How will we do that? For each Person record, we need to iterate through the ColumnHeaders collection of the MainViewModel object, and for each ColumnHeaderViewModel object, we need to check its Filters collection. For each FilterViewModel object, we need to check whether it is active or not. (For the logic of the filtering process, please see the code.) When the IsHeaderPopupOpenChanged event is fired, we need to iterate through the ColumnHeaders collection of the MainViewModel object and set the other ColumnHeaderViewModel objects' IsHeaderPopupOpen property to false. ColumnHeaderFiltersChanged ColumnHeaders Filters IsHeaderPopupOpenChanged IsHeaderPopupOpen false For unit testing purposes, we need to test the VM classes in isolation, without any dependencies on other classes, but if we implement the above scenario, the MainViewModel will depend on the implementation of the ColumnHeaderViewModel, which in turn will depend on the implementation of the FilterViewModel. Therefore, we create interfaces for ColumnHeaderViewModel and FilterViewModel which will allow us to define the dependencies by means of interfaces. This way, we'll be able to create mock objects to break those dependencies. The main diagram would look like this: We can assume right away that the MainViewModel's constructor will take care of the initialization of the ColumnHeaders property. But because we need to be able to control what kind of IColumnHeaderLocator object will be created by the constructor, we create a factory class (ColumnHeaderFactory) which will take care of the creation of the actual IColumnHeaderLocator object. ColumnHeaderFactory IColumnHeaderLocator public class ColumnHeaderFactory { public virtual IColumnHeaderLocator Create (IMainViewModelEvents mainVMEvents, String headerText, String propertyName, List<String> filterTexts) { return new ColumnHeaderViewModel (mainVMEvents, headerText, propertyName, filterTexts); } } The ColumnHeaderFactory's Create method will be called by the constructor of the MainViewModel class. In the case of unit testing, we'll override the Create method and create mock objects for IColumnHeaderLocator and IFilterLocator. Create IFilterLocator The constructor of the MainViewModel class looks like this: public MainViewModel ( IPersonsRepository personRepository, IMainViewModelEvents mainVMEvents, ColumnHeaderFactory chFactory) I used constructor injection to provide the dependencies for the class. The ColumnHeaders property is initialized in the constructor; it is of type Dictionary<String, IColumnHeaderLocator>, and stores all the columns that should be displayed on the UI. The dictionary's key property stores the bindable object's property name (e.g., Person.FirstName => FirstName), and the value property stores an IColumnHeaderLocator object. A column header is created for each property of the Person class that has the BindableAttribute defined on it. Sorting by a property name can be done by executing the SortCommand command. SortCommand is of type RelayCommand<String>, and receives the property name as an argument to execute the Sort () method. Executing the SortCommand with the same property name argument twice sorts the persons list by that property in descending order. Dictionary<String, IColumnHeaderLocator> SortCommand RelayCommand<String> Sort () In the case of the ColumnHeaderViewModel class, I want to clarify the IsHeaderPopupOpen property which looks like this: public bool IsHeaderPopupOpen { get { return _isHeaderPopupOpen; } set { _isHeaderPopupOpen = value; RaisePropertyChanged ("IsHeaderPopupOpen"); if (_isHeaderPopupOpen) _mainVMEvents.RaiseIsHeaderPopupOpenChanged (this); } } Whenever IsHeaderPopupOpen is set to a new value, the RaisePropertyChanged () method is called (this method is defined by ViewModelBase, which is the base class of all VM classes) to notify any binding targets that the value of the property has changed. In the meantime, I check whether the value is true (which means the popup header should be open), and if that's the case, then I call the RaiseIsHeaderPopupOpenChanged method on the _mainVMEvents object to notify the MainViewModel object to check the other column headers. The FilterViewModel class' IsActive property is defined in a similar fashion, so I'll skip its explanation. RaisePropertyChanged ( ViewModelBase RaiseIsHeaderPopupOpenChanged _mainVMEvents IsActive MainWindow.xaml contains the markup which sets up the UI of our application. The MainWindow DataContext property is bound to the ViewModelLocator's Main property (which is of type MainViewModel). This is set up when you create an MVVM Light Toolkit project template, so you don't need to modify this. DataContext Main SortCommand is defined as an attached property on the MainWindow. (You can find info on attached properties here.) We need this property because we want to execute this command when the user clicks on the header of a column of the persons table. The SortCommand attached property is bound to the MainViewModel's SortCommand property in the XAML file by the code: main:MainWindow.SortCommand="{Binding Source={StaticResource Locator}, Path=Main.SortCommand}" To show persons and cars in a master-detail setup, all we need to do is pick a common ancestor of the ListView controls and define the MainViewModel's Persons property as a binding source on its DataContext. Additionally, you need to set IsSynchronizedWithCurrentItem="true" on both ListView controls (note that in the sample project, this is set via styles). Persons IsSynchronizedWithCurrentItem="true" <StackPanel DataContext="{Binding Persons}"> <ListView ItemsSource="{Binding}" GridViewColumnHeader.Click="ListView_Click" ... /> <ListView ItemsSource="{Binding CurrentItem.CarsOwned.Cars}" ... /> </StackPanel> Notice the CurrentItem property in the second ListView's binding expression. The CurrentItem property is set to the Person object which corresponds to the first ListView's selected row. This way, whenever a new Person object is selected in the first ListView, the Person.CarsOwned.Cars collection gets bound to the second ListView. CurrentItem Person.CarsOwned.Cars The persons ListView's GridView property looks like this: GridView <GridView ColumnHeaderTemplate="{StaticResource dt_ColumnHeader}" ColumnHeaderContainerStyle="{DynamicResource stl_ColumnHeaderContainer}"> <GridViewColumn Width="125" DisplayMemberBinding="{Binding FirstName}" Header="{Binding DataContext.ColumnHeaders[FirstName], RelativeSource={RelativeSource AncestorType={x:Type Window}, Mode=FindAncestor}}"/> ... </GridView> Notice that the GridView's DataContext is implicitly set to the MainViewModel's Persons property (which is of type ObservableCollection<Person>), so when we set the DisplayMemberBinding property of the GridViewColumn, the binding implicitly refers to the current Person's property (e.g., FirstName). But when we want to bind the GridViewColumn's Header property to the MainViewModel's ColumnHeaders[xyz] property, we need to set RelativeSource of the Binding to the main MainViewModel object (which is defined on the Window). ObservableCollection<Person> DisplayMemberBinding GridViewColumn Header ColumnHeaders[xyz] RelativeSource The dt_ColumnHeader DataTemplate is defined in the Window.Resources section of the XAML file, and contains two Path elements (one for the up and one for the down arrow, respectively) and a TextBlock element. The respective Path element's Visibility is set to Visible only when the ColumnHeaderViewModel's PropertyName property equals the MainViewModel's SortBy property and the MainViewModel's SortDirection property equals the ConverterParameter value ("asc" or "desc") specified for the MultiBinding. The MultiBinding uses the ColumnPropertyToVisibilityConverter to convert the three values to a Visibility object. dt_ColumnHeader DataTemplate Window.Resources Path TextBlock Visibility Visible PropertyName SortBy SortDirection ConverterParameter MultiBinding ColumnPropertyToVisibilityConverter The stl_ColumnHeaderContainer style is defined in the MainSkin.xaml file, and defines a ControlTemplate for the GridViewColumnHeader elements. stl_ColumnHeaderContainer ControlTemplate GridViewColumnHeader <DockPanel> <Popup IsOpen="{Binding RelativeSource={RelativeSource Mode=TemplatedParent}, Path=Content.IsHeaderPopupOpen, Mode=OneWay}" PlacementTarget="{Binding ElementName=exp_Filter}" Placement="Bottom"> <ItemsControl ItemsSource="{Binding RelativeSource={RelativeSource Mode=TemplatedParent}, Path=Content.Filters, Mode=OneWay}" ItemTemplate="{DynamicResource dt_CheckBoxItem}" ... > <ItemsControl.Resources> <DataTemplate x: <CheckBox IsChecked="{Binding IsActive, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" Content="{Binding FilterText}" /> </DataTemplate> </ItemsControl.Resources> </ItemsControl> </Popup> <Expander Name="exp_Filter" IsExpanded="{Binding RelativeSource={RelativeSource Mode=TemplatedParent}, Path=Content.IsHeaderPopupOpen, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" </Expander> </DockPanel> ... The Popup's IsOpen property is bound to the ColumnHeaderViewModel's IsHeaderPopupOpen property. The Popup contains an ItemsControl element which in turn defines CheckBox elements as its ItemTemplate property. The CheckBox element's IsChecked property and Content property are bound to the FilterViewModel's IsActive property and FilterText property, respectively. The exp_Filter Expander element defines us a toggle button which will serve as an input element to bring up the Popup control. Its IsExpanded property is bound to the ColumnHeaderViewModel's IsHeaderPopupOpen property. The Binding mode is set to TwoWay because we need to notify the VM class when the user clicks the toggle button. Popup IsOpen ItemsControl CheckBox ItemTemplate IsChecked Content FilterText exp_Filter IsExpanded TwoWay How does the SortCommand get executed? The ListView_Click() event handler method is called whenever the user clicks on a column header, and if the source of the event is not the toggle button (when you click on this, you expect the popup to be shown with the filter texts instead of sorting), the SortCommand is executed with the property name of the Person class by which we want to sort the persons collection. ListView_Click() So, basically we're done with the discussion of the application. All that's left is the Test project. We want to test each VM class of MasterDetailDemo.ViewModel. The tests written are basically self explanatory, so I won't go into the details except in the case of the MainViewModel class. Because the OnColumnHeaderFilterChanged() and OnIsHeaderPopupOpenChanged methods rely on the actual implementation of the IColumnHeaderLocator and IFilterLocator interfaces, we need to create mock objects the MainViewModel can use. As I said before, the MainViewModes's constructor uses ColumnHeaderFactory's Create () method to create the IColumnHeaderLocator object. We need to override the Create () method and create mock objects in it. The mocks are stored in the ColumnHeaderLocatorMocks and the FilterLocatorMocks properties of the XColumnHeaderFactory class. For more details, check the code. OnColumnHeaderFilterChanged() OnIsHeaderPopupOpenChanged MainViewModes Create ( Create () ColumnHeaderLocatorMocks FilterLocatorMocks XColumnHeaderFactory Well, that was it. I hope you enjoyed it and learned something from it. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) MasterDetailDemo.View.csproj filePath App.config MasterDetailDemo.View Persons.xml General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/110570/Sorting-and-filtering-a-WPF-ListView-in-an-MVVM-se
CC-MAIN-2015-14
en
refinedweb
@OnStart & @OnStop By Geertjan-Oracle on Jul 10, 2012 In applications based on NetBeans Platform 7.2, you'll be able to replace all your ModuleInstall classes with this code: import org.openide.modules.OnStart; import org.openide.modules.OnStop; @OnStart public final class Installer implements Runnable { @Override public void run() { System.out.println("enable something..."); } @OnStop public static final class Down implements Runnable { @Override public void run() { System.out.println("disable something..."); } } } Build the module and the annotations result in named services, thanks to @NamedServiceDefinition: Aside from no longer needing to register the ModuleInstall class in the manifest, performance of startup will be enhanced, if you use the above approach: It is unclear to me how to "enable something" across modules. For example, if you used the onStart for a login similarly to what you did here:, but I'm not sure how to make sure the login occurs before the other modules are enabled. Posted by Bertha Truble on March 15, 2014 at 03:03 PM PDT # I created several @OnStart annotation runnable classes within the same module. Each class implements an independent initialization code. I observed that several threads are created to run @OnStart annotated runnables in parallel, which may cause race conditions. Posted by Daniel Felix Ferber on August 01, 2014 at 05:16 AM PDT #
https://blogs.oracle.com/geertjan/entry/onstart_onstop
CC-MAIN-2015-14
en
refinedweb
Hi, I am a very new programmer ( 1 month and counting). I am presented with a problem from a tutorial I recieved online and I have no idea how to even start with it. My only assumption is to use array's, but I am not sure how the function protypes are to work and what not. Also, its just plain complicated! (to me at least) I will write up what I need done, and if you can, give me tips on how and where to start, and what else I can do. It shouldn't be any more than one page of code. Inside the directions for it, I include the most I could come up with as far as psuedocode for it. (I just threw comments in there basically) Here is the problem: Calculate the time it takes a Marathoner to travel the distance traveled (provided below). *Of course you must assume that 8 hours is 1 day of travel* Have the user enter the following (cin): (7 individual Marathoners) Marathoners Name Running Speed Distance Elevation gain/loss The running speed cannot be less than 1 and not greater than 3 (mph). The running speed should be a floating point number (so that "1.5"s can be presented (ie, decimals (sig figures))) Somehow, create a function to calculate the marathoners's time traveled. The time (in hours) is: distance/running speed. Create a function to calculate for elevation gain/loss. For every 1000 feet of elevation gain/loss, add one hour to the running time. (elevation gain/loss, is going up and down a hill in essence) Output (cout) the result of each marathoner to the screen. The Marathoner's name and the time must be separated by a white space. Preferably the output should be as follows: "Name Time" (ie, Joe 2days, 4hours). Or at least thats how i see it. If you have an opinion on how to make it better let me know! Also, how does one go about create a data file? I would like to create a data file called "distance.txt". using the 'marathoner' problem; The file should contain the following on each line, separated by a white space: marathoners name, running Speed (MPH), distance to travel (miles), elevation gain/loss (feet). This one I REALLY have no idea how to start out! If I could recieve some help, that would be great! Please email me some of the coding, if you can, and don't feel like typing it up to post or whatever (^_^ ) [email protected] Signed In desperate need of help , THE FU FYTER Addendum: (Tell me if i'm getting anywhere) #include <iostream.h> //Function protype...is this correct? void runnername_array (int *, int); void main (void) { }
http://cboard.cprogramming.com/cplusplus-programming/13802-dire-need-help-lots-printable-thread.html
CC-MAIN-2015-14
en
refinedweb
This adds the new kernel-internal header file <linux/tracehook.h>.This is not yet used at all. The comments in the header introducewhat the following series of patches is about.The aim is to formalize and consolidate all the places that the corekernel code and the arch code now ties into the ptrace implementation.These patches mostly don't cause any functional change. They justmove the details of ptrace logic out of core code into tracehook.hinlines, where they are mostly compiled away to the same as before.All that changes is that everything is thoroughly documented and anyfuture reworking of ptrace, or addition of something new, would nothave to touch core code all over, just change the tracehook.h inlines.The new linux/ptrace.h inlines are used by the following patches in thenew tracehook_*() inlines. Using these helpers for the ptrace eventstops makes it simple to change or disable the old ptrace implementationof these stops conditionally later.Signed-off-by: Roland McGrath <[email protected]>--- include/linux/ptrace.h | 33 ++++++++++++++++++++++++++++ include/linux/tracehook.h | 52 +++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 85 insertions(+), 0 deletions(-)diff --git a/include/linux/ptrace.h b/include/linux/ptrace.hindex c6f5f9d..c74abfc 100644--- a/include/linux/ptrace.h+++ b/include/linux/ptrace.h@@ -121,6 +121,39 @@ static inline void ptrace_unlink(struct task_struct *child) int generic_ptrace_peekdata(struct task_struct *tsk, long addr, long data); int generic_ptrace_pokedata(struct task_struct *tsk, long addr, long data); +/**+ * task_ptrace - return %PT_* flags that apply to a task+ * @task: pointer to &task_struct in question+ *+ * Returns the %PT_* flags that apply to @task.+ */+static inline int task_ptrace(struct task_struct *task)+{+ return task->ptrace;+}++/**+ * ptrace_event - possibly stop for a ptrace event notification+ * @mask: %PT_* bit to check in @current->ptrace+ * @event: %PTRACE_EVENT_* value to report if @mask is set+ * @message: value for %PTRACE_GETEVENTMSG to return+ *+ * This checks the @mask bit to see if ptrace wants stops for this event.+ * If so we stop, reporting @event and @message to the ptrace parent.+ *+ * Returns nonzero if we did a ptrace notification, zero if not.+ *+ * Called without locks.+ */+static inline int ptrace_event(int mask, int event, unsigned long message)+{+ if (mask && likely(!(current->ptrace & mask)))+ return 0;+ current->ptrace_message = message;+ ptrace_notify((event << 8) | SIGTRAP);+ return 1;+}+ #ifndef force_successful_syscall_return /* * System call handlers that, upon successful completion, need to return adiff --git a/include/linux/tracehook.h b/include/linux/tracehook.hnew file mode 100644index 0000000..bea0f3e--- /dev/null+++ b/include/linux/tracehook.h@@ -0,0 +1,52 @@+/*+ * Tracing hooks+ *+ * Copyright (C) 2008 Red Hat, Inc. All rights reserved.+ *+ * This copyrighted material is made available to anyone wishing to use,+ * modify, copy, or redistribute it subject to the terms and conditions+ * of the GNU General Public License v.2.+ *+ * This file defines hook entry points called by core code where+ * user tracing/debugging support might need to do something. These+ * entry points are called tracehook_*(). Each hook declared below+ * has a detailed kerneldoc comment giving the context (locking et+ * al) from which it is called, and the meaning of its return value.+ *+ * Each function here typically has only one call site, so it is ok+ * to have some nontrivial tracehook_*() inlines. In all cases, the+ * fast path when no tracing is enabled should be very short.+ *+ * The purpose of this file and the tracehook_* layer is to consolidate+ * the interface that the kernel core and arch code uses to enable any+ * user debugging or tracing facility (such as ptrace). The interfaces+ * here are carefully documented so that maintainers of core and arch+ * code do not need to think about the implementation details of the+ * tracing facilities. Likewise, maintainers of the tracing code do not+ * need to understand all the calling core or arch code in detail, just+ * documented circumstances of each call, such as locking conditions.+ *+ * If the calling core code changes so that locking is different, then+ * it is ok to change the interface documented here. The maintainer of+ * core code changing should notify the maintainers of the tracing code+ * that they need to work out the change.+ *+ * Some tracehook_*() inlines take arguments that the current tracing+ * implementations might not necessarily use. These function signatures+ * are chosen to pass in all the information that is on hand in the+ * caller and might conceivably be relevant to a tracer, so that the+ * core code won't have to be updated when tracing adds more features.+ * If a call site changes so that some of those parameters are no longer+ * already on hand without extra work, then the tracehook_* interface+ * can change so there is no make-work burden on the core code. The+ * maintainer of core code changing should notify the maintainers of the+ * tracing code that they need to work out the change.+ */++#ifndef _LINUX_TRACEHOOK_H+#define _LINUX_TRACEHOOK_H 1++#include <linux/sched.h>+#include <linux/ptrace.h>++#endif /* <linux/tracehook.h> */
http://lkml.org/lkml/2008/7/17/55
CC-MAIN-2015-14
en
refinedweb
Preface One of the most interesting features introduced to Direct3D in DirectX 6 was multiple texturing. In previous versions of DirectX, the texture mapping phase of the Direct3D pixel pipeline had only involved fetching texels from a single texture. DirectX 6 introduced the concept of a texture operation unit. "The Basics" Tutorial. To switch between the examples use the #defines at the beginning of the source: ... // Multitexturing using Colour operations #define DARKMAPPING 1 #define ADARKMAPPING 2 #define DIFFUSECOLOR 3 #define DARKDIFFUSE 4 // Multitexturing using Alpha operations #define MODULATEALPHA 5 #define BLENDWFRAME 6 // The switch #define TM 5 ... Texture Replaces Light Most effects that modify the appearance of a surface are calculated on what's called a "per-vertex" basis. This means that the actual calculations are done for each vertex of a triangle, as opposed to each pixel that gets rendered. Sometimes with this technique you get noticeable artifacts. Think of a large triangle with a light source close to the surface. As long as the light is close to one of the vertices of the triangle, you can see the lighting effects on the triangle. When it moves towards the center of the triangle, then the triangle gradually loses the lighting effect. In the worst case, the light is directly in the middle of the triangle and you see a triangle with very little light shining on it, instead of a triangle with a bright spot in the middle. If no light shines on the vertices, the surface properties are not calculated as properly. The best way to generate the illusion of pixel-based lighting is to use a texture map of the desired type of light shining on a dark surface. Multipass Rendering/Multitexturing/Bump mapping The three main texture-blending techniques are multipass rendering, multiple-texture blending or multitexturing and bump mapping. Multipass texturing is the process of applying more than one texture to a primitive in several passes. Brian Hook tells us in his course 29 notes at SIGGRAPH '98, that Quake III uses 10 passes: - (passes 1 - 4: accumulate bump map) - pass 5: diffuse lighting - pass 6: base texture (with specular component) - (pass 7: specular lighting) - (pass 8: emissive lighting) - (pass 9: volumetric/atmospheric effects) - (pass 10: screen flashes) It's obvious that the more passes a renderer must take, the lower its overall performance will be. To reduce the number of passes, some graphics accelerators support multitexturing, in which two or more textures are accessed during the same pass. Bump mapping is a texture-blending method that models a realistic rough surface on primitives. The bump map contains depth information in the form of values indicating high and low spots on the surface. Watch out for a tutorial on bump mapping, coming soon. The Texture Operation Unit Before Direct3D 6 the pipeline stages determined the texel color and blended this color with the color of the primitive interpolated from the vertices (multipass texturing). From Direct3D 6 up to 8 texture operation units can be cascaded together to apply multiple textures to a common primitive in a single pass (multitexturing). The results of each stage carry over to the next one, and the result of the final stage is rasterized on the polygon. This process is called "texture blending cascade". Each texture operation unit has six associated render states, which control the flow of pixels through the unit, as well as additional render states associated with filtering, clamping and so on. Three of the render states in each texture operation unit are associated with RGB (color), and another three are associated with alpha. Most 3D hardware will only support applying two textures at the same time to a common primitive. Newer hardware handles three texture operations at once, but a lot of existing 3D hardware won't support multitexturing at all. The demo for this tutorial won't run on these cards (you'll see an informative message box :-) ). You can find the best article on multitexturing in DirectX in Game Developer September 1998, page 33 ff. from Mitchell, Tatro and Bullard. The online version can be found at Gamasutra. They have developed a tool to visualize the texture operations ... try it. NVIDIA has also developed a tool to visualize the texture operations. Another intersting article is "Multipass Rendering and the Magic of Alpha Rendering" by Brian Hook. You can find it in Game Developer August 1997, page 12 ff. The book Real-time Rendering from Thomas Möller and Eric Haines gives you an overview on texturing methods. You will find a perfect explanation of the Direct3D IM texturing methods in 3D Game Programming with C++ from John De Goes. Another good book comes from Peter Kovach: Inside Direct3D from Microsoft Press. In addition, I've found the examples from ATI very interesting. They're using the Direct3D 7 framework to show a few nice effects. Multitexturing Support First, you have to check your 3D hardware's multitexturing support in the framework call ConfirmDevice(): ... HRESULT CMyD3DApplication::ConfirmDevice(DDCAPS* pddDriverCaps, D3DDEVICEDESC7* d3dDeviceDesc ) { // Accept devices that really support multiple textures.; return E_FAIL; } ... The following examples are not optimized in any way. They are for instructional purposes only. Dark Mapping The D3DTSS_COLORx render states control the flow of an RGB vector, while the D3DTSS_ALPHAx render states govern the flow of the scalar alpha through parallel segments of the pixel pipeline. In Render() we use: ... //OP, D3DTOP_SELECTARG1); //); ...In Detail: ... // first texture operation unit //); // Use this texture stage's first color unmodified, as the output. m_pd3dDevice->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_SELECTARG1); // second texture operation unit // Associate texture with the second texture stage m_pd3dDevice->SetTexture(1, D3DTextr_GetSurface("env0.bmp")); // Set the first color argument to the texture associated with this stage m_pd3dDevice->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE ); // Set the second color argument to the output of the last texture stage m_pd3dDevice->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT); // Set the color operation to the multiplication operation m_pd3dDevice->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_MODULATE); ...This code combines the following textures: This kind of multitexturing is called dark mapping, because the resulting texel is a darker version of the unlit texel of the primary map. This technique is used a lot in 3D shooters. You can see it in GLQuake1. Nothing interesting is being done with the alpha channel of the pipeline in this case. There are no D3DTSS_ALPHAx render state calls. For RGB color, the render states D3DTSS_COLORARG1 and D3DTSS_COLORARG2 control arguments, while D3DTSS_COLORUP controls the operation on the arguments. The first texture operation unit passes the data from texture 0 to the next stage. The control argument D3DTA_TEXTURE means: The texture argument is the texture color for this texture stage. The second texture operation unit receives these texels via Arg2. It modulates (D3DTOP_MODULATE) the texels from texture 0 with the texels from texture 1, which were received via Arg1. There are two other modulation operations: <table border="0" cellpadding="4" cellspacing="1" width="100%"><tbody><tr style="background: none repeat scroll 0% 0% rgb(136, 136, 136);"><td><span style="color: white;">D3DTOP_MODULATE</span></td><td><span style="color: white;">Multiply the components of the arguments together.</span></td></tr><tr style="background: none repeat scroll 0% 0% rgb(102, 102, 153);"> <td><span style="color: white;">D3DTOP_MODULATE2X</span></td><td><span style="color: white;">Multiply the components of the arguments, and shift the products to the left 1 bit (effectively multiplying them by 2) for brightening.</span></td></tr><tr style="background: none repeat scroll 0% 0% rgb(136, 136, 136);"> <td><span style="color: white;">D3DTOP_MODULATE4X</span></td><td><span style="color: white;">Multiply the components of the arguments, and shift the products to the left 2 bits (effectively multiplying them by 4) for brightening.</span></td></tr></tbody></table>The D3DTEXTUREOP structure shows the possible per-stage texture-blending operations. Just take a look at your DirectX 7 documentation. The default value for the first texture stage (stage 0) is D3DTOP_MODULATE, and for all other stages the default is D3DTOP_DISABLE. The IDirect3DDevice7::SetTexture method assigns a texture to a given stage for a device. The first parameter must be a number in the range of 0-7 inclusive. Pass the texture interface pointer as the second parameter. This IDirectDraw7::CreateSurface which is called in the framework by D3Dtextr_CreateTextureFromFile() method. I've made an animated sample with the three modulation types: Software devices do not support assigning a texture to more than one texture stage at a time. ... // animate darkmap if (i < 40) { m_pd3dDevice->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_MODULATE); } else if (i < 80) { m_pd3dDevice->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_MODULATE2X); } else if (i < 120) { m_pd3dDevice->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_MODULATE4X); } else if (i = 120) { i = 0; } i++; ... Blending a texture with diffuse color lighting Sometimes the sun shines so bright that the colors on things get brighter. You can imitate that effect by blending the texture with diffuse color lighting: ... //_ADD); ...In Detail: ... //); // Set the second color argument to diffuse lighting information m_pd3dDevice->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE); // Set the color operation to the addition mode m_pd3dDevice->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_ADD); ...Darkmap blended with diffuse color lighting Now we're combining the two effects: ..._ADDSIGNED); //); m_pd3dDevice->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_MODULATE); ...I think this is self-explaining for an old multitexturing freak like you. You should play around with D3DTOP_ADDSIGNED, D3DTOP_SUBTRACT and D3DTOP_ADDSIGNED2X as the color operation of the first texture stage. My favorites are D3DTOP_ADDSIGNED and D3DTOP_SUBTRACT. The latter is really cool. The Alpha operations As Direct3D renders a scene, it can integrate color information from several sources: vertex color, the current material, texture map and the color previously written to the render target. It can blend several of these colors. A factor called alpha, which could be stored in vertices, materials and texture maps, can be used to indicate how blending should be weighted. An alpha value of 0 means full transparency, an alpha value of 1 means some level of semitransparency. Modulate Alpha They looked like modulating a green ambient light with the textures by alpha. For this example, I switched from a directional light to a green ambient light. This light is modulated with the texture colour: #if TM != MODULATEALPHA // Set up the light if (m_pDeviceInfo->ddDeviceDesc.dwVertexProcessingCaps & D3DVTXPCAPS_DIRECTIONALLIGHTS) { D3DLIGHT7 light; D3DUtil_InitLight(light, D3DLIGHT_DIRECTIONAL, 0.0f, -0.4f, 1.0f ); m_pd3dDevice->SetLight(0, &light ); m_pd3dDevice->LightEnable(0, TRUE ); m_pd3dDevice->SetRenderState(D3DRENDERSTATE_LIGHTING, TRUE ); } #else // Set the ambient light. D3DCOLOR d3dclrAmbientLightColor = D3DRGBA(0.0f,1.0f,0.0f,1.0f); m_pd3dDevice->SetRenderState(D3DRENDERSTATE_AMBIENT, d3dclrAmbientLightColor); #endifTo modulate the ambient color with the texture: #elif TM == MODULATEALPHA_MODULATE); m_pd3dDevice->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE); m_pd3dDevice->SetTextureStageState(0, D3DTSS_ALPHAARG2, D3DTA_DIFFUSE); m_pd3dDevice->SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_MODULATE); ... Alpha blending with the Frame Buffer Direct3D uses the following formula to compute the final color for each pixel in the rendered primitive: FinalColor = SourcePixelColor * SourceBlendFactor + DestPixelColor * DestBlendFactorIt lets you change the SourceBlendFactor and DestBlendFactor flags to generate the effect you want. I set the following RenderStates in InitDeviceObjects():<br clear="all"> ... #if TM == BLENDWFRAME m_pd3dDevice->SetRenderState (D3DRENDERSTATE_ALPHABLENDENABLE, TRUE); // Set the source blend state. m_pd3dDevice->SetRenderState(D3DRENDERSTATE_SRCBLEND, D3DBLEND_SRCCOLOR); // Set the destination blend state. m_pd3dDevice->SetRenderState(D3DRENDERSTATE_DESTBLEND, D3DBLEND_INVSRCCOLOR); #endif ...As a result of the calls in the preceding code fragment, Direct3D performs a linear blend between the source color (the color of the primitive being rendered at the current location) and the destination color (the color at the current location in the frame buffer). This gives an appearance similar to tinted glass. Some of the color of the destination object seems to be transmitted through the source object. The rest of it appears to be absorbed. Alpha blending requires a fair bit of extra math and memory access, so turning it on and off with ALPHABLENDENABLE is worth the effort. In the Render() method, there's only a call to set the texture: ... #elif TM == BLENDWFRAME m_pd3dDevice->SetTexture(0, D3DTextr_GetSurface("wall.bmp")); #endif ...I hope you enjoyed our small trip into the word of the Direct3D 7 IM Framework. If you disliked or liked it, give me a sign at [email protected].
http://www.gamedev.net/page/resources/_/technical/directx-and-xna/direct3d-7-immediate-mode-framework-programming-r1028
CC-MAIN-2015-14
en
refinedweb
24 February 2009 09:19 [Source: ICIS news] By Salmon Aidan Lee SINGAPORE (ICIS news)--The rebound in Asian purified terephthalic acid (PTA) prices may not be sustained as the downstream polyester and textile markets in China have remained very weak, buyers and sellers said on Tuesday. Renewed buying interests pushed up PTA spot prices to $730-740/tonne (€577-585/tonne) CFR (cost and freight) China on Monday from $720-725/tonne CFR China last Friday, after falling as much as $60/tonne in the week of 16 February. "We are generally optimistic that prices can recover and we are holding back our offers for March for now," said a trader with Shanghai-based BCF Trading "We can make some profit with PTA at [$720/tonne CFR China], so we bought some at that level," said a source from Zhenhui Polyester, a filament yarn maker based in Taicang in eastern China. But downstream markets continued to reel under mounting stocks and poor sales, which will likely constrain the profitability of polyester makers and put a cap on PTA’s rebound, some market participants warned. "The textile industry is doing badly, spinners and weavers are facing stocks as high as a month, so that is a major problem," said a source from Tian Sheng Group, which owns polyester, spinning and dyeing facilities in eastern China. Like Tian Sheng, most spinners and weavers preferred not to keep stocks beyond two weeks, otherwise they would be forced to sell at an extremely low price, industry sources said. "This year, the only reason I am still surviving with so much stocks is because the credit controls had recently been relaxed, and the banks are not chasing me for money," said a source from Jiang Qiao Spinning and Weaving, which owns 200 spindles in eastern China. Some polyester producers already experiencing slow business fear they will soon be unable to transfer their feedstock costs to the customers, producers said. Sales against daily output fell as low as 20% last week in key production areas such as Xiaoshan and Shaoxing in eastern ?xml:namespace> "We cannot move our products… some [competitors] had dropped prices in order to sell part of their inventories, but we are still holding on," said a source from Rong Sheng Polyester, a major producer of filament yarns and chips in Xiaoshan. Prices of most grades of polyester fell by yuan (CNY)100-300/tonne ($15-44/tonne) last week, and fell a further CNY50-100/tonne on Monday, according to ICIS pricing. Cash-strapped polyester makers in China were producing about 75% of nameplate capacity, refusing to lower production to demonstrate their creditworthiness to banks. Some market watchers also pointed to additional PTA capacities coming on stream, as well as the desire of most PTA makers to resume full operations, which could exert downward pressure on prices. "We will definitely like to produce more, but we cannot deny that (shortage of feedstock) paraxylene (PX) is an issue," said a source from Sinopec, an integrated producer and owner of Yangzi Petrochemical in eastern China. "Now, it’s because PX is not enough, that we see less PTA; if most PX producers resume full operating rates by March, we may see more PTA coming out as well," said a source from Hualian Sunshine Petrochemical, a Chinese PTA supplier. Another trader with Macau-based Winsway Trading said PTA sellers could be caught in the centre if the PX and PTA supply increased amidst collapsing demand for the PTA. ($1 = €0.79 , $1 = CNY6.84) For more on PX
http://www.icis.com/Articles/2009/02/24/9195122/poor-downstream-markets-may-cap-asia-pta-rebound.html
CC-MAIN-2015-14
en
refinedweb
There is a parameter roll in UI [Edit Current Viewpoint]. It means rotating the camera around its front-to-back axis. A positive value rotates the camera counterclockwise, and a negative value rotates it clockwise. In API perspective, a rotation around world axes (WCS) is configured by Viewpoint.Rotation (Rotation3D) which is in 3D space defined as a quaternion. From quaternion, it can also tell something like roll, yaw, pitch. One post kindly provides the mathematical equations: These are defined in aircraft principal axes. In Navisworks space, when the up vector is Y+, right vector is X+, and view direction is Z-, the roll can be calculated from quaternion (Viewpoint.Rotation) by the equations above. However, in other cases when the up vector is different, roll in UI means what it indicates: rotating the camera around its front-to-back axis. Unfortunately, I do not find any API which tells roll in UI. While in math, once we know the base up vector, current up vector, we can calculate the roll ourselves. - Viewpoint.WorldUpVector: initial base up vector when user [Set Viewpoint Up] - Viewpoint.GetCamera(): a json string which contains many information such as - current up vector - current view direction (reversed vector of forward vector) since view direction will keep the same when setting roll, the roll value will be the angle from current right to the base right (aligned with initial up vector). The code below prints out the roll in aircraft principal axes and the roll in Navisworks UI. [Serializable] public class cameraInfoClass { public double[] UpDirection { get; set; } public double[] WorldRightDirection { get; set; } public double[] ViewDirection { get; set; } } public override int Execute(params string[] parameters) { //current viewpoint Viewpoint oViewpoint = Autodesk.Navisworks.Api.Application.ActiveDocument.CurrentViewpoint; //current world up vector. The vector defined by user. //Same to UI >> [Edit Viewpoint] >> UnitVector3D worldUpVec = null; if (oViewpoint.HasWorldUpVector) worldUpVec = oViewpoint.WorldUpVector; else return 0; //A rotation in 3D space defined as a quaternion. Rotation3D rotation = oViewpoint.Rotation; //***** //Aircraft principal axes: X+: right, Y+: up, Z-: view direction // roll: around Z- // pitch: around X+ // yaw: around X+ //from //roll = Mathf.Atan2(2*y*w - 2*x*z, 1 - 2*y*y - 2*z*z); //pitch = Mathf.Atan2(2 * x * w - 2 * y * z, 1 - 2 * x * x - 2 * z * z); //yaw = Mathf.Asin(2 * x * y + 2 * z * w); double aircraft_roll = Math.Atan2((2 * rotation.B * rotation.D) - (2 * rotation.A * rotation.C), 1 - (2 * rotation.B * rotation.B) - (2 * rotation.C * rotation.C)); double aircraft_roll_degree = aircraft_roll * 180 / Math.PI; double aircraft_pitch = Math.Atan2((2 * rotation.A * rotation.D) - (2 * rotation.B * rotation.C), 1 - (2 * rotation.A * rotation.A) - (2 * rotation.C * rotation.C)); double aircraft_pitch_degree = aircraft_pitch * 180 / Math.PI; double aircraft_yaw = Math.Asin((2 * rotation.A * rotation.B) + (2 * rotation.C * rotation.D)); double aircraft_yaw_degree = aircraft_yaw * 180 / Math.PI; //***** //get camera parameters which contains more data we need to calculate roll of Navisworks UI string cameraStr = oViewpoint.GetCamera(); cameraInfoClass cameraStrJson = JsonConvert.DeserializeObject(cameraStr); //current up vector Vector3D currentUpVec = new Vector3D(cameraStrJson.UpDirection[0], cameraStrJson.UpDirection[1], cameraStrJson.UpDirection[2]); Vector3D currentViewDir = new Vector3D(-cameraStrJson.ViewDirection[0], -cameraStrJson.ViewDirection[1], -cameraStrJson.ViewDirection[2]); //current right vector Vector3D currentRightVec = currentUpVec.Cross(currentViewDir); //current world right vector is when the viewpoint //is aligned with an intial up vector.Initially, roll of UI is 0 Vector3D currentWorldRightVec = worldUpVec.Cross(currentViewDir); //get roll of UI in degree double UI_roll_degree = currentRightVec.Angle(currentWorldRightVec) * 180 / Math.PI; MessageBox.Show("aircraft_roll:" + aircraft_roll + "\naircraft_pitch:" + aircraft_pitch_degree + "\naircraft_yaw:" + aircraft_yaw_degree + "\nUI roll:" + UI_roll_degree); return 0; }
https://adndevblog.typepad.com/aec/2019/07/get-roll-value-of-edit-current-viewpoint.html
CC-MAIN-2022-27
en
refinedweb
jupyter_wire 0.1.3 Jupyter kernel written in D To use this package, run the following command in your project's root directory: Manual usage Put the following dependency into your project's dependences section: jupyter-wire An implementation of the Jupyter wire protocol in D. This depends on ZeroMQ so the relevant development library (zmq.dll, libzmq.so, ...) will have to be installed on the system for this to link. This library was written so that any backend written in or callable by D can be a jupyter kernel. A backend must be a D type that satisfies the following compile-time interface: import jupyter.wire.kernel: LanguageInfo, ExecutionResult; LanguageInfo info = T.init.languageInfo; ExecutionResult result = T.init.execute("foo"); For a backend type that doesn't require initialisation, the following code is sufficient: struct MyBackend { enum languageInfo = LanguageInfo(/*...*/); ExecutionResult execute(in string code) { // ... } } import jupyter.wire.kernel: Main; mixin Main!MyBackend; Otherwise, initialise as necessary and call Kernel.run: import jupyter.wire.kernel: kernel; auto k = kernel(backend, connectionFileName); k.run; Examples - basic example shows a minimal kernel that responds to requests from the frontend. - widgets example shows how to can communicate with ipywidgets. Windows Set the environment variables ZMQ_DIR_32 and/or ZMQ_DIR_64 for where to find the zmq.lib when building. Remember to copy the revelant .dll to the executable path. Text and markdown output To return text output, use the textResult helper function: return textResult("this is the output", Stdout("This is stdout side-effect")); Similarly for markdown output: return markdownResult("# This is a header"); In both cases the Stdout parameter is optional. - Registered by Symmetry Investments - 0.1.3 released a year ago - kaleidicassociates/jupyter-wire - boost - Authors: - - Dependencies: - zmqd, asdf - Versions: - Show all 15 versions - Download Stats: 0 downloads today 2 downloads this week 224 downloads this month 67005 downloads total - Score: - 3.2 - Short URL: - jupyter_wire.dub.pm
https://code.dlang.org/packages/jupyter_wire
CC-MAIN-2022-27
en
refinedweb
README Introduction¶ In this directory we publish simple tools to analyze backward reference distance distributions in LZ77 compression. We developed these tools to be able to make more efficient encoding of distances in large-window brotli. In large-window compression the average cost of a backward reference distance is higher, and this may allow for more advanced encoding strategies, such as delta coding or an increase in context size, to bring significant compression density improvements. Our tools visualize the backward references as histogram images, i.e., one pixel in the image shows how many distances of a certain range exist at a certain locality in the data. The human visual system is excellent at pattern detection, so we tried to roughly identify patterns visually before going into more quantitative analysis. These tools can turn out to be useful in development of other LZ77-based compressors and we hope you try them out. Tools¶ find_opt_references¶ This tool generates optimal (match-length-wise) backward references for every position in the input files and stores them in *.dist file described below. Example usage: find_opt_references input.txt output.dist draw_histogram¶ This tool generates a visualization of the distribution of backward references stored in *.dist file. The original file size has to be specified as a second parameter. The output is a grayscale PGM (binary) image. Example usage: draw_histogram input.dist 65536 output.pgm Here's an example of resulting image: draw_diff¶ This tool generates a diff PPM (binary) image between two input 8-bit PGM (binary) images. Input images must be of same size. Useful for comparing different backward references distributions for same input file. Normally used for comparison of output images from draw_histogram tool. Example usage: draw_diff image1.pgm image2.pgm diff.ppm For example the diff of this image and this image looks like this: Backward distance file format¶ The format of *.dist files is as follows: [[ 0| match length][ 1|position|distance]...] [1 byte| 4 bytes][1 byte| 4 bytes| 4 bytes] More verbose explanation: for each backward reference there is a position-distance pair, also a copy length may be specified. Copy length is prefixed with flag byte 0, position-distance pair is prefixed with flag byte 1. Each number is a 32-bit integer. Copy length always comes before position-distance pair. Standalone copy length is allowed, in this case it is ignored. Here's an example of how to read from *.dist file: #include "read_dist.h" FILE* f; int copy, pos, dist; while (ReadBackwardReference(fin, ©, &pos, &dist)) { ... }
https://microsoft.github.io/mu/dyn/mu_basecore/MdeModulePkg/Library/BrotliCustomDecompressLib/brotli/research/
CC-MAIN-2022-27
en
refinedweb
#include <disclaimer.h> /* I am not now, nor have I ever been, a literary critic. Nor have I been a student of German literature. I am just a guy that reads. So if it seems like my review is shallow, incomplete, addle-brained, or insipid, node your own review. Thank you. */ Schlink writes in this novel of the love affair of fifteen-year-old Michael Berg and thirty-six-year-old Hanna Schmitz — its origins, its conclusion, and its consequences in Berg's life. Actually, to say it is the story of an affair between an adult woman and a minor is to unjustly simplify it. Into the book Schlink weaves the themes of German war guilt; unbalanced power relationships between lovers; duty and justice; love, loss, and betrayal; and many more. On top of everything else, the narrative is delightful, the writing crisp, and the pace perfect. I am unsure whether the story is in any way autobiographical, but the author succeeds in telling a thoughtful, personal tale about coming to terms with a relationship rooted in inequality and deception. Hanna is often cruel and cold to Michael, and he tortures himself in any number of ways to keep in her good graces. Their relationship begins as a sexual one. Being a young man of fifteen, he is understandably mesmerized by the steady supply of sex he gets. As their relationship develops, they spend progressively more time together. One of their favorite non-sexual activities is Michael reading to her (thus, the title). He confuses his feelings of sexual gratification and intimacy with love for Hanna, but for Hanna, one gets the sense that the relationship is quite different. To her, Michael is a plaything. She is the one making all the terms in the relationship from the beginning. She welcomes him into her arms when she wants him, and pushes him away when she doesn't. This pushing and pulling has a corresponding depressing and elating effect on Michael's emotional state. They fight and make up, and the fights seem to center around the abnormal needs each brings to the relationship. There is an episode where Michael leaves to get breakfast after spending the night. When she awakens and misses him, she is deprived of her plaything and throws a tantrum when he returns. For his part, when she rages against him and denies herself, he feels abandoned, hurt, and lost, like a child torn from his mother, to such a degree that one cannot help but to think of Freud's Oedipus complex. The relationship continues to have ups and downs, and eventually, she leaves him without so much as a note of farewell. Naturally, he is devastated by this abandonment. At the same time, he has been feeling that he has betrayed her by continually denying and concealing their relationship to friends and family. He projects this guilt, in the Freudian sense, onto Hanna by telling himself she left because of him and his public rejection of her. These emotional wounds follow him into manhood and affect his adult relationships. The book quickly moves from adolescence to young adulthood where Michael is a law student. At this stage of his life, years and miles removed from Hanna, Michael learns some of the secrets Hanna concealed from him. I will not reveal what these secrets are as not to spoil the book, but let it suffice to say that it more directly hints at Hanna's motivations for her interest in Michael. Michael deduces that the reason Hanna left was not because of him at all, but rather she left because these dark secrets were at last catching up with her. Time passes, and Michael marries and has a child, but since he still carries this baggage from his earlier relationship with Hanna, the marriage does not last. He finds that he cannot help but compare his wife to Hanna. Everything his wife does, he says, is wrong because the standard he judges her against is the unattainable Hanna. Despairing over his failed marriage, he initiates contact with Hanna once again. The form this communication takes is taped recordings of him reading books to her. Doing this brings him satisfaction and a way to salve his wounds. Her responses are brief and not very encouraging at first, but he continues to send her tapes for ten years. In the end, Hanna never finds peace. She cannot escape the demons tormenting her even though she satisfies her obligations to society. Nor can Michael give up on her, try as he might. Michael must ultimately face living in a world without Hanna, and the book does not tell how he fares. Without her, he must find some other way to define himself, and we are left with his questions vexing us as much as they vex him. I am leaving a lot out, as you can probably tell. You can also guess that I am dancing around a lot of plot points. I don't want to give anything away. I found this such an engrossing book that I would have choked someone who gave away its secrets before I had finished. I'm still thinking about it. By all means, read it. If you have read it, and want to talk about it, I would love to hear from you. Email me or /msg me. Log in or register to write something here or to contact authors. Need help? [email protected]
https://m.everything2.com/title/the+reader
CC-MAIN-2022-27
en
refinedweb
Learn how to configure the Kendo UI Grid Component to easily consume data through GraphQL queries and mutations. By now you have almost certainly heard of GraphQL, the runtime query language for APIs. With rapidly growing popularity, it is becoming an increasingly adopted standard for API development. This has generated demand for frameworks and UI tools that can easily consume the data from a GraphQL API – just like the Kendo UI components can do, as we provide seamless integration through the DataSource component. This blog provides a comprehensive guide on how to setup the Kendo UI jQuery Grid component to perform CRUD operations through GraphQL queries and mutations. It includes a sample application, code snippets and documentation resources to get you up and running with Kendo UI and GraphQL. The backbone of every data-driven Kendo UI Component is the DataSource. It possesses the agility to adapt to any data/service scenario and fully supports CRUD data operations. It is also highly configurable and provides tons of features for fine-tuning its behavior. This is also the main reason why consuming a GraphQL API is so easy with the DataSource abstraction. First things first – we need to create a GraphQL service that can receive queries and mutations to validate and execute: First, start by defining a type which describes the possible data that can be queried on the service: import { GraphQLObjectType, GraphQLString, GraphQLID, GraphQLFloat, } from 'graphql'; module.exports = new GraphQLObjectType({ name: 'Product', fields: () => ({ ProductID: { type: GraphQLID }, ProductName: { type: GraphQLString }, UnitPrice: { type: GraphQLFloat }, UnitsInStock: { type: GraphQLFloat } }) }); Next, create queries for fetching the data and mutations for modifying the data server-side: const RootQuery = new GraphQLObjectType({ name: 'RootQueryType', fields: { products: { type: new GraphQLList(Product), resolve(parent, args) { return products; } } } }); const Mutation = new GraphQLObjectType({ name: 'Mutation', fields: { AddProduct: { type: Product, args: { ProductName: { type: new GraphQLNonNull(GraphQLString) }, UnitPrice: { type: new GraphQLNonNull(GraphQLFloat) }, UnitsInStock: { type: new GraphQLNonNull(GraphQLFloat) } }, resolve(parent, args) { let newProduct = { ProductID: uuidv1(), ProductName: args.ProductName, UnitsInStock: args.UnitsInStock, UnitPrice: args.UnitPrice } products.unshift(newProduct); return newProduct; } }, UpdateProduct: { type: Product, args: { ProductID: { type: new GraphQLNonNull(GraphQLID) }, ProductName: { type: new GraphQLNonNull(GraphQLString) }, UnitPrice: { type: new GraphQLNonNull(GraphQLFloat) }, UnitsInStock: { type: new GraphQLNonNull(GraphQLFloat) } }, resolve(parent, args) { let index = products.findIndex(product => product.ProductID == args.ProductID); let product = products[index]; product.ProductName = args.ProductName; product.UnitsInStock = args.UnitsInStock; product.UnitPrice = args.UnitPrice; return product; } }, DeleteProduct: { type: Product, args: { ProductID: { type: new GraphQLNonNull(GraphQLID) } }, resolve(parent, args) { let index = products.findIndex(product => product.ProductID == args.ProductID); products.splice(index, 1); return { ProductID: args.ProductID }; } } } }); module.exports = new GraphQLSchema({ query: RootQuery, mutation: Mutation }); Then, serve the GraphQL service over HTTP via a single endpoint which expresses the full set of its capabilities: import express from 'express'; import cors from 'cors'; import graphqlHTTP from 'express-graphql'; import schema from './schema'; import { createServer } from 'http'; const PORT = 3021; var app = express(); app.use(cors()); app.use('/graphql', graphqlHTTP({ schema })); const server = createServer(app); server.listen(PORT, () => { console.log(`API Server is now running on{PORT}/graphql`) }); For additional information on how to setup the server, required packages and the full GraphQL schema, refer to the source code of the sample application. To be able to use the Kendo UI jQuery Grid component, just reference the required client-side resources of the framework: <link rel="stylesheet" href="" /> <script src=""></script> <script src=""></script> Once you have the right setup, adding the Grid is as simple as placing a container element on the page and then initializing the widget with JavaScript: <div id="content"> <div id="grid"></div> </div> <script> $(document).ready(function () { $("#grid").kendoGrid({ dataSource: dataSource, height: 550, groupable: true, sortable: true, pageable: true, toolbar: ["create"], editable: "inline", columns: [ { field: "ProductID", title: "Product ID" }, { field: "ProductName", title: "Product Name" }, { field: "UnitPrice", title: "Unit Price" }, { field: "UnitsInStock", title: "Units in stock" }, { command: ["edit", "destroy"], title: "Options ", width: "250px" } ] }); }); </script> The wealth of configuration options that the DataSource offers allows you to easily integrate it to work with a GraphQL API and bind the Grid Component to it. GraphQL is all about asking for specific fields on objects. To populate the Grid initially with records, we need to issue a query against the API to return the object types: <script> var READ_PRODUCTS_QUERY = "query { products { ProductID, ProductName, UnitPrice, UnitsInStock } }"; </script> And then create the mutations for adding, updating and deleting the object type: <script> var ADD_PRODUCT_QUERY = "mutation AddProduct($ProductName: String!, $UnitPrice: Float!, $UnitsInStock: Float!){" + "AddProduct(ProductName: $ProductName, UnitPrice: $UnitPrice, UnitsInStock: $UnitsInStock ){" + "ProductID,"+ "ProductName,"+ "UnitPrice,"+ "UnitsInStock"+ "}"+ "}"; var UPDATE_PRODUCT_QUERY = "mutation UpdateProduct($ProductID: ID!, $ProductName: String! ,$UnitPrice: Float!, $UnitsInStock: Float!){" + "UpdateProduct(ProductID: $ProductID, ProductName: $ProductName, UnitPrice: $UnitPrice, UnitsInStock: $UnitsInStock){" + "ProductID," + "ProductName," + "UnitPrice," + "UnitsInStock" + "}" + "}"; var DELETE_PRODUCT_QUERY = "mutation DeleteProduct($ProductID: ID!){" + "DeleteProduct(ProductID: $ProductID){" + "ProductID" + "}" + "}"; </script> To request or modify data through a GraphQL query or mutation, all you have to do is to configure the transport.read method of the DataSource: var dataSource = new kendo.data.DataSource({ pageSize: 20, transport: { read: { contentType: "application/json", url: "", type: "POST", data: function () { return { query: READ_PRODUCTS_QUERY }; } }, update: { contentType: "application/json", url: "", type: "POST", data: function (model) { return { query: UPDATE_PRODUCT_QUERY, variables: model }; } }, destroy: { contentType: "application/json", url: "", type: "POST", data: function (model) { return { query: DELETE_PRODUCT_QUERY, variables: model }; } }, create: { contentType: "application/json", url: "", type: "POST", data: function (model) { return { query: ADD_PRODUCT_QUERY, variables: model }; } }, parameterMap: function (options, operation) { return kendo.stringify(options); } }, schema: { data: function (response) { var data = response.data; if (data.products) { return data.products; } else if (data.AddProduct) { return data.AddProduct; } else if (data.UpdateProduct) { return data.UpdateProduct; } else if (data.DeleteProduct) { return data.DeleteProduct; } }, model: { id: "ProductID", fields: { ProductID: { type: "string", editable: false }, ProductName: { type: "string" }, UnitPrice: { type: "number" }, UnitsInStock: { type: "number" } } } } }); Beyond configuring the transports, there are also other features of the DataSource, like the parameterMap() and the schema options, that are useful for encoding the request parameters and parsing the API response: There you go – by simply setting up the DataSource we got the Grid up and running with a GraphQL API. From here on, you can start exploring the vast options of the Grid and also take advantage of the other 70+ ready-to-use Kendo UI components and easily bind them to a GraphQL service. Get started right away by downloading a free trial of Kendo UI. Dimitar is a Technical Support Engineer at Progress. He is passionate about programming and experimenting with new technologies. In his spare time, he enjoys traveling, hiking and photography.
https://www.telerik.com/blogs/how-to-bind-the-kendo-ui-grid-to-a-graphql-api
CC-MAIN-2022-27
en
refinedweb
#include <CGAL/QP_models.h> An object of class Linear_program_from_iterators describes a linear program of the form. \begin{eqnarray*} \mbox{(QP)}& \mbox{minimize} & . Example QP_solver/first_l LinearProgram Quadratic_program<NT> Quadratic_program_from_mps<NT> constructs lp from given random-access iterators and the constant c0. The passed iterators are merely stored, no copying of the program data takes place. How these iterators are supposed to encode the linear program is described in LinearProgram.
https://doc.cgal.org/5.1.3/QP_solver/classCGAL_1_1Linear__program__from__iterators.html
CC-MAIN-2022-27
en
refinedweb
Optimistic. Optimistic locking relies on the idea that data remains unmodified while it is away from the server. As a simple example, consider how you update a customer details object. The customer details are stored within an object, and if a client application needs to update them, it first needs to get the object from the space. The data is not locked, and other client applications can have access to it simultaneously, thus ensuring a scalable system. The problem is that while the customer details object is away from the space server, it may become stale. For example, a second client application can request the same customer details, update them, and commit them back to the space. The first client, unaware that it is dealing with a stale copy of the data, modifies and commits the data. Obviously, with no version checking mechanism to detect this conflict, the first client’s application changes, which commit last, are made permanent, thus overwriting the changes made by the second client application. For optimistic locking to work effectively, you must be able to detect these update-update conflicts, and to make the client aware of them, so they can be dealt with appropriately. GigaSpaces optimistic locking protocol: - Is best suited for environments with many read-only transactions, few read-update transactions, and a relatively low volume of objects that are changed. - Is more suitable for real-time systems than pessimistic locking, because the space runs best with short term transactions. - Has a big advantage when you want to read a large number of objects, but update only a few of them - or when it is unlikely that objects you want to work with are updated by other users. - Ensures that updated objects are the most recent ones, while improving the coherency of system behavior. Using the Optimistic Locking Protocol Here are the steps you should execute to update data, using the optimistic locking protocol: Step 1 – Get a Space Proxy in Versioned Mode Get a space proxy in versioned mode. This can be done using one of the options listed below. You may get remote or embedded space proxies. Make sure the proxy is in optimistic locking mode using the ( versioned) option. This can be done using one of the options listed below: public void createVersionedSpace() { // Create the SpaceProxy to Embedded Space ISpaceProxy spaceProxy = new EmbeddedSpaceFactory("mySpace").Create(); spaceProxy.OptimisticLocking = true; } Step 2 – Enable the Space Class to Support Optimistic Locking You should enable the Space class to support the optimistic locking protocol, by including the [SpaceVersion] decoration on an int getter field. This field stores the current object version and is maintained by XAP. See below for an example: [SpaceClass] public class Account { [SpaceID] [SpaceRouting] private long? Id { set; get; } private String Number{ set; get; } private double? Receipts{ set; get; } private double? FeeAmount{ set; get; } private Nullable<EAccountStatus> Status{ set; get; } [SpaceVersion] private int Version{ set; get; } // ...... } Step 3 – Read Objects without using a Transaction Read objects from the space without using a transaction. You may use the ReadMultiple method to get several objects in one call. Reading objects without using a transaction, allows multiple users to get the same objects at the same time, and allows them to be updated using the optimistic locking protocol. If objects are read using a transaction, no other user can update the objects until the object is committed or rolled back. Step 4 – Modify and Update the Objects Modify the objects you read from the space and call a Write space operation to update the object within the space. Use a transactional with your write operation. You must use a transaction when you update multiple space objects in the same context. When the write operation is called to update the object, the space does the following: - for each updated object, the Version ID in the updated object is compared with the Version ID of the corresponding object within the space. This is done at the space side. - if the Version ID of the updated object is the same as the Version ID of the corresponding object within the space, it is incremented by 1, and the object is updated within the space successfully. - if the Version ID of the updated object is different than the Version ID of the corresponding object within the space, the object is not updated within the space - i.e. the operation fails. In this case, a SpaceOptimisticLockingFailureExceptionis thrown. It is recommended that you call the update operation just before the commit operation. This minimizes the time the object is locked under a transaction. Step 5 – Update Failure If you use optimistic locking and your update operation fails, an Exception is thrown. This exception is thrown when you try to write an object whose version ID value does not match the version of the existing object within the space - i.e. you are not using the latest version of the object. You can either roll back or refresh the failed object and try updating it again. This means you should repeat steps 3 and 4 - read the latest committed object from the space, back to the client side and perform the update again. For a fast refresh, you may re-read the object using ReadByID method. Make sure you also provide the SpaceRouting value. Step 6 – Commit or Rollback Changes At any time, you can commit or rollback the transaction. If you are using Spring automatic transaction demarcation, the commit is called implicitly once the method that started the transaction is completed. By following the above procedure, you get a shorter locking duration, that improves performance and concurrency of access among multiple users to the space object. The object version ID validation that is performed on Write, Take, and WriteMultiple requests, keeps the data within the space consistent. Scenario Example Suppose that you have two applications, Application_1 and Application_2, which are both working with the same Object A. The following sequence of events describes a simple optimistic locking scenario.
https://docs.gigaspaces.com/xap/10.2/dev-dotnet/transaction-optimistic-locking.html
CC-MAIN-2022-27
en
refinedweb
Investors in Wynn Resorts Ltd (Symbol: WYNN) saw new options begin trading this week, for the July 15th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the WYNN options chain for the new July 15th WYNN, that could represent an attractive alternative to paying $63.10.48% return on the cash commitment, or 55.27% annualized — at Stock Options Channel we call this the YieldBoost. Below is a chart showing the trailing twelve month trading history for Wynn Resorts Ltd, and highlighting in green where the $62.50 strike is located relative to that history: Turning to the calls side of the option chain, the call contract at the $65.00 strike price has a current bid of $4.80. If an investor was to purchase shares of WYNN stock at the current price level of $63.10/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $65.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 10.62% if the stock gets called away at the July 15th expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if WYNN shares really soar, which is why looking at the trailing twelve month trading history for Wynn Resorts Ltd, as well as studying the business fundamentals becomes important. Below is a chart showing WYNN's trailing twelve month trading history, with the $65.00 strike highlighted in red: .61% boost of extra return to the investor, or 49.58% annualized, which we refer to as the YieldBoost. The implied volatility in the put contract example, as well as the call contract example, are both approximately 59%. Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 253 trading day closing values as well as today's price of $63.10) to be 50%. For more put and call options contract ideas worth looking at, visit StockOptionsChannel.com. The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.
https://www.nasdaq.com/articles/interesting-wynn-put-and-call-options-for-july-15th
CC-MAIN-2022-27
en
refinedweb
AWS Compute Blog Token-based authentication for iOS applications with Amazon SNS This post is co-written by Karen Hong, Software Development Engineer, AWS Messaging. To use Amazon SNS to send mobile push notifications, you must provide a set of credentials for connecting to the supported push notification service (see prerequisites for push). For the Apple Push Notification service (APNs), SNS now supports using token-based authentication (.p8), in addition to the existing certificate-based method. You can now use a .p8 file to create or update a platform application resource through the SNS console or programmatically. You can publish messages (directly or from a topic) to platform application endpoints configured for token-based authentication. In this tutorial, you set up an example iOS application. You retrieve information from your Apple developer account and learn how to register a new signing key. Next, you use the SNS console to set up a platform application and a platform endpoint. Finally, you test the setup and watch a push notification arrive on your device. Advantages of token-based authentication Token-based authentication has several benefits compared to using certificates. The first is that you can use the same signing key from multiple provider servers (iOS,VoIP, and MacOS), and you can use one signing key to distribute notifications for all of your company’s application environments (sandbox, production). In contrast, a certificate is only associated with a particular subset of these channels. A pain point for customers using certificate-based authentication is the need to renew certificates annually, an inconvenient procedure which can lead to production issues when forgotten. Your signing key for token-based authentication, on the other hand, does not expire. Token-based authentication improves the security of your certificates. Unlike certificate-based authentication, the credential does not transfer. Hence, it is less likely to be compromised. You establish trust through encrypted tokens that are frequently regenerated. SNS manages the creation and management of these tokens. You configure APNs platform applications for use with both .p8 and .p12 certificates, but only 1 authentication method is active at any given time. Setting up your iOS application To use token-based authentication, you must set up your application. Prerequisites: An Apple developer account - Create a new XCode project. Select iOS as the platform and use the App template. - Select your Apple Developer Account team and your organization identifier. - Go to Signing & Capabilities and select + Capability. This step creates resources on your Apple Developer Account. - Add the Push Notification Capability. - In SNSPushDemoApp.swift, add the following code to print the device token and receive push notifications. import SwiftUI @main struct SNSPushDemoApp: App { @UIApplicationDelegateAdaptor private var appDelegate: AppDelegate var body: some Scene { WindowGroup { ContentView() } } } class AppDelegate: NSObject, UIApplicationDelegate, UNUserNotificationCenterDelegate { func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey : Any]? = nil) -> Bool { UNUserNotificationCenter.current().delegate = self return true } func application(_ application: UIApplication, didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) { let tokenParts = deviceToken.map { data in String(format: "%02.2hhx", data) } let token = tokenParts.joined() print("Device Token: \(token)") }; func application(_ application: UIApplication, didFailToRegisterForRemoteNotificationsWithError error: Error) { print(error.localizedDescription) } func userNotificationCenter(_ center: UNUserNotificationCenter, willPresent notification: UNNotification, withCompletionHandler completionHandler: @escaping (UNNotificationPresentationOptions) -> Void) { completionHandler([.banner, .badge, .sound]) } } - In ContentView.swift, add the code to request authorization for push notifications and register for notifications. import SwiftUI struct ContentView: View { init() { requestPushAuthorization(); } var body: some View { Button("Register") { registerForNotifications(); } } } struct ContentView_Previews: PreviewProvider { static var previews: some View { ContentView() } } func requestPushAuthorization() { UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .badge, .sound]) { success, error in if success { print("Push notifications allowed") } else if let error = error { print(error.localizedDescription) } } } func registerForNotifications() { UIApplication.shared.registerForRemoteNotifications() } - Build and run the app on an iPhone. The push notification feature does not work with a simulator. - On your phone, select allow notifications when the prompt appears. The debugger prints out “Push notifications allowed” if it is successful. - On your phone, choose the Register button. The debugger prints out the device token. - You have set up an iOS application that can receive push notifications and prints the device token. We can now use this app to test sending push notifications with SNS configured for token-based authentication. Retrieving your Apple resources After setting up your application, you retrieve your Apple resources from your Apple developer account. There are four pieces of information you need from your Apple Developer Account: Bundle ID, Team ID, Signing Key, and Signing Key ID. The signing key and signing key ID are credentials that you manage through your Apple Developer Account. You can register a new key by selecting the Keys tab under the Certificates, Identifiers & Profiles menu. Your Apple developer account provides the signing key in the form of a text file with a .p8 extension. Find the team ID under Membership Details. The bundle ID is the unique identifier that you set up when creating your application. Find this value in the Identifiers section under the Certificates, Identifiers & Profiles menu. Amazon SNS uses a token constructed from the team ID, signing key, and signing key ID to authenticate with APNs for every push notification that you send. Amazon SNS manages tokens on your behalf and renews them when necessary (within an hour). The request header includes the bundle ID and helps identify where the notification goes. Creating a new platform application using APNs token-based authentication Prerequisites In order to implement APNs token-based authentication, you must have: - An Apple Developer Account - A mobile application To create a new platform application: - Navigate to the Amazon SNS console and choose Push notifications. Then choose Create platform application. - Enter a name for your application. In the Push notification platform dropdown, choose Apple iOS/VoIP/Mac. - For the Push service, choose iOS, and for the Authentication method, choose Token. Select the check box labeled Used for development in sandbox. Then, input the fields from your Apple Developer Account. - You have successfully created a platform application using APNs token-based authentication. Creating a new platform endpoint using APNs token-based authentication A platform application stores credentials, sending configuration, and other settings but does not contain an exact sending destination. Create a platform endpoint resource to store the information to allow SNS to target push notifications to the proper application on the correct mobile device. Any iOS application that is capable of receiving push notifications must register with APNs. Upon successful registration, APNs returns a device token that uniquely identifies an instance of an app. SNS needs this device token in order to send to that app. Each platform endpoint belongs to a specific platform application and uses the credentials and settings set in the platform application to complete the sending. In this tutorial, you create the platform endpoint manually through the SNS console. In a real system, upon receiving the device token, you programmatically call SNS from your application server to create or update your platform endpoints. These are the steps to create a new platform endpoint: - From the details page of the platform application in the SNS console, choose Create application endpoint. - From the iOS app that you set up previously, find the device token in the application logs. Enter the device token and choose Create application endpoint. - You have successfully created a platform application endpoint. Testing a push notification from your device In this section, you test a push notification from your device. - From the details page of the application endpoint you just created, (this is the page you end up at immediately after creating the endpoint), choose Publish message. - Enter a message to send and choose Publish message. - The notification arrives on your iOS app. Conclusion Developers sending mobile push notifications can now use a .p8 key to authenticate an Apple device endpoint. Token-based authentication is more secure, and reduces operational burden of renewing the certificates every year. In this post, you learn how to set up your iOS application for mobile push using token-based authentication, by creating and configuring a new platform endpoint in the Amazon SNS console. To learn more about APNs token-based authentication with Amazon SNS, visit the Amazon SNS Developer Guide. For more serverless content, visit Serverless Land.
https://aws.amazon.com/blogs/compute/token-based-authentication-for-ios-applications-with-amazon-sns/
CC-MAIN-2022-27
en
refinedweb
I met this code today class someClass { // ... private volatile int a; // ... } Answer 1, authority 100% The volatile modifier imposes some additional conditions on the read / write of a variable. It is important to understand two things about volatile variables: - Read / write operations on a volatile variable are atomic. - The result of an operation of writing a value to a volatile variable by one thread becomes visible to all other threads that use this variable to read a value from it. Answer 2, authority 61% This means that the value of the variable will be “always read”. For example, in multithreaded applications, one thread read the value a = 1, passed control to another thread, which changed the value to a = 2, then control returned. So, without volatile the value of a for the first thread will be 1, since the first thread “remembers” that a = 1, with volatile – 2, because the first thread will read the value again and get the already changed one. Answer 3, authority 39% the variable has a master copy plus a copy for each thread, who is using it. The master copy is syncronized with the local a copy of the thread when entering / leaving the synchronized block. Sometimes, for example, an empty synchronized (lock) {} block makes sense. variables with the volatile modifier have no local copies. All threads work with master copy. Answer 4, authority 23% This is the definition given in the article “Java Multithreading” on the site . Defining a variable with the volatilekeyword means that the value of this variable can be changed by other threads. To to understand what volatiledoes, it’s helpful to understand how threads process regular variables. For performance reasons, the Java language specification allows storing a local copy of the variable in the JRE for each thread, which links to it. Such “local” copies of variables resemble a cache and help a thread avoid accessing main memory every time you want to get the value of a variable. At startup two threads, one of them reads the variable A as 5, and the second as 10. If the value of variable A has changed from 5 to 10, then the first thread will not know about the change and will store the wrong value A. But if variable A is marked as volatile, then whenever the stream reads value of A, it will access the master copy of A and read it present value. Stream local cache makes sense if variables in your applications will not be changed externally. If a variable is declared volatile, it means that it can be changed by different threads. It is natural to expect the JRE to provide some form of synchronization for such volatile variables. The JRE does implicitly provide synchronization when accessing volatile variables, but with one very big caveat: reading a volatile variable and writing to a volatile variable are synchronized, but non-atomic operations are not. Answer 5, authority 6% is optional for volatile object references. am i right? For example, when we use the Singleton pattern in a multithreaded application in which we use synchronization and want synchronization to occur only once when initializing an object, and not every time we call getInstance (), then we use the volatile modifier for the object reference: public class Singleton { private static volatile Singleton instance; private Singleton () { } public static Singleton getInstance () { if (instance == null) { synchronized (Singleton.class) { if (instance == null) instance = new Singleton (); } } return instance; } } Answer 6, authority 6% volatile – literally means volatile , volatile , volatile in the context of programming, this means that the value of a variable may change unexpectedly, so you should not rely on the values of this variable, for example, if the code says: private volatile int i; // over time i = 0; while (i & lt; 10) { // blah-blah i ++; } this does not mean that the cycle will definitely end in 10 steps … It may well happen that during the loop, the volatile value of a variable will (or may not) change unexpectedly … Answer 7, authority 5% volatile – tells the thread that the variable can change, and informs the thread to refer to the latest version, not a hashed copy, and propagate changes in a timely manner. A “volatile” data member informs a thread, both to get the latest value for the variable (instead of using a cached copy) and to write all updates to the variable as they occur.
https://computicket.co.za/java-java-volatile-keyword/
CC-MAIN-2022-27
en
refinedweb
@batterii/fake-query@batterii/fake-query This module exposes a fake Objection query builder for unit tests. It is built using Sinon and is intended to be used in conjunction with it. The FakeQuery ClassThe FakeQuery Class A named export of this module, the FakeQuery exposes a fake QueryBuilder instance on its builder property. You can inject the fake into your code under test by stubbing the static ::query method on the desired model. The fake builder automatically creates sinon stubs for any property accessed on the builder, except for the #then and #catch methods used to execute the query and obtain its result, as well as the #inspect method which prints out a string representation of the builder. Created stubs always return this, as all QueryBuilder methods are chainable. Test code can examine the stubs property to write assertions about the query. Typically, you will want to do a keys assertion on the stubs object, followed by sinon assertions on the stubs themselves. By default, the fake builder will neither resolve or reject when executed, as is normal for sinon stubs. If you want it to resolve or reject, simply involve the #resolves or #rejects methods with the desired result value. Once the fake builder has been executed, it can no longer be changed. If any of its instance methods are invoked, or if you attempt to change its result with #resolves or #rejects, the invoked method will throw. This ensures that your assertions are always referring to the state of the builder when it was executed, and not after. Example (Using TypeScript, Mocha, and Chai)Example (Using TypeScript, Mocha, and Chai) import {FakeQuery} from "@batterii/fake-query"; import {MyModel} from "../path/to/my-model"; import chai from "chai"; import sinon from "sinon"; import sinonChai from "sinon-chai"; chai.use(sinonChai); const {expect} = chai; describe('functionUnderTest', function() { let qry: FakeQuery; beforeEach(function() { qry = new FakeQuery(); // Make sure this stub is cleaned up! See the `afterEach` below. sinon.stub(MyModel, "query").returns(qry.builder); }); afterEach(function() { sinon.restore(); }); it("deletes the things", async function() { const deletedThings = []; qry.resolves(deletedThings); const result = await functionUnderTest(); expect(MyModel.query).to.be.calledOnce; expect(MyModel.query).to.be.calledOn(MyModel); expect(MyModel.query).to.be.calledWithExactly(); expect(qry.stubs).to.have.keys(["delete", "where", "returning"]); expect(qry.stubs.delete).to.be.calledOnce; expect(qry.stubs.delete).to.be.calledWithExactly(); expect(qry.stubs.where).to.be.calledOnce; expect(qry.stubs.where).to.be.calledWith("id", ">", 42); expect(qry.stubs.returning).to.be.calledOnce; expect(qry.stubs.returning).to.be.calledWith("*"); expect(result).to.equal(deletedThings); }); }); // Any non-cosmetic changes to this function will cause the above test to fail. async function functionUnderTest(): Promise<MyModel[]> { return MyModel.query() .delete() .where('id', '>', 42) .returning('*'); }
https://libraries.io/npm/@batterii%2Ffake-query
CC-MAIN-2022-27
en
refinedweb
For a deeper look into our Eikon Data API, look into: Overview | Quickstart | Documentation | Downloads | Tutorials | Articles I want to use python to automate an excel+eikon workflow. I tried to rebuild all of my excel API calls with eikon's python package, but not all required data was supported. My excel file is .xlsm, and it needs to connect to eikon then update before I manipulate data in python. I'm using win32com as follows: import win32com.client as win32 excel = win32.gencache.EnsureDispatch('Excel.Application') wb = excel.Workbooks.Open(os.getcwd() + '\\MyFile.xlsm') ws = wb.Worksheets('Sheet_With_Eikon_API_Calls') This code opens excel, then opens my worksheet with excel eikon api calls. However, this doesn't launch the Thomson Reuters Eikon COM Add-in, so my data doesn't get updated. For my process to work, I need to launch the Eikon Add-in then update the data. Is this possible with win32com? The steps I want are: python code: 1. launch excel 2. go to excel worksheet that has eikon api calls 3. launch excel thomson reuters eikon comm add-in (or verify that it's on) 4. refresh excel eikon api calls 5. save fresh data as .csv, then manipulate data with pandas and auto generate reports Try adding excel.COMAddIns("PowerlinkCOMAddIn.COMAddIn").Connect = True I have the same problem, i tried with xl = win32com.client.gencache.EnsureDispatch('Excel.Application') xl.COMAddIns("PowerlinkCOMAddIn.COMAddIn").Connect = True but it doesnt work When posting a new question on these forums always start a new thread. Old threads with accepted answers are not monitored by moderators. If you need to reference an old thread in your question, include a link to the old thread. To determine what's happening make the Excel instance you're creating visible (add xl.Visible = True right after creating an instance of Excel application). In the instance of Excel created do you see Refinitiv or Thomson Reuters tab added to Excel ribbon? Issue while requesting Stock Returns (TR.TotalReturn1D) How can I import a shareholder history report through the eikon python API? Refinitiv add-in excel (Mac) creates formula but does not give any result For 2 stocks listed on same exchange I get different number of rows of historical data and missing minutes Downloading excel file from eikon link with python
https://community.developers.refinitiv.com/questions/29464/launch-excel-com-add-in-with-python-win32com.html?sort=oldest
CC-MAIN-2022-27
en
refinedweb
Tree hash-storage files Project description Overview Tree the library for saving many files by they hash. For preservation it is enough to you to transfer a binary code of the file and the Tree will keep him. Example Superficial uses in the tree hash storage from tree_storage import TreeStorage tree = TreeStorage(path="/path/to/storage") # If you want add file to the Tree Storage with open("/path/to/file", "rb") as file: tree.breed(file_byte=file.read(), mode='wb') # after add file, method return status of writing. # If add file status is success, tree save last # hash of the file in the attribute file_hash_name # For remove file from the Tree Storage # you can call cut method and past # to him hash name of file which you have delete tree.cut(file_hash_name=tree.file_hash_name, greedy=True) Installing Download and install the latest released version from PyPI: pip install tree-storage Download and install the development version from GitHub: pip install git+ Installing from source (installs the version in the current working directory): python setup.py install (In all cases, add –user to the install command to install in the current user’s home directory.) Install and Update Tree library using pip: Documentation Read full documentation on. License This repository is distributed under The MIT license Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/tree-storage/
CC-MAIN-2022-27
en
refinedweb
zbar package Project description Introduction Author: Zachary Pincus [email protected] Contributions: Rounak Singh [email protected] (example code and zbar.misc). zbar-py is a module (compatible with both Python 2.7 and 3+) that provides an interface to the zbar bar-code reading library, which can read most barcode formats as well as QR codes. Input images must be 2D numpy arrays of type uint8 (i.e. 2D greyscale images). The zbar library itself packaged along with zbar-py (it’s built as a python extension), so no external dependencies are required. Building zbar requires the iconv library to be present, which you almost certainly have, except if you’re on windows. Then you probably will need to download or build the iconv DLL. Here are pre-built 32- and 64-bit binaries for same. The python code is under the MIT license, and zbar itself is licensed under the GNU LGPL version 2.1. Prerequisites: - iconv – c library required for building zbar-py; see above - numpy – for running zbar-py - pygame – for examples using a webcam Simple examples: More sophisticated examples can be found in ‘examples’ directory. - Scan for barcodes in a 2D numpy array: import zbar image = read_image_into_numpy_array(...) # whatever function you use to read an image file into a numpy array scanner = zbar.Scanner() results = scanner.scan(image) for result in results: print(result.type, result.data, result.quality, result.position) - Scan for UPC-A barcodes and perform checksum validity test: import zbar import zbar.misc image = read_image_into_numpy_array(...) # get an image into a numpy array scanner = zbar.Scanner() results = scanner.scan(image) for result in results: if result.type == 'UPC-A': print(result.data, zbar.misc.upca_is_valid(result.data.decode('ascii'))) Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/zbar-py/
CC-MAIN-2019-26
en
refinedweb
1. Setup a dom0 on a distro of your choice with volgroups (rPath makes this really easy here) 2. Download the xen img 3. Create a volume group for the asterisknow image - Code: Select all lvcreate -L2G -n asterisknow VolGroup00 4. Gunzip the image and write to image to a Volgroup - Code: Select all # gunzip <path_to_filename>/asterisk-1.4.0-...img.gz # dd if=<path_to_image>/asterisk-1.4.0-beta2-x86.img of=/dev/VolGroup00/asterisknow bs=1M 5. Create an asteriskNOW domU config file - Code: Select all # cat /etc/xen/asterisknow import os host_uname = os.uname()[2] name = 'asterisknow' kernel = "/boot/vmlinuz-%s" % host_uname ramdisk = "/boot/initrd-%s.img" % host_uname memory = '64' vif = ['mac=00:13:D3:02:54:45,bridge=xenbr0'] disk = [ "phy:VolGroup00/asterisknow,xvda1,w" ] root = "/dev/xvda1 ro" NOTE!!! As of beta5, you can use pygrub and zaptel/mISDN should be built against the right kernels. Whether or not they function properly is another story. Good luck!! That config would look somethin like this then: - Code: Select all # cat /etc/xen/asterisknow name = 'asterisknow' memory = '64' vif = ['mac=00:13:D3:02:54:45,bridge=xenbr0'] disk = [ "phy:VolGroup00/asterisknow,xvda1,w" ] bootloader = 'pygrub' 6. Start it up - Code: Select all # xm create asterisknow Note: This is a little different for systems not using Volgroups
http://forums.asterisk.org/viewtopic.php?p=40332&amp
CC-MAIN-2019-26
en
refinedweb
Here is my latest project:. It is a port of the Perl script volgenmodel to Python, using the functionality of Nipype. A lot of scientific workflow code has a common pattern, something like this: collect some input files, run something to produce intermediate results, and then combine the results into a final result. One way to implement the workflow is to glob the files and set up arrays or dictionaries to keep track of the outputs. files = glob.glob('/tmp/blah*.dat') intermediate_result = [None] * len(files) for (i, f) in enumerate(files): intermediate_result[i] = fn1(f, param=0.3) final_result = fn2(intermediate_result) The problem with this approach is that it doesn’t scale well nor is it easy to reason about. The equivalent in Nipype is: import nipype.pipeline.engine as pe import nipype.interfaces.io as nio datasource = pe.Node(interface=nio.DataGrabber(sort_filelist=True), name='datasource_dat') datasource.inputs.base_directory = '/scratch/data' datasource.inputs.template = 'blah*.dat' datasink = pe.Node(interface=nio.DataSink(), name="datasink") datasink.inputs.base_directory = '/scratch/output' intermediate = pe.MapNode( interface=fn1_interface(param=0.3) name='intermediate_mapnode', iterfield=['input_file']) final = pe.Node( interface=fn2, name='final_node') workflow = pe.Workflow(name="workflow") # Apply the fn1 interface to each file in the datasource: workflow.connect(datasource, 'outfiles', intermediate, 'input_file') # Apply the fn2 interface to the list of outputs from the intermediate map node: workflow.connect(intermediate, 'output_file', final, 'input_file') # Save the final output: workflow.connect(final, 'output_file', datasink, 'final') This code is much closer to the actual problem that we are trying to solve, and as a bonus we don’t have to take care of arrays of input and output files, which is pure agony and prone to errors. Nipype lets us run the workflow using a single core like this: workflow.run() or we can fire it up using 4 cores using: workflow.run(plugin='MultiProc', plugin_args={'n_procs' : 4}) Nipype also has plugins for SGE, PBS, HTCondor, LSF, SLURM, and others. Here is volgenmodel-nipype’s workflow graph (generating this graph is a one-liner with the workflow object). Click the image for the full size version.
https://carlo-hamalainen.net/2014/02/26/volgenmodel-nipype-v1-0/
CC-MAIN-2019-26
en
refinedweb
5 Annotations Every Java Developer Should Know 5 Annotations Every Java Developer Should Know In this article, we will take a look at 5 of the annotations supported by all Java compilers and take a look at their intended uses. Join the DZone community and get the full member experience.Join For Free Since their inception in Java Development Kit (JDK) 5, annotations have become an indispensable part of the Java ecosystem. While there are countless custom annotations developed for use by Java frameworks (such as @Autowired for Spring), there are a few annotations recognized by the compiler that are of supreme importance. In this article, we will take a look at 5 of the annotations supported by all Java compilers and take a look at their intended uses. Along the way, we will explore the rationale behind their inception, some idiosyncrasies that surround their use, and some examples of their proper application. Although some of these annotations are more common than others, each should be internalized by non-beginner Java developers. To start off, we will delve into one of the most commonly used annotations in Java: @Override. @Override The ability to override the implementation of a method, or provide an implementation for an abstract method, is at the core of any Object-Oriented (OO) language. Being that Java is an OO language and features many of the common OO abstraction mechanisms, a non-final method defined in a non-final superclass or any method in an interface (interface methods cannot be final) can be overridden by a subclass. Although overriding a method appears to be straightforward at first, there are many subtle bugs that can be introduced when overriding is performed incorrectly. For example, it is a common mistake to override the Object#equals method with a single parameter of the type of the overriding class: public class Foo { public boolean equals(Foo foo) { // Check if the supplied object is equal to this object } } Being that all classes implicitly inherit from the Object class, the intent of our Foo class is to the override the Object#equals method so that Foo can be tested for equality against any other object in Java. While our intent is correct, our implementation is not. In fact, our implementation does not override the Object#equals method at all. Instead, we provide an overload of the method: rather than substituting the implementation of the equals method provided by the Object class, we instead provide a second method that accepts Foo object specifically, rather than an Object object. Our mistake can be illustrated using the trivial implementation that returns true for all equality checks but is never called when the supplied object is treated as an Object (which Java will do, such as in the Java Collections Framework, JCF): public class Foo { public boolean equals(Foo foo) { return true; } } Object foo = new Foo(); Object identicalFoo = new Foo(); System.out.println(foo.equals(identicalFoo)); // false This is a very subtle, but common error that could be caught by the compiler. It was our intent to override the Object#equals method, but because we specified a parameter of type Foo, rather than type Object, we in fact provided overloaded the Object#equals method, rather than overriding it. In order to catch mistakes of this kind, the @Override annotation was introduced, which instructs the compiler to check if an override was actually performed. If a valid override was not performed, an error is thrown. Thus, we can update our Foo class to resemble the following: public class Foo { @Override public boolean equals(Foo foo) { return true; } } If we try to compile this class, we now receive the following error: $ javac Foo.java Foo.java:3: error: method does not override or implement a method from a supertype @Override ^ 1 error In essence, we have transformed our implicit assumption that we have overridden a method into an explicit verification by the compiler. In the event that our intent was incorrectly implemented, the Java compiler will emit an error, not allowing our code with our incorrect implementation to successfully compile. In general, the Java compiler will emit an error for a method annotated with @Override if either of the following conditions is not satisfied (quoted from the Override annotation documentation): - The method does override or implement a method declared in a supertype. - The method has a signature that is override-equivalent to that of any public method declared in Object(i.e. equalsor hashCodemethods). Therefore, we can also use this annotation to ensure that a subclass method actually overrides a non-final concrete method or abstract method in a superclass as well: public abstract class Foo { public int doSomething() { return 1; } public abstract int doSomethingElse(); } public class Bar extends Foo { @Override public int doSomething() { return 10; } @Override public int doSomethingElse() { return 20; } } Foo bar = new Bar(); System.out.println(bar.doSomething()); // 10 System.out.println(bar.doSomethingElse()); // 20 The @Override annotation is not relegated to just concrete or abstract methods in a superclass, but can also be used to ensure that methods of an interface are overridden as well (since JDK 6): public interface Foo { public int doSomething(); } public class Bar implements Foo { @Override public int doSomething() { return 10; } } Foo bar = new Bar(); System.out.println(bar.doSomething()); // 10 In general, any method that overrides a non-final concrete class method, an abstract superclass method, or an interface method can be annotated with @Override. For more information on valid overrides, see the Overriding and Hiding documentation and section 9.6.4.4. of the Java Language Specification (JLS). @FunctionalInterface With the introduction of lambda expressions in JDK 8, functional interfaces have become much more prevalent in Java. These special types of interfaces can be substituted with lambda expressions, method references, or constructor references. According to the @FunctionalInterface documentation, a functional interface is defined as follows: A functional interface has exactly one abstract method. Since default methods have an implementation, they are not abstract. For example, the following interfaces are considered functional interfaces: public interface Foo { public int doSomething(); } public interface Bar { public int doSomething(); public default int doSomethingElse() { return 1; } } Thus, each of the following can be substituted with a lambda expression as follows: public class FunctionalConsumer { public void consumeFoo(Foo foo) { System.out.println(foo.doSomething()); } public void consumeBar(Bar bar) { System.out.println(bar.doSomething()); } } FunctionalConsumer consumer = new FunctionalConsumer(); consumer.consumeFoo(() -> 10); // 10 consumer.consumeBar(() -> 20); // 20 It is important to note that abstract classes, even if they contain only one abstract method, are not functional interfaces. For more information on this decision, see Allow lambdas to implement abstract classes written by Brian Goetz, chief Java Language Architect. Similar to the @Override annotation, the Java compiler provides the @FunctionalInterface annotation to ensure that an interface is indeed a functional interface. For example, we could add this annotation to the interfaces we created above: @FunctionalInterface public interface Foo { public int doSomething(); } @FunctionalInterface public interface Bar { public int doSomething(); public default int doSomethingElse() { return 1; } } If we were to mistakenly define our interfaces as non-functional interfaces and annotated the mistaken interface with the @FunctionalInterface, the Java compiler would emit an error. For example, we could define the following annotated, non-functional interface: @FunctionalInterface public interface Foo { public int doSomething(); public int doSomethingElse(); } If we tried to compile this interface, we would receive the following error: $ javac Foo.java Foo.java:1: error: Unexpected @FunctionalInterface annotation @FunctionalInterface ^ Foo is not a functional interface multiple non-overriding abstract methods found in interface Foo 1 error Using this annotation, we can ensure that we do not mistakenly create a non-functional interface that we intended to be used as a functional interface. It is important to note that interfaces can be used as the functional interfaces (can be substituted as lambdas, method references, and constructor references) even when the @FunctionalInterface annotation is not present, as we saw in our previous example. This is analogous to the @Override annotation, where a method can be overridden, even if it does not include the @Override annotation. In both cases, the annotation is an optional technique for allowing the compiler to enforce our intent. For more information on the @FunctionalInterface annotation, see the @FunctionalInterface documentation and section 4.6.4.9 of the JLS. @SuppressWarnings Warnings are an important part of any compiler, providing a developer with feedback about possibly dangerous behavior or possible errors that may arise in future versions of the compiler. For example, using generic types in Java without their associated formal generic parameter (called raw types) causes a warning, as does the use of deprecated code (see the @Deprecated section below). While these warnings are important, they may not always be applicable or even correct. For example, there may be instances where a warning is emitted for an unsafe type conversion, but based on the context in which it is used, it can be guaranteed to be safe. In order to ignore specific warnings in certain contexts, the @SuppressWarnings annotation was introduced in JDK 5. This annotation accepts 1 or more string arguments that represent the name of the warnings to ignore. Although the names of these warnings generally vary between compiler implementation, there are 3 warnings that are standard in the Java language (and hence are common among all Java compiler implementations): unchecked: A warning denoting an unchecked type cast (a typecast that the compiler cannot guarantee is safe), which may occur as a result of access to members of raw types (see JLS section 4.8), narrow reference conversion or unsafe downcast (see JLS section 5.1.6), unchecked type conversions (see JLS section 5.1.9), the use of generic parameters with variable arguments (varargs) (see JLS section 8.4.1 and the @SafeVarargssection below), the use of invalid covariant return types (see JLS section 8.4.8.3), indeterminate argument evaluation (see JLS section 15.12.4.2), unchecked conversion of a method reference type (see JLS section 15.13.2), or unchecked conversation of lambda types (see JLS section 15.27.3). deprecation: A warning denoting the use of a deprecated method, class, type, etc. (see JLS section 9.6.4.6 and the @Deprecatedsection below). removal: A warning denoting the use of a terminally deprecated method, class, type, etc. (see JLS section 9.6.4.6 and the @Deprecatedsection below). In order to ignore a specific warning, the @SuppressedWarning annotation, along with 1 or more names of suppressed warnings (supplied in the form of a string array), can be added to the context in which the warning would occur: public class Foo { public void doSomething(@SuppressWarnings("rawtypes") List myList) { // Do something with myList } } The @SuppressWarnings annotation can be used on any of the following: - type - field - method - parameter - constructor - local variable - module In general, the @SuppressWarnings annotation should be applied to the most immediate scope of the warning. For example, if a warning should be ignored for a local variable within a method, the @SuppressWarnings annotation should be applied to the local variable, rather than the method or the class that contains the local variable: public class Foo { public void doSomething() { @SuppressWarnings("rawtypes") List myList = new ArrayList(); // Do something with myList } } @SafeVarargs Varargs can be a useful technique in Java, but they can also cause some serious issues when paired with generic arguments. Since generics are non-reified in Java, the actual (implementation) type of a variable with a generic type cannot be determined at runtime. Since this determination cannot be made, it is possible for a variable to store a reference to a type that is not its actual type, as seen in the following snippet (derived from Java Generics FAQs): List ln = new ArrayList<Number>(); ln.add(1); List<String> ls = ln; // unchecked warning String s = ls.get(0); // ClassCastException After the assignment of ln to ls, there exists a variable ls in the heap that has a type of List<String> but stores a reference to a value that is actually of type List<Number>. This invalid reference is known as heap pollution. Since this error cannot be determined until runtime, it manifests itself as a warning at compile time and a ClassCastException at runtime. This issue can be exacerbated when generic arguments are combined with varargs: public class Foo { public <T> void doSomething(T... args) { // ... } } In this case, the Java compiler internally creates an array at the call site to store the variable number of arguments, but the type of T is not reified and is therefore lost at runtime. In essence, the parameter to doSomething is actually of type Object[]. This can cause serious issues if the runtime type of T is relied upon, as in the following snippet: public class Foo { public <T> void doSomething(T... args) { Object[] objects = args; String string = (String) objects[0]; } } Foo foo = new Foo(); foo.<Number>doSomething(1, 2); If executed, this snippet will result in a ClassCastException, because the first Number argument passed at the call site cannot be converted to a String (similar to the ClassCastException thrown in the standalone heap pollution example. In general, there may be cases where the compiler does not have enough information to properly determine the exact type of a generic vararg parameter, which can result in heap pollution. This pollution can be propagated by allowing the internal varargs array to escape from a method, as in the following example from pp. 147 of Effective Java, 3rd Edition: public static <T> T[] toArray(T... args) { return args; } In some cases, we know that a method is actually type safe and will not cause heap pollution. If this determination can be made with assurance, we can annotate the method with the @SafeVarargs annotation, which suppresses warnings related to possible heap pollution. This begs the question, though: When is a generic vararg method considered type safe? Josh Bloch provides a sound answer on pp. 147 of Effective Java, 3rd Edition, based on the interaction of a method with the internally created array used to store its varargs: If the method doesn’t store anything into the array (which would overwrite the parameters) and doesn’t allow a reference to the array to escape (which would enable untrusted code to access the array), then it’s safe. In other words, if the varargs parameter array is used only to transmit a variable number of arguments from the caller to the method—which is, after all, the purpose of varargs—then the method is safe. Thus, if we created the following method (from pp. 149 Ibid.), we can soundly annotate our method with the @SafeVarags annotation: @SafeVarargs static <T> List<T> flatten(List<? extends T>... lists) { List<T> result = new ArrayList<>(); for (List<? extends T> list : lists) { result.addAll(list); } return result; } For more information on the @SafeVarargs annotation, see the @SafeVarargs documentation, JLS section 9.6.4.7, and Item 32 from Effective Java, 3rd Edition. @Deprecated When developing code, there may be times when code becomes out-of-date and should no longer be used. In these cases, there is usually a replacement that is better suited for the task at hand and while existing calls to the out-dated code may remain, all new calls should use the replacement method. This out-of-date code is called deprecated code. In some pressing cases, deprecated code may be slated for removal and should be immediately converted to the replacement code before a future version of a framework or library removes the deprecated code from its code base. In order to support the documentation of deprecated code, Java includes the @Deprecated annotation, which marks some constructor, field, local variable, method, package, module, parameter, or type as being deprecated. If this deprecated element (constructor, field, local variable, etc.) is used, the compiler will emit a warning. For example, we can create a deprecated class and use it as follows: @Deprecated public class Foo {} Foo foo = new Foo(); If we compile this code (in a file called Main.java), we receive the following warning: $ javac Main.java Note: Main.java uses or overrides a deprecated API. Note: Recompile with -Xlint:deprecation for details. In general, a warning will be thrown whenever an element annotated with @Deprecated is used, except in the following five circumstances: - The use is within a declaration that is itself deprecated (i.e. a recursive call). - The use is within a declaration that is annotated to suppress deprecation warnings (i.e. the @SuppressWarnings("deprecation")annotation, described above, is applied to the context in which the deprecated element is used). - The use and declaration are both within the same outermost class (i.e. if a class calls its own deprecated method). - The use is within an importdeclaration that imports the ordinarily deprecated type or member (i.e. when importing a deprecated class into another class). - The use is within an exportsor opensdirective. As previously mentioned, there are some cases when a deprecated element is slated for removal and calling code should immediately remove the deprecated element (called terminally deprecated code). In this case, the @Deprecated annotation can be supplied with a forRemoval argument as follows: @Deprecated(forRemoval = true) public class Foo {} Using this terminally deprecated code now results in a more imposing set of warnings: $ javac Main.java Main.java:7: warning: [removal] Foo in com.foo has been deprecated and marked for removal Foo foo = new Foo(); ^ Main.java:7: warning: [removal] Foo in com.foo has been deprecated and marked for removal Foo foo = new Foo(); ^ 2 warnings Terminally deprecated warnings are always emitted, save for the same exceptions described for the standard @Deprcated annotation. We can also add documentation to the @Deprecated annotation by supplying a since argument to the annotation: @Deprecated(since = "1.0.5", forRemoval = true) public class Foo {} Deprecated elements can be further documented using the @deprecated JavaDoc element (note the lowercase d), as seen in the following snippet: /** * Some test class. * * @deprecated Replaced by {@link com.foo.NewerFoo}. * * @author Justin Albano */ @Deprecated(since = "1.0.5", forRemoval = true) public class Foo {} The JavaDoc tool will then produce the following documentation: For more information on the @Deprecated annotation, see the @Deprecated documentation and JLS section 9.6.4.6. Coda Annotations have been an indispensable part of Java since their introduction in JDK 5. While some are more popular than others, there are 5 annotations that any developer above the novice level should understand: @Override, @FunctionalInterface, @SuppressWarnings, @SafeVarargs, and @Deprecated. While each has its own unique purpose, the aggregation of these annotations make a Java application much more readable and allow the compiler to enforce some otherwise implicit assumptions about our code. As the Java language continues to grow, these tried-and-true annotations will likely see many more years of service and help to ensure that many more applications behave as their developers intended. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/5-annotations-every-java-developer-should-know
CC-MAIN-2019-26
en
refinedweb
I am working on a Sitecore/MVC application, my first MVC application so I am learning as I go. No doubt I am going wrong somewhere along the line. I have a Basket that has 2 address views on it, one for billing and another for delivery. There is also a checkbox for "Delivery the same as billing" allowing the user to complete just one address. When you user checks this checkbox the delivery address div collapses. Main view: <div class="pure-control-group"> <h2>Billing Address</h2> @Html.Action("Init", "Address", new { <h2>Delivery Address</h2> <label for="UseBillingForShipping" class="pure-checkbox"> @Html.CheckBoxFor(x => x.UseBillingForShipping) Same as Billing Address above </label> </div> <div class="manual-address-entry focus-pane"> @Html.Action("Init", "Address", new {AddressType = "Delivery", @Address = Model.Delivery}) </div> An example of the Address view: <div class="pure-u-1 pure-u-sm-1-2 pure-u-lg-2-5"> <label for="@(Model.AddressType).FirstName">First Name<span class="required">*</span></label> <input type="text" id="@(Model.AddressType).FirstName" name="@(Model.AddressType).FirstName"> @Html.ValidationMessageFor(x=>x.FirstName) //<= How to handle this? </div> <div class="pure-u-1 pure-u-sm-1-2 pure-u-lg-2-5"> <label for="@(Model.AddressType).LastName">Last Name<span class="required">*</span></label> <input type="text" id="@(Model.AddressType).LastName" name="@(Model.AddressType).LastName"> @Html.ValidationMessageFor(x=>x.LastName) //<= How to handle this? </div> My problem occurs when I am trying to validate. The id of the controls on the address view are named id="@(Model.AddressType).LastName" so in the case of the Billing address they render like id="Billing.LastName" On the Address model the fields are annotated, e.g: [Required(ErrorMessage = "First Name is required")] public string FirstName { get; set; } [Required(ErrorMessage = "Last Name is required")] public string LastName { get; set; } So I have 2 problems: @Html.ValidationMessageFormarkup. I have tried @Html.ValidationMessageFor(x=>x.FirstName)and something similar to the labelfor( <label for="@(Model.AddressType).LastName">), @Html.ValidationMessageFor(@(Model.AddressType).LastName)and neither work. I am starting to think I have approached this totally the wrong way. The easiest way to handle this is to use a custom EditorTemplate for your address model. Assuming its public class Address, then create a view in /Views/Shared/EditorTemplates named Address.cshtml (i.e. named to match the name of your type) @model yourAssembly.Address @Html.LabelFor(m => m.FirstName) @Html.TextBoxFor(m => m.FirstName) @Html.ValidationMessageFor(m => m.FirstName) ... // ditto for other properties of Address Then in the main view @Html.EditorFor(m => m.Billing) @Html.CheckBoxFor(x => x.UseBillingForShipping) @Html.EditorFor(m => m.Delivery) The EditorFor() method will use your template and correctly name all elements for binding (including the validation message) Note that because you have a [Required] attribute, the script that hides the 'Delivery' address, should also ensure that it copies the contents of the 'Billing' to the 'Delivery' address controls otherwise validation will fail (alternatively you could use a [RequiredIf] validation attribute)
http://databasefaq.com/index.php/answer/33142/aspnet-mvc-5-sitecore8-mvc-validation-on-subviews
CC-MAIN-2019-26
en
refinedweb
While writing and running the program there might be three types of error.SYNTAX:- - Compile Time Error:-This error generally occurs because of syntax error. - Run Time Error(Exception):- This error occurs when application is running but some logic is failed according to CLR. - Logical Error:-Program is compile and running successfully but we are not getting the expected output due to not written correct logic. Exception Handling:-In Exception handling we provide an alternate path that if some error occurs at run time instead of halting the program.some alternate path of code executed.There are some keywords which is used in exception handling. - Try:- We put those codes to try block which may give some error at the run time. - Catch:-A catch block is used for catch the Exception of try block codes if any.We can use multiple catch block for handling the Exception of one try block. - Throw:-When any error is found in program then this keyword is used to throw the Exception to other block(method). - Finally:-Finally block codes are always executed if any Exception is throw or not throw.This set of statement is always executed. try { ....statement.... } catch(Exception ex) { ....statement.... } catch(Exception e) { ....statement.... } finally { ....statement... } Exception classes in C# There are some exception classes which is used for handling the Exception.All the Exception classes is derived from the two main classes. - System.SystemException --->Predefined System Exception. - System.ApplicationException -->Application program Exception. These two classes are used for Exception handling in C#.some derived Exception classes are:- - FormatException - DivideByZeroException - NullReferenceException - ArgumentNullException - NotFiniteNumberException - ArrayTypeMismatchException - ArgumentOutOfRangeException - ContextMarshalException - StackOverflowException etc. Step :1 First open your visual studio --.File --> New--> Project-->Console Application-->click OK --> and write the following codes which is given below: see it:- using System; namespace Exception_handling { class Program { class student { int res; public void display(int a, int b) { try { res = a / b; Console.WriteLine(res); } catch (DivideByZeroException e) { Console.WriteLine(e.Message); } catch (FormatException ex) { Console.WriteLine(ex.Message); } catch { Console.WriteLine("any other exception"); } finally { Console.WriteLine("Press enter for Exit"); } } } static void Main(string[] args) { student st = new student(); Console.WriteLine("Enter the two value"); int m = int.Parse(Console.ReadLine()); int n = int.Parse(Console.ReadLine()); st.display(m,n); Console.ReadLine(); } } }Step :2 Now Run the Program(press F5). see output: Note:-This is example of predefined Exception class. Create User defined Exception (Use of throw keyword):- User defined Exception classes are derived from the Application Exception class.Which is shown in example which is given below:- follow same steps as above. using System; namespace userdefined_exception { class Program { class billingException:ApplicationException { public billingException():base("value can not be less than 1000") { } public int calculate(int amount) { int total; if (amount < 1000) { throw new billingException(); } else { total = amount - 1000; Console.WriteLine("amount greater than 1000 is: "+total ); } return total; } } static void Main(string[] args) { try { billingException obj = new billingException(); Console.WriteLine("Enter one amount value"); int m = int.Parse(Console.ReadLine()); obj.calculate(m); } catch (billingException ex) { Console.WriteLine(ex.Message); Console.ReadLine(); } finally { Console.WriteLine("press enter for exit"); Console.ReadLine(); } } } } Output:- - If you enter amount less than 1000 then you will see this output which is given below: If you enter amount less than 1000 then you will see this output which is given below: Note :-On the basis of this example you will create your own Exception which you want to show to your users. If you want to run any specific project in visual studio then ,click Solution Explorer--> Right click on particular Project-->Set as Start UP Project. See it: For more: - Partial Classes - Operators in C# - Microsoft sql server - Basic elements are used for compiling the c# code. - oops - Data Integrity I hope this is helpful for you. To Get the Latest Free Updates Subscribe Click below for download whole application.Please share these post to another friends. Download C# Exception handling uses the try, catch, and finally keywords to attempt actions that may not succeed, to handle failures, and to clean up resources afterwards. c# exception handling ling
http://www.msdotnet.co.in/2013/05/exception-handling-in-c.html
CC-MAIN-2019-26
en
refinedweb
Hey guys! I have a question about the FCC problem where you use the reduce method. Link below. It has an object containing multiple movies, and the challenge is to find the average score of Christopher Nolan movies. The code I have is : var averageRating = watchList .filter(movie => {return movie.Director=="Christopher Nolan"}) .reduce((sum, nolanMovies) => { return (sum + parseFloat(nolanMovies.imdbRating)); },0) /(watchList.filter(movie => {return movie.Director=="Christopher Nolan"}).length); // <--there has to be a better //way. ; while this works, I feel it is kind of ugly. As you can see, I could filter out the movie list and get the sum of the ratings but didn’t know how to get the amount of movies there are without invoking the filter method again. I feel like there has to be a better way to do this but I just couldn’t find out. What would be the best way so I can convert the sum of scores into an average?
https://www.freecodecamp.org/forum/t/functional-programming-use-the-reduce-method-to-analyze-data/201817/3
CC-MAIN-2019-26
en
refinedweb
Provided by: alliance_5.0-20120515-6_amd64 NAME insmbkrds - converts MBK figure to RDS figure SYNOPSYS #include "rfmnnn.h" rdsins_list ∗insmbkrds( FigureRds, InstanceMbk, Mode, Lynx ) rdsfig_list ∗FigureRds; phins_list ∗InstanceMbk; char Mode; char Lynx; PARAMETER FigureRds The Rds figure which has to receive the RDS instance issue to the MBK instance conversion. InstanceMbk MBK instance which has to be converted and added to the RDS figure. Mode This field can take three values : ´A´ : All the cell is loaded in ram. ´P´ : Only information concerning the model interface is present, that means connector s and the abutment box. ´C´ : Finishes to fill an already loaded figure in ´P´ mode, in order to have it all in memory. After this, the conversion is applied. Note : The loading mode here is the MBK mode. Lynx Flag used for the segment conversion. If the parameter Lynx is set to 0 then thi s is the normal conversion mode. If the parameter Lynx is set to 1 then the rds structure generated permits to extract equipotentials rectangles. DESCRIPTION The insmbkrds function creates in the RDS figure the RDS instance issue to the convertion of the MBK instance to RDS format. If the parameter ´Mode´ is set to ´A´ then all the instance is loaded, else if parameter ´Mode´ is set to ´P´ then connectors and abutment box and through routes are loaded (for more information, see getphfig and loadphfig MBK functions). RETURN VALUE A pointer to the newly created instance is ∗STRING; void ∗USER1; } UserStruct; main() { phfig_list ∗MbkFigure; phins_list ∗MbkInstance; rdsfig_list ∗RdsFigure; rdsins_list ∗RdsInstance; mbkenv(); rdsenv(); loadrdsparam(); /∗ create MbkFigure Named "core" ∗/ MbkFigure = addphfig("core"); /∗ add Mbk instance "n1_y" to MbkFigure named "core" ∗/ MbkInstance = addphins(MbkFigure,"n1_y","inv_1",NOSYM,4,9); /∗ create RdsFigure named "core_2" ∗/ RdsFigure = addrdsfig("core_2",sizeof(UserStruct)); /∗ create RdsInstance with MbkInstance ∗/ RdsInstance = insmbkrds ( RdsFigure, MbkInstance, 'A', 0 ); viewrdsins ( RdsInstance ); . . . } SEE ALSO librfm, librds, viewrfmins, loadrdsparam
http://manpages.ubuntu.com/manpages/trusty/man3/insmbkrds.3.html
CC-MAIN-2019-26
en
refinedweb
Provided by: libgsasl7-dev_1.8.0-2ubuntu2_amd64 NAME gsasl_mechanism_name - API function SYNOPSIS #include <gsasl.h> const char * gsasl_mechanism_name(Gsasl_session * sctx); ARGUMENTS Gsasl_session * sctx libgsasl session handle. DESCRIPTION This function returns the name of the SASL mechanism used in the session. RETURN VALUE Returns a zero terminated character array with the name of the SASL mechanism, or NULL if not known. SINCE 0.2.28.
http://manpages.ubuntu.com/manpages/trusty/man3/gsasl_mechanism_name.3.html
CC-MAIN-2019-26
en
refinedweb
User Tag List Results 1 to 4 of 4 Using a different character set inside a JavaScript file Inside my .js file I have some text in Hebrew. However, whenever it is written to the document or displayed in any way (alert() for example), it is shown as gibberish. Any idea how to fix this? Thanks. - HTML Code: <script type="text/javascript" src="/script.js" charset="utf-8"></script> Hmm, thanks... that works with my test pages, but when I try to use it in my real application (which is XUL-based) nothing changes. Here is how I use the script: Code: <?xml version="1.0" encoding="Windows-1255"?> <?xml-stylesheet <script type="text/javascript" src="common.js" charset="windows-1255" /> ... </window> Code: Content-type: application/vnd.mozilla.xul+xml; charset=windows-1255 If I add the HTML namespace and use the <html:script /> tag instead, the Hebrew text is displayed correctly... the only problem is that the script seems to get included twice - the first load gives the right text but the second load gives the 'bad' text. Maybe charset is the wrong attribute for the XUL <script /> tag? Bookmarks
http://www.sitepoint.com/forums/showthread.php?167250-Using-a-different-character-set-inside-a-JavaScript-file&p=1206413
CC-MAIN-2017-13
en
refinedweb
The vast majority of text fields we create on our forms hold plain text. Downstream systems that receive data from our forms handle plain text much more easily than they deal with rich text expressed in XHTML. Obviously this is a bit of a compromise, since plain text is much less expressive than rich text. However, one area where we can express some richness in our plain text is by handling paragraph breaks — specifically by differentiating them from line breaks. This means that paragraph properties on your field such as space before/after and indents can be applied to paragraphs within your plain text. The main difficulty is how to differentiate a paragraph break from a line break in plain text and what keystrokes are used to enter the two kinds of breaks. Keystrokes Most authoring systems have keystrokes to differentiate a line break from a paragraph break. The prevalent convention is that pressing “return” adds a paragraph break, and pressing “shift return” adds a line break. However that convention seems to be enforced only when the text storage format is a rich format. E.g. it works this way in Microsoft Word, but it doesn’t work this way in notepad. Similarly in our forms. When entering boilerplate text in Designer or when entering data in a rich text field we follow this keystroke convention. Entering “return” generates a <p/> element, and entering “shift return” generates a <br/> element. However, when entering data in a plain text field there is no difference between return and shift-return. Both keystrokes generate a linefeed — which is interpreted as a line break. Characters You might assume that in plain text we could simply use the linefeed (U+000A) and the carriage return (U+000D) to differentiate between a line break and a paragraph break. However, it is not so easy. We store our data in XML, and the Unicode standard for XML does not support differentiating these characters. XML line end processing dictates that conforming parsers must convert each U+000A, U+000D sequence to U+000A, and also instances of U+000D not preceded by U+000A to U+000A. As of Reader 9, we have a solution by using Unicode characters U+2028 (line break) and U+2029 (paragraph break). When these characters are found in our data, they will correctly generate the desired line/paragraph breaking behaviours. The problem now is one of generating these characters from keystrokes. We can’t just change the default behaviour of Reader to start inserting a U+2029 character from a carriage return. Legacy systems would be surprised to find Unicode characters outside the 8-bit range in their plain text. However, the form author can explicitly add this behaviour. The trick is to add a simple change event script to your multi-line plain text field: testDataEntry.#subform[0].plainTextField[0]::change – (JavaScript) // Modify carriage returns so that they insert Unicode characters if (xfa.event.change == ‘\u000A’) { if (xfa.event.shift) xfa.event.change = ‘\u2028’; // line break else xfa.event.change = ‘\u2029’; // paragraph break } As you can see in the sample form, entering text into these fields will now generate the desired paragraph breaks in your plain text.
http://blogs.adobe.com/formfeed/2009/01/paragraph_breaks_in_plain_text.html
CC-MAIN-2017-13
en
refinedweb
Download presentation Presentation is loading. Please wait. Published byJaven Frizzle Modified about 1 year ago 1 1 DEPOSITING AND REPORTING WITHHELD TAXES CHAPTER 8 (2012) 2 2 EMPLOYER IDENTIFICATION NUMBER (EIN) 9 digit number (00-0000000) used to identify employer to the IRS and SSA and ensure correct credit to the correct employer account Apply on-line, by phone, or completing Form SS- 4 Application for Employer Identification Number and mailing or faxing it Can be applied for by a 3 rd party. Employer must sign Form SS-4 and authorization Individual applying must be an authorized designee Application must be made no later than 7 days after the first payment of wages 3 3 MAKING TAX DEPOSITS If paying taxes without EIN mark “Applied for” and date of application If no TIN by the time taxes are due send payment with explanation to local IRS office or service center where returns are filed Merger, consolidation and reincorporation – use TIN of surviving corp, if new corp apply for new ID Employment taxes handled different than other taxes Unless amounts are small, they must be deposited in a federal depository 4 4 PAYROLL TAX DEPOSIT RULES 2 depositor statuses; monthly or semiweekly (exception if over 100k, <1k annually or <2,500) Determined by liabilities during a “look back period” – 12 month period running from July 1 of the second previous year through June 30 of previous year 2012 look back period is July 1, 2010 through June 30, 2011 $50,000 or less in tax liability – monthly Exceeds $50,000 – semi-weekly Effective 2009 941X amounts are no longer counted Small employers with liabilities =<$1,000 file form 944 Employer’s Annual Federal Tax Return; taxes are paid with return Look back period the second year preceding the current calendar year 5 5 PAYROLL TAX DEPOSIT RULES (cont.) Non-payroll tax withholding treated separately with similar deposit rules New employer are monthly unless they trigger the one day rule Agricultural employers may have 2 different rules Monthly depositors - due the 15 th of following month Semiweekly – Wed, Th, Fri due by following Wed. Sat, Sun, Mon & Tue. Due following Friday One day rule – impact on monthly depositors 6 6 PAYROLL TAX DEPOSIT RULES (cont.) Semiweekly periods bridging two quarters “De minimis” deposit rules – less than $2,500 can be deposited with form 941 Small employers filing form 944 can pay with return unless annual liability exceeds $1,000 Saturday, Sunday and Holidays – due next banking day “Safe harbor” shortfall rules – no more than $100 or 2% of amount due. Must be made up by “make up” date Monthly due by due date of quarterly return Semiweekly due by the first Wed or Fri occurring on or after the 15 th of the month or if earlier due date of quarterly return 7 7 HOW TO DEPOSIT PAYROLL TAXES Must be made with financial institution authorized by the federal government;(Treasury Financial Agent (TFA)). Employer assigned based on the location of their principal financial institution. Most paid electronically through Electronic Federal Tax Payment System (EFTPS) Effective 2011 almost universal EFTPS for deposits (1000 or 2500) Automatic enrollment for new employers PIN and activation instructions are sent in the mail Activation by calling 800 number Within 15 days will receive form 9787, Business Confirmation/Update form, instruction booklet 8 8 HOW TO DEPOSIT PAYROLL TAXES (cont.) EFTPS payment options: EFTPS – Direct (ACH Direct) – Employer accesses EFTPS directly to report it tax deposit information and instructs EFTPS to move funds to the Treasury Request must be made at least one banking day before due date by 8:00 PM eastern time EFTPS – Credit (ACH Credit) – made through a financial institution. ER directs the FI to fund the Treasury Make sure FI offers the service and their required deadlines Same day payment (2:00PM) 9 9 HOW TO DEPOSIT PAYROLL TAXES (cont.) Holidays – due date is a holiday initiate payment one day before the holiday Proving payment: Decrease in account balance Amount and date of transfer and U.S. Government as the payee No Refunds of overpayments will be made through EFTPS. Use current process of filing form 843, Claim for Refund and Request for Abatement and 941X supporting statement Hardship – file form 911, Application for Taxpayer Assistance Order ACH rules apply (5 banking days) 10 10 HOW TO DEPOSIT PAYROLL TAXES (cont.) Keep accurate records Paper depositors complete form 8109, Federal Tax Deposit Coupon Deposits with no EIN – local IRS office or service center where returns are filed, not financial institution. Provide required info with payment No preprinted coupon – authorized financial institution accompanied by form 8109-B (non preprinted FTD coupon) Late deposit – paid directly to IRS service center with IRS notice 11 11 PENALTIES Failure to Deposit Timely: 2% of under deposited amount if deposited within 5 days of due date 5% if 6-15 days 10% more than 15 days after due date or made within 10 days of IRS notice or made to an unauthorized institution or directly to the IRS. Also applies to paper filing if required to file electronically 15% if not paid within 10 days of receiving IRS notice If using a payroll service liability remains on the employer 12 12 PENALTIES (cont.) How Penalties are applied: IRS is required to apply deposits to the most recent period within the tax period to which the deposit relates Enacted by Congress to reduce the change of facing multiple failure-to- deposit penalties under the previous system Rules apply to 940, 941, 943 and 1042 See page 8-23 for examples 13 13 PENALTIES (cont.) Failure to Withhold Penalty (know as Trust Fund Recovery Penalty) 100% Responsible party Cannot be imposed without first being notified by IRS at least 60 days in advance Liability may be shared Criminal penalties In addition to the 100% penalty, if willful fine up to $10k plus imprisonment for up to 5 years 14 14 EMPLOYMENT TAX RETURNS Form 941 – Employer’s Quarterly Federal Tax Return Exempt from filing 941 Seasonal employers Those withholding Non payroll taxes Employers of domestic workers Agricultural employers Employers with employment tax liabilities of $1000 or less 15 15 MERGERS/ACQUSITIONS OR BUSINESS REORGANIZATION Impact on tax filings Successor employer Predecessor employer Surviving entity Filing forms 941, 940, W-2 and schedule D Surviving corp. files schedule D after filing forms W-2 Acquired corp. should file schedule D with final 941 return 16 16 MERGERS/ACQUSITIONS OR BUSINESS REORGANIZATION (cont.) Successor hire predecessor’s employees – 2 procedures; standard and alternate Standard Procedure Each file form 941 for the quarter of acquisition reporting only what they paid/withheld If predecessor goes out of business it must file a “final” form 941, this procedure no schedule D required Files form W-2 Alternate Procedure Both agree that predecessor will not have to report wages/taxes for EEs hired by the successor on form W2 17 17 FILING FORM 941 Usually due by the last day of the first month following the quarter Automatic extension until the 10 th of the following month if deposits are made on time Saturdays, Sundays, and holidays – due date is next business day Postmark – certified or registered mail recommended Designated PDSs DHL Worldwide Express Federal Express UPS 18 18 FILING FORM 941 Filed with IRS office assigned to the employers region Line by line instruction for form 941, pages 8-34 to 8-37 (form on pages 8-40 to 8-41) Schedule B (page 8-42) Form 945 “Annual Return of Withheld Federal Income Tax” (non payroll) Form 941-M – Monthly reporting for delinquent employers Forms 941PR (Puerto Rico) and 941SS (American Samoa, Guam, Northern Mariana Islands, and Virgin Islands) Form 943 – “Employer’s Annual Federal Tax Return for Agricultural Employees” and form 943-A “Agricultural Employer’s Record of Federal Tax Liability” (similar to schedule B) 19 19 MAKING CORRECTION Form 941X (previously 941C) Errors discovered before form 941 is filed – no biggy FIT, SS or M/C taxes (under/over) discovered after filing form 941 Under - Timely if filed with return in the quarter is was discovered and payment is made timely (even if not collected from EE) Over – over withholding does not need to be reported if repaid to EE For SS and M/C affidavit and receipt required FIT, repay before end of year or send to IRS Form 843 “Claim for Refund and Request for Abatement” 3 year statute of limitation 20 20 LATE REPORTING AND PAYING OF TAXES Penalty for late filing of returns – 5% of tax shown on return (reduced by timely deposit/credits) for each month or fraction of month up to max. 25%. 15% up to 75% if due to fraud Failure to pay employment taxes -.5% of unpaid tax for each month or fraction of month up to 25% Additional.5% for amount on IRS notice if not paid within 21 calendar days of demand (10 business days if $200K+) up to 25% If due to negligence 20% of amount due as a result of the negligence. If fraud 75% Reasonable cause or undue hardship Interest – federal short term rate plus 3% 21 21 LATE REPORTING AND PAYING OF TAXES Criminal penalties Willful failure to file, pay or keep records – fine up to 25K (100K for corp) and/or one year imprisonment Willful delivery of fraudulent tax returns, fine up to 10K (50K for corp) and/or one year imprisonment Willful evasion to pay taxes – 100k (500k corp) and/or 5 years imprisonment Knowingly signing fraudulent forms – 100K (500K corp) and/or 3 years of imprisonment 22 22 FROM W-2 Required anytime taxable compensation is paid even if not in cash subject to withholding. If not subject to withholding required if over $600 Mergers – standard or alternate procedures, schedule D Undeliverable forms – 4 years retention Reissue statement (unless electronic) “void” “corrected” 23 23 FROM W-2 (cont.) Multiple form W-2 Multiple states System limitation More than 4 items in box 12 Last minute 3 rd party sick Etc. When and where to file File with SSA by last day of Feb (Saturday, Sunday, holiday) Due to EE by 1/31 of following year (states may differ) Ex-employee – 30 days if requested Note that some states have shorter periods 24 24 FROM W-2 (cont.) Electronic forms Web or email attachment EE consent required Requirement Disclosure Withdraw consent any time with 30 days written notice Must be available by January 31 and remain until October 15 of following year. Includes W2C Notice to EEs by mail, email or in person by January 31 st of following year with access/print instructions with the statement “IMPORTANT TAX RETURN DOCUMENT AVAILABLE” in caps. If email, must be in subject line 25 25 FROM W-2 (cont.) Employers ceasing business File final returns by end of month following the end of the quarter they cease doing business; including forms W-2 Monthly filers due by 15 th of the calendar month following the month they cease doing business, W-2s by end of month Box by Box instructions – page 8-75 to 8-83 Form W-3 – box by box instructions pg 8-88 to 8-90 (not required for electronic filers) 26 26 RECONCILATION PROCESS Each payroll Quarterly – also balance to preliminary W2 Annually IRS and SSA do speak to each other (believe it or not ) SSA notice if 941 amounts are greater, IRS notice if W-2 forms are greater 27 27 FORMS W-2c AND W-3c Only items that need correcting should be included Form W-3c must be included even if one W-2c Electronic filing required if 250 or more File with SSA Undelivered forms – 4 year retention 28 28 FORM 1099 Multiple types – 1099-MISC, 1099-R, 1099-G etc. Filed with IRS not SSA. Include form 1096 Reportable payments $600 limit Attorney fees, if not reported on W-2 reported on 1099-MISC box 14 No TIN subject to backup withholding Due by January 31 of following year Separate 1096 form with each type of 1099 Pension and retirement plan distributions Electronic delivery similar to W2 requirements 29 29 FORMS 1099 PENALTIES General penalties – $15 per return for failure to file or provide correct information, max $75k per year ($25K for small employers) $30 if not correct in more than 30 days after due date but before 8/1, max 150k (50K form small ER) $50 per return if not correct by 8/1, max 250k (100K for small ER) Penalties increase for willful failure No penalty for errors due to reasonable cause IRS focuses on employers with most egregious mismatch rates 30 30 FAILURE TO PROVIDE INFORMATION STATEMENTS TO EEs W-2 or 1099 not provided on time or with correct information Changed with returns filed in 2011 $30 per statement if provided within 30 days, max $250k a year, $75K for small business $60 per statement if provided >30 days but by Aug. 1 of that year, max $500k a year, $200k for small business $100 per statement if not corrected by Aug. 1 up to 1.5M, $500k for small business Willful failure brings bigger penalties - $250 or 10% of monetary amounts required to be shown on statement, no maximum 31 31 ELECTRONIC REPORTING REQUIREMENTS FOR W-2 250 statement, not EEs Hardship waivers – form 8508, Request for “Waiver From Filing Information Returns Electronically” Automatic extension of 30 days – form 8809, “Application of Time to File Information Returns” sent by due date Penalties Internet – must register Report wages to SSA, view errors, request extension, acknowledge resubmission and view name and SSN mismatches Enter W2 information on line (up to 20) 32 32 ELECTRONIC REPORTING REQUIREMENTS FOR FORMS 1099 250 or more of any singel type of 1099 First time approval required – complete form 4419 “Application for Filing Returns Electronically” 33 33 EFILE FORMS 940, 941, 944 AND 941X Points out errors Instant acknowledgement Integrated payment options Electronic signature On line application to participate after registering for e- services on the IRS website Requires PIN registration Reporting agent registration – form 8655 Submit test file Not considered filed until receipt of acknowledgement as accepted State requirements – page 8-117 to 8-120 Similar presentations © 2017 SlidePlayer.com Inc.
http://slideplayer.com/slide/4168696/
CC-MAIN-2017-13
en
refinedweb
In this article we can explore the table creation in storage account.As a refresh, I would like to repeat that there are 3 types in Azure Storage. The Blob creation was explored in the previous article. Table creation will be explored here.Concepts in Table Following are the key concepts in table. Note: The table we discuss here is not entirely same as our database table. The database table will be discussed in SQL Azure. In Blob Service, the data is stored in containers. But in Table Service, data is stored in tables. The steps involved in creating a table are following: Step 1: Create new project As always create a new azure project and add a web role into it. Now add reference to the StorageClient dll file. Step 2: Define the Entity Now we need to define the entity with the required properties, create a new class, derive it from TableServiceEntity and define the following properties in it. Our entity class is derived from TableServiceEntity because this class takes care of the key properties like PartitionKey, RowKey and necessary attributes. The class definition is given below: public class Contact : TableServiceEntity { public string Name { get; set; } public string Address { } Step 3: Code Part Now we can write the code to do the following activities: The following code performs the following: protected void Page_Load(object sender, EventArgs e){ // Authenticate StorageCredentialsAccountAndKey accountAndKey = new StorageCredentialsAccountAndKey("youraccount", "key"); CloudStorageAccount account = new CloudStorageAccount(accountAndKey, true); CloudTableClient client = account.CreateCloudTableClient(); client.CreateTableIfNotExist("Contact"); // Create table context TableServiceContext tableContext = new TableServiceContext(account.TableEndpoint.ToString(), account.Credentials); // Create entity Contact contact = new Contact(); contact.PartitionKey = "1"; contact.RowKey = "1"; contact.Name = "Contact1"; contact.Address = "Address1"; // Add entity to table and save the changes tableContext.AddObject("Contact", contact); tableContext.SaveChanges();} The account name and key can be obtained as explained in the previous article. Note: We are setting the PartitionKey and RowKey as "1" for demo purposes. The PartitionKey represents the partition identifier where the table is stored. The RowKey should be the unique identifier to the entity. More Info The classes involved for resembles the ADO.NET Entity Framework classes. Step 4: Viewing the entity inserted. You can use the Server Explorer to see the new entity inserted as shown below. Summary In this article we have seen how to define a table and insert entity into it. View All
http://www.c-sharpcorner.com/uploadfile/40e97e/windows-azure-create-table-in-storage-account/
CC-MAIN-2017-13
en
refinedweb
Microsoft Azure recently announced support for webhooks on Azure Alerts. Now you can provide an https endpoints to receive webhooks while creating an alert in the Azure portal.. In this article I will walk you through creating an sample application to receive webhooks from Azure Alerts, configure an Alert to use this endpoint and test the overall flow. Create a Receiver Application Open Visual Studio 2015 and create a New ASP.Net Web Application [Figure 1] Select the Empty template from the available ASP.Net 4.5 Templates and Check to add the Web API folders an core references as below. [Figure 2] Add the Microsoft.AspNet.WebHooks.Receivers.Azure Nuget package. Don’t forget to check Include prerelease if you can find this package in the search results. [Figure 3] After installing the nuget package add the the below line to the Register method in WebApiConfig class. config.InitializeReceiveAzureAlertWebHooks(); You can add the above code after the routing code as shown in Figure 4. [Figure 4] This code registers your webhooks reciever. Next step is to add the below application setting to your web.config file. This setting adds the secretkey to validate that the WebHook requests indeed Azure Alerts. It is advisable to use a SHA256 hash or similar value, which you can get from FreeFormatter Online Tools For Developers. This secret key will be part of the Reciever URL provided in the Azure Portal while creating the Azure Alerts. <appSettings> <add key="MS_WebHookReceiverSecret_AzureAlert" value="d3a0f7968f7ded184194f848512c58c7f44cde25" /> </appSettings> Next we need to add handlers to process the webhooks data sent by Azure Alerts. Add a new class AzureAlertsWebHooksDataHandler and add the below code to it. This is the most basic handler. In the construct we have initialized the Reciever to handle only Azure Alert webhooks. The ExecuteAsync method is the one which is responsible for processing the data posted and return response back to indicate webhooks was received. We will now expand this code to actually process the data received in the webhooks. Let’s store the data posted by the Azure Alerts webhooks sender in Azure table storage. To do this first add the WindowsAzure.Storage nuget package and add the below code to import the Windows azure storage namespaces required here. using Microsoft.WindowsAzure.Storage; using Microsoft.WindowsAzure.Storage.Auth; using Microsoft.WindowsAzure.Storage.Table; using System.Configuration; //To read connectionstring from the config files. Also add your Azure storage connection string in the application settings as below. <add key=”StorageConnectionString” value=”DefaultEndpointsProtocol=https;AccountName=your-account-name;AccountKey=your-account-key” /> And add the a small TableEntity implementation as below to store data in Azure table storage. Finally lets modify the ExecuteAsync method to process the data send by the Webhooks sender and store it in Azure Table storage as below. The data sent in by the WebHooks sender is stored in JSON format, in the Data field of the WebHookHandlerContext object which is passed in as a parameter to the ExecuteAsync method. In the above method, I’m converting it to string and storing in Azure Table storage. Now publish this code to an Azure Website. After publishing you can use the below URL to configure Azure Alerts to send Webhooks to the receiver we created above. https://<host>/api/webhooks/incoming/azurealert?code=d3a0f7968f7ded184194f848512c58c7f44cde25 Note: The Code in the above URL is the same as the secret key we have configured in the application settings. Configure webhooks for Azure Alerts Now Log in to the new Azure portal to configure an Azure alert to send Webhooks to the receiver we created above. Browse and select a resource for which you want to configure the alerts. For simplicity lets create an alert for the above webhooks reciever Azure website we created. Create a new alert(Webhooks currently supported on metric alerts only), and provide your webhooks reciever URL in the WebHooks field as below. [Figure 5] Verify the Results: Configure the alerts help you verify the results quickly. You can accomplish this by keeping the Threshold and the Period to the minimum. I have set the Period to 5 Minutes in the above example. Hence, after 5 minutes if the threshold is reached, an alert is fired and webhooks posted to our receiver URL. This data is then processed and stored to Azure table storage as below. [Figure 6] Sample JSON object posted by the Azure Alerts Webhooks is as below. { “status”: “Resolved”, “context”: { “id”: “/subscriptions/<your-subscriptionId>/resourceGroups/webhooksdemo1/providers/microsoft.insights/alertrules/webhooksdemo”, “name”: “webhooksdemo”, “description”: “webhooksdemo”, “conditionType”: “Metric”, “condition”: { “metricName”: “Requests”, “metricUnit”: “Count”, “metricValue”: “1”, “threshold”: “1”, “windowSize”: “5”, “timeAggregation”: “Total”, “operator”: “GreaterThan” }, “subscriptionId”: “<your-subscriptionId>”, “resourceGroupName”: “webhooksdemo1”, “timestamp”: “2015-10-14T09:43:20.264882Z”, “resourceName”: “mywebhooksdemo1”, “resourceType”: “microsoft.web/sites”, “resourceId”: “/subscriptions/<your-subscriptionId>/resourceGroups/webhooksdemo1/providers/Microsoft.Web/sites/MyWebhooksDemo1”, “resourceRegion”: “East US”, “portalLink”: “. com/#resource/subscriptions/ <your-subscriptionId>/resourceGroups/webhooksdemo1/providers/Microsoft.Web/sites/MyWebhooksDemo1″ }, “properties”: {} } Alternatively you can also use the Fiddler request composer to post to you webhooks Receiver URL and check the response the corresponding updates in the Azure Table storage. Make sure that the content-type is marked as json and the request body has json similar to the above example. A fiddler request should look like the below example. [Figure 7] Note: Webhooks are internally configured to retry a few times until they receive a successful response from the receiver within a short duration. Hence you might see multiple requests hitting an endpoint in the ExecuteAsync method if you are debugging it remotely. References: Receive WebHooks from Azure Alerts and Kudu (Azure Web App Deployment) by Henrik F Nielsen Introducing Microsoft ASP.NET WebHooks Preview by Henrik F Nielsen Webhooks for Azure Alerts How to configure webhooks for alerts
https://blogs.msdn.microsoft.com/cie/2015/11/02/webhooks-for-azure-alerts-creating-a-sample-asp-net-receiver-application/
CC-MAIN-2017-13
en
refinedweb
"Market Profile tries to provide this internal logic in the context of the market. It is a method of analysis that starts off with the understanding that price alone does not communicate information to the participant, just as words without syntax or context may have no meaning. Volume is an integral part of the direct expression of the market - understand it and you understand the language of the the market." Robin Mesh Introduction A long time ago, looking through a magazines subscription I have found an article "Market Profile and understanding the market language" (October, 2002) in the Russian journal "Valutny Spekulant" (At present time it called as "Active Trader"). The original article has been published in "New Thinking in Technical Analysis: Trading Models from the Masters". The Market Profile was developed by trully brilliant thinker Peter Steidlmayer. He found the natural expression of the market (volume) and organized it in a way that is readable (the bell curve), so that objective information generated by the market is accessible to market participiants. Steidlmayer has suggested to use the alternative representation of information about "horizontal" and "vertical" market movements that leads to completely different set of models. He assumed that there is an underlying pulse of the market or a fundamental pattern called the cycle of equilibrium and disequilibrium. Market Profile measures the horizontal market movement through the vertical one. Let's call that "equillibrium" through "disequillibrium". This relationship is the fundamental organizing principle of the market. A trader's whole style of trading may change depending on what part of the equilibrium/disequilibrium cycle the market is in. Market Profile can determine both when the market is going to shift from equilibrium to disequilibrium and how big the move is going to be. The two basic concepts of Market Profile are: - The market is an auction, and it moves in the direction of the price range where the supply and demand are more or less equal. - The market has two phases: horizontal activity and vertical activity. The market moves vertically when supply and demand are not equal or they are in disequillibrium, and horizontally when they are in equilibrium or balanced. The equilibrium market shown using the Market Profile in the chart below tends to form an almost perfect bell-shaped curve, rotated by 90 degrees because of the orientation of the chart: Fig 1. The Market Profile of equilibrium market The trend, non-equilibrium market also forms a bell-shaped curve, but its center is shifted up or down. Other configurations that form two peaks of bells ,depending on the price movement and the confidence of market players, are possible. Fig 2. The Market Profile of disequilibrium (trend) market Use of the daily profile shapes to determine the degree of balance/imbalance level of the market can be useful, because it gives you a starting point in understanding the shifts of between various market participants. A trading opportunity with the greatest benefit appears when the shift from balance to imbalance is about to occur. Moreover, if you can identify that trading opportunity and accurately estimate the potential magnitude of that shift, then you can estimate the quality of that trade and the amount of time that is necessary for it. You can find the example of methodology of working with this tool can be found at, where a group of traders has studied the Price Histrogram since 1998. The Enthios Universal strategy and example of its use you can also be found there. 1. Price Histogram The Price Histogram is a very reliable tool. It's a bit intuitive but extremely effective. The price histogram simply shows you the "most convenient" trading points of the market. This is aa leading indicator, because it shows the points where market can change its direction in advance. The indicators like moving averages or oscillators cannot specify the exact points of resistance and support, they can only show the fact whether the market is overbought or oversold. Usually, the Price Histogram (or Market Profile) is applied to 30 min. price charts to study the market activity during one day. I prefer to use the 5 min. charts for stock markets and 15-30 min. charts for FOREX. 2. Point of Control In the figure above you can see the level where the market was traded the maximum amount of time; it's outlined with the longest line in the histogram. It's called the Point of Control, or POC. Sometimes, as it seen in the figure, the histogram has two tops, one of them is a little bit lower. In such case, we see that the indicator shows just one POC, but in fact there are two of them, and it should be taken into the account. In addition, the percentage level of the range in the histogram also creates additional levels, so called Secondary POC levels: Fig 3. Points of control What does POC show? The price that is remembered by most of traders. The longer market is traded at this price, the longer market remembers it. Psychologically POC acts as a center of attraction. The next chart shows what happened a few days earlier. It's a good demonstration of power of the Price Histogram. Fig 4. The Point of Control isn't absolute; it shows the range of trade The Point of Control isn't absolute; it indicates the range of trading. Thus, the trader should be ready to act when the market approaches to POC. It helps to optimize orders, using the historical observations. Let's consider the Fig. 4. The POC on 29.12.2009 is located at price 68.87. It is clear even without the histogram and POC line that the market was within the 68.82~68.96 range almost all the day. The market closed at the end of the day at 5 points below the POC. On the next day it caused the opening of the market with a gap down. It's important to understand that we can't predict whether the market will go up or down. We can only suppose that the market will return to the POC line and to the maximal accumulation of the histogram lines. But what will happen when the price touches the POC? The same thing that happens with an elastic object that falls to the ground, it will jump back. If it happens quickly, like a tennis ball stroke back with a racket, the price will return very quickly back to the initial level. After the market opening on 30.12.2009 we see that it was a gap and then market touched the POC of the previous day, and then quickly returned to the open price and updated the minimum. Note that POC is not absolutely accurate (experienced traders know that there is no clear resistance levels when the price reaches a maximum, minimum or concentration range). What happens at this point depends on the market players. If the collective desire (for example, news publication) coincides, then market will pass through the POC, but it's rare and it can be used to develop a trading system. Pay attention that the market behavior was the same on 31.12.2009. When the price touched POC, the buyers yielded to the sellers. 3. Virgin Point of Control The Virgin POC (Virgin Point of Contro) is a level that price hasn't reached in the next days. The logics is simple, as described above, the POC is an attraction point for the market. As the price moves away from the POC, the force of attraction increases. And the more the price goes away from the Virgin POC, the greater is the possibility that when it returns to this level the rebounce will occur and probably a price reverse will occur as well. Fig 5. Former and current Virgin POC At Fig. 5 the former Virgin POCs that were the support and resistance levels are marked with circles. The working Virgin POC are marked with price values. Once the price has touched the Virgin POC, it ceases to be a "virgin". Psychologically, the market no longer sees it as a substantial level of support or resistance. The traders still can see the price levels, which initially has formed the POC, but as a simple accumulation of prices. 4. Price Histogram Implementation in MQL5 My first version of Price Histogram appeared in 2006 it was written in MQL4 in MetaTrader4 for personal use. During the development of this indicator I faced some troubles, here are some of them: - very short number of bars in history for M5, not to speak of M1; - the necessity to develop special functions for working with histrory, such as returning back by one day considering holidays, checking for the market close time on Friday, checking for the open and close time for CFD market, etc; - recalculation of indicator when changing timeframes and, as a result, terminal delays. Therefore when beta-testing of МetaТrader5 and MQL5 has started, I have decided to convert it to MQL5. As people say, "the first pancake is always a bit tricky", I have tried to implement it as an indicator. Let's start with the good: the presence of the long history of minute quotes for all symbols, the possibility of historical data obtaining for a certain time period at any time range. Now I will explain why it has turned out. I haven't considered the features of the MQL5 indicators: - the indicator's runtime is critical; - the features of the indicator's work after the timeframe changes. The execution of function OnCalculate(), which corresponds to Calculate event handler has a critical runtime. Accordingly, the processing of 260 days (annual period) using the minute bars history takes a long time, up to several minutes. Of course we can accept it, if the calculations performed at once after the indicator attaching to the chart. But this isn't the case for the timeframe changes. When indicator switches to the different timeframe, the old copy of the indicator is destroyed and created the new one. That's why after the timeframe changes we have to recalculate the same levels again and it spends a lot of time. But as saying, if you don't know what to do - "Read the documentation first", in our case, it's the documentation of MQL5. The solution was very simple - to implement this indicator as Expert Advisor which doesn't trade. The advantages of Expert Advisor are: - the processing time isn't critical for Init event handler in OnTick (); - the possibility to obtain the parameters of the handler OnDeinit (const int reason). The Expert Advisors are differing from the indicators the following: after the timeframe change the expert advisor just generates the DeInit event with REASON_CHARTCHANGE reason parameter, it doesn't unload the Expert Advisor from memory and serves the values of global variables. It allows us to perform all calculations at once after Expert Advisor attaching, changing its parameters and new data appearing, in our case for a new trading day. Let's introduce some definitions that will be needed later. The Object-oriented programming (OOP) - is a style of programming which basic concepts are the concepts of objects and classes. The Object is an entity in the virtual space, with specified state and behavior; it has some values of properties (called as attributes) and operations with them (called as methods). In OOP the Class is a special abstract data type, characterized by means of its construction. The class is a key concept in OOP. The class is differing from the other abstract data types. The data definition in class also contains class methods of its data processing (interface). In programming there is a software interface concept that means a list of possible computations that can be performed by some part of the program, including algorithms, description of arguments and order of input parameters to proceed and its return values. The abstract data type interface has been developed for a formalized description of such a list. The algorithms itself and the code that will perform all these calculations aren't specified and called as interface implementation. The class creation is creation of some structure with fields and methods. The entire class can be considered as a template for the objects creation, which are class instances. The class instances are created using the same template, so they have the same fields and methods. Let's get started... The source code is located in 4 files. The main file is PriceHistogram.mq5, the other files are: ClassExpert.mqh, ClassPriceHistogram.mqh and ClassProgressBar.mqh. The files with .mqh extension contain the classes' description and methods. All files must be located in the same directory, My directory is: \MQL5\ Experts\PriceHistogram. 4.1. PriceHistogram.mq5 The first statement in the source code is: #include "ClassExpert.mqh" The #include compiler directive includes the text from the specified file. In our case it is description of class CExpert (discussed below). The next one is a block of input variables which are parameters of Expert Advisor. // The block input parameters input int DayTheHistogram = 10; // Days for histogram input int DaysForCalculation= 500; // Days for calculation(-1 all) input uint RangePercent = 70; // Percent range input color InnerRange =Indigo; // Inner range input color OuterRange =Magenta; // Outer range input color ControlPoint =Orange; // Point of Control input bool ShowValue =true; // Show Values After that the variable ExtExpert (of CExpert class type) is declared. The next is the standard event handlers which are in MQL5-programs. The event handlers call for the corresponding methods of CExpert class. int OnInit() { //--- // We check for symbol synchronization before the start of calculations int err=0; while(!(bool)SeriesInfoInteger(Symbol(),0,SERIES_SYNCRONIZED) && err<AMOUNT_OF_ATTEMPTS) { Sleep(500); err++; } // CExpert class initialization ExtExpert.RangePercent=RangePercent; ExtExpert.InnerRange=InnerRange; ExtExpert.OuterRange=OuterRange; ExtExpert.ControlPoint=ControlPoint; ExtExpert.ShowValue=ShowValue; ExtExpert.DaysForCalculation=DaysForCalculation; ExtExpert.DayTheHistogram=DayTheHistogram; ExtExpert.Init(); return(0); } When I wrote the first version of the Expert Advisor and run it, I have some trouble with understanding why it terminates with error after the client terminal restart or a symbol changes. And it occurs when the client terminal was disconnected or a symbol hasn't used for a long time. It's great that developers have added the debugger to MetaEditor5. I remember a lot of Print() and Comment() commands, used for checking of values of variables in MetaEditor4. Many thanks to MetaEditor5 developers. In my case, everything was easy; the expert starts before connection to server and update of the historical data. To resolve this problem, I had to use the SeriesInfoInteger(Symbol(),0,SERIES_SYNCRONIZED), which reports that data is synchronized or not, and the cycle while (), the case of the connection absence it uses the counter variable err. Once the data has been synchronized, or the cycle has completed because of the counter in the absence of connection, we pass the input parameters of our expert class CExpert, and call the class initialization method Init (). As you see, thanks to classes concept in MQL5, our file PriceHistogram.mq5 has transformed into a simple template, and all further processing is in the CExpert class, declared in the file ClassExpert.mqh. 4.2. ClassExpert.mqh Let's consider its description. //+------------------------------------------------------------------+ //| Class CExpert | //| Class description | //+------------------------------------------------------------------+ class CExpert { public: int DaysForCalculation; // Days to calculate (-1 for all) int DayTheHistogram; // Days for Histogram int RangePercent; // Percent range color InnerRange; // Internal range color color OuterRange; // Outer range color color ControlPoint; // Point of Control (POC) Color bool ShowValue; // Show value The public section is open and accessible from the outside variables. You'll notice that names of the variables coincide with names of the input parameters section described in PriceHistogram.mq5. It isn't necessary because the input parameters are global. But in this case - is a tribute to the good breeding rules, it's desirable to avoid using of external variables within the class. private: CList list_object; // The dynamic list of CObject class instances string name_symbol; // Symbol name int count_bars; // Number of daily bars bool event_on; // Flag of events processing The private section is closed from the outside and accessible only within the class. I would like to outline the variable list_object of CList type, which is a class of the standard MQL5 library. The CList Class is a dynamic class with a list of instances of CObject class and its heirs. I will use this list for the references storage for the CPriceHistogram class elements, which is an heir of the CObject class; we'll consider the details below. The CList class description is in the List.mqh, and it includes by using the compiler directive #include <Arrays\List.mqh>. public: // Class constructor CExpert(); // Class destructor ~CExpert(){Deinit(REASON_CHARTCLOSE);} // Initialization method bool Init(); // Deinitialization method void Deinit(const int reason); // Method of OnTick processing void OnTick(); // Method of OnChartEvent() event processing void OnEvent(const int id,const long &lparam,const double &dparam,const string &sparam); // Method of OnTimer() event processing void OnTimer(); }; The following is a public methods section. As you have guessed these methods (functions) are available outside the class. And finally the brace with a semicolon completes the class description. Let's consider class methods in detail. The class constructor is a special block of statements, called when the object is created. Constructor is similar to the method, but differs from method that it hasn't explicitly a certain type of returned data. In MQL5 language, the constructors can't have any input parameters, and each class should have an only one constructor. In our case, the constructor is a primary initialization of variables. The destructor is a special class method that used for the object deinitialization (eg, free memory). In our case the method is called as Deinit (REASON_CHARTCLOSE); The Init() is a method for the class initialization. This is the most important method of CExpert class; the creation of histogram objects has performed there. Please look at the comments for the details. But I would like to consider there points. The first, to build up a daily Price Histogram we need the open time data for days to proceed. Here I would like to digress and draw your attention to the features of working with time series. For the data request from the other timeframes we needed a time, so the functions Bars () and CopyTime (), as well as other functions to work with timeseries aren't always return the desired data from the first call. So I had to put this function in the do (...) while () loop, but to make it finite, I have used the counter variable. int err=0; do { // Calculate the number of days which available from the history count_bars=Bars(NULL,PERIOD_D1); if(DaysForCalculation+1<count_bars) count=DaysForCalculation+1; else count=count_bars; if(DaysForCalculation<=0) count=count_bars; rates_total=CopyTime(NULL,PERIOD_D1,0,count,day_time_open); Sleep(1); err++; } while(rates_total<=0 && err<AMOUNT_OF_ATTEMPTS); if(err>=AMOUNT_OF_ATTEMPTS) { Print("There is no accessible history PERIOD_D1"); name_symbol=NULL; return(false); } Secondly, the minute history of MetaTrader 5 is equal to the days available, so it can take a lot of time to proceed, so it's necessary to visualize the calculation process. The class CProgressBar (#include "ClassProgressBar.mqh") has been developed for this purpose. It creates the progress bar in the chart window and updates it during the calculation process. // We create the progress bar on the char to shot the loading process CProgressBar *progress=new CProgressBar; progress.Create(0,"Loading",0,150,20); progress.Text("Calculation:"); progress.Maximum=rates_total; The third, in cycle, using the "new" statement, we create the CPriceHistogram object, configure it using its methods and initialize it by calling the Init(). If successful, we add it to the list_object list, overwise we delete the hist_obj by using delete statement. The CPriceHistogram class description will presented further, see comments in the code. // In this cycle there is creation of object CPriceHistogram // its initialization and addition to the list of objects for(int i=0;i<rates_total;i++) { CPriceHistogram *hist_obj=new CPriceHistogram(); // hist_obj.StepHistigram(step); // We set the flag to show text labels hist_obj.ShowLevel(ShowValue); // We set POCs colour hist_obj.ColorPOCs(ControlPoint); // We set colour for inner range hist_obj.ColorInner(InnerRange); // We set colour for outer range hist_obj.ColorOuter(OuterRange); // We set the percent range hist_obj.RangePercent(RangePercent); // hist_obj.ShowSecondaryPOCs((i>=rates_total-DayTheHistogram),PeriodSeconds(PERIOD_D1)); if(hist_obj.Init(day_time_open[i],day_time_open[i]+PeriodSeconds(PERIOD_D1),(i>=rates_total-DayTheHistogram))) list_object.Add(hist_obj); else delete hist_obj; // Delete object if there was an error progress.Value(i); }; The OnTick() is a method called when you receive a new tick for a symbol. We comparing the values of the number of days stored in the variable count_bars with the number of daily bars returned by Bars (Symbol (), PERIOD_D1) and if they are not equal we forcedly call the method Init () for the class initialization, clearing the list list_object and changing the variable to NULL name_symbol . If the number of days has not changed, the loop goes through all objects stored in the class CPriceHistogram list_object, and execute a method Redraw (), for those who are Virgin ( «virgin"). The Deinit() is a method for class deinitialization. In the case of REASON_PARAMETERS (input parameters were changed by the user) we clear the list_object list and set name_symbol variable to NULL. In other cases, the expert doesn't do anything, but if you want to add something, read the comments. The OnEvent() is a method for event processing of the client terminal. Events are generated by the client terminal when user working with chart. The details can be found in the documentation of MQL5 language. In this Expert Advisor the chart event CHARTEVENT_OBJECT_CLICK has been used. By clicking on histogram element the it shows the secondary POC levels and inverses the histogram color. The OnTimer(void) is a method for timer events processing. It doesn't used in my programs, but if you want to add some timer actions (for example, to show the time) - it's here. Before use it's necessary to add the following line to the class constructor: EventSetTimer(time in seconds); And the following line to the destructor: EventKillTimer(); before calling the method Deinit (REASON_CHARTCLOSE). iWe have considered the CExpert class;it has been created for the demonstration of the CPriceHistogram class methods. //+------------------------------------------------------------------+ //| Class CPriceHistogram | //| Class description | //+------------------------------------------------------------------+ class CPriceHistogram : public CObject { private: // Class variables double high_day,low_day; bool Init_passed; // Flag if the initialization has passed or not CChartObjectTrend *POCLine; CChartObjectTrend *SecondTopPOCLine,*SecondBottomPOCLine; CChartObjectText *POCLable; CList ListHistogramInner; // list for inner lines storage CList ListHistogramOuter; // list for outer lines storage bool show_level; // to show values of level bool virgin; // is it virgin bool show_second_poc; // show secondary POC levels double second_poc_top; // value of the top secondary POC level double second_poc_bottom; // value of the bottom secondary POC level double poc_value; // POC level value color poc_color; // color of POC level datetime poc_start_time; datetime poc_end_time; bool show_histogram; // show histogram color inner_color; // inner color of the histogram color outer_color; // outer color of the histogram uint range_percent; // percent range datetime time_start; // start time for construction datetime time_end; // final time of construction public: // Class constructor CPriceHistogram(); // Class destructor ~CPriceHistogram(){Delete();} // Class initialization bool Init(datetime time_open,datetime time_close,bool showhistogram); // To level value void ShowLevel(bool show){show_level=show; if(Init_passed) RefreshPOCs();} bool ShowLevel(){return(show_level);} // To show histogram void ShowHistogram(bool show); bool ShowHistogram(){return(show_histogram);} // To show Secondary POC levels void ShowSecondaryPOCs(bool show){show_second_poc=show;if(Init_passed)RefreshPOCs();} bool ShowSecondaryPOCs(){return(show_second_poc);} // To set color of POC levels void ColorPOCs(color col){poc_color=col; if(Init_passed)RefreshPOCs();} color ColorPOCs(){return(poc_color);} // To set internal colour of histogram void ColorInner(color col); color ColorInner(){return(inner_color);} // To set outer colour of histogram void ColorOuter(color col); color ColorOuter(){return(outer_color);} // To set percent range void RangePercent(uint percent){range_percent=percent; if(Init_passed)calculationPOCs();} uint RangePercent(){return(range_percent);} // Returns value of virginity of POC level bool VirginPOCs(){return(virgin);} // Returns starting time of histogram construction datetime GetStartDateTime(){return(time_start);} // Updating of POC levels bool RefreshPOCs(); private: // Calculations of the histogram and POC levels bool calculationPOCs(); // Class delete void Delete(); }; In the description of the class, I tried to provide comments for class variables and methods. Let's consider some of them in details. //+------------------------------------------------------------------+ //| Class initialization | //+------------------------------------------------------------------+ bool CPriceHistogram::Init(datetime time_open,datetime time_close,bool showhistogram) This method uses three input parameters - the opening of the building, the closing time of construction and a flag indicating to construct a histogram, or only the levels of POCs. In my example (class CExpert) input parameters are passed at the opening day and time of opening the next day day_time_open [i] + PeriodSeconds (PERIOD_D1). But when you use this class, nothing prevents to ask, for example, the time of European, American session, or the gap size in the week, month, etc. //+---------------------------------------------------------------------------------------+ //| Calculations of the histogram and POCs levels | //+---------------------------------------------------------------------------------------+ bool CPriceHistogram::calculationPOCs() In this method, the origin of all levels and the calculations of their construction, it is a closed private method, inaccessible from the outside. // We get the data from time_start to time_end int err=0; do { //--- for each bar we are copying the open time rates_time=CopyTime(NULL,PERIOD_M1,time_start,time_end,iTime); if(rates_time<0) PrintErrorOnCopyFunction("CopyTime",_Symbol,PERIOD_M1,GetLastError()); //--- for each bar we are copying the High prices rates_high=CopyHigh(NULL,PERIOD_M1,time_start,time_end,iHigh); if(rates_high<0) PrintErrorOnCopyFunction("CopyHigh",_Symbol,PERIOD_M1,GetLastError()); //--- for each bar we are copying the Low prices rates_total=CopyLow(NULL,PERIOD_M1,time_start,time_end,iLow); if(rates_total<0) PrintErrorOnCopyFunction("CopyLow",_Symbol,PERIOD_M1,GetLastError()); err++; } while((rates_time<=0 || (rates_total!=rates_high && rates_total!=rates_time)) && err<AMOUNT_OF_ATTEMPTS&&!IsStopped()); if(err>=AMOUNT_OF_ATTEMPTS) { return(false); } poc_start_time=iTime[0]; high_day=iHigh[ArrayMaximum(iHigh,0,WHOLE_ARRAY)]; low_day=iLow[ArrayMinimum(iLow,0,WHOLE_ARRAY)]; int count=int((high_day-low_day)/_Point)+1; // Count of duration of a finding of the price at each level int ThicknessOfLevel[]; // create an array for count of ticks ArrayResize(ThicknessOfLevel,count); ArrayInitialize(ThicknessOfLevel,0); for(int i=0;i<rates_total;i++) { double C=iLow[i]; while(C<iHigh[i]) { int Index=int((C-low_day)/_Point); ThicknessOfLevel[Index]++; C+=_Point; } } int MaxLevel=ArrayMaximum(ThicknessOfLevel,0,count); poc_value=low_day+_Point*MaxLevel; First, we get the minute bars history data for a certain period of time (iTime [], iHigh[], iLow[]). Then we find the maximum and minimum element of iHigh[] iand Low[]. Then we calculate the number of points (count) from minimum to maximum, and reserve the array ThicknessOfLevel with ThicknessOfLevel elements. In the cycle we go through the each minute candle from Low to High, and adding the data of the time period presence at this price level. Then we find the maximal element of the ThicknessOfLevel array, it will be the level at which price was the longest time. This is our POC level. // Search for the secondary POCs int range_min=ThicknessOfLevel[MaxLevel]-ThicknessOfLevel[MaxLevel]*range_percent/100; int DownLine=0; int UpLine=0; for(int i=0;i<count;i++) { if(ThicknessOfLevel[i]>=range_min) { DownLine=i; break; } } for(int i=count-1;i>0;i--) { if(ThicknessOfLevel[i]>=range_min) { UpLine=i; break; } } if(DownLine==0) DownLine=MaxLevel; if(UpLine==0) UpLine=MaxLevel; second_poc_top=low_day+_Point*UpLine; second_poc_bottom=low_day+_Point*DownLine; The next step is to find the secondary POC levels. Recall that our diagram is divided. Recall that our histogram is divided into two ranges, the internal and external (displayed in different colors) and size range is defined in percentage of time of the price at this level. The range of the internal boundaries are Secondary POC levels. After finding the Secondary POC - Borders percent range, proceed to the construction of the histogram. // Histogram formation if(show_histogram) { datetime Delta=(iTime[rates_total-1]-iTime[0]-PeriodSeconds(PERIOD_H1))/ThicknessOfLevel[MaxLevel]; int step=1; if(count>100) step=count/100; // Calculate the step of the histogram (100 lines as max) ListHistogramInner.Clear(); ListHistogramOuter.Clear(); for(int i=0;i<count;i+=step) { string name=TimeToString(time_start)+" "+IntegerToString(i); double StartY= low_day+_Point*i; datetime EndX= iTime[0]+(ThicknessOfLevel[i])*Delta; CChartObjectTrend *obj=new CChartObjectTrend(); obj.Create(0,name,0,poc_start_time,StartY,EndX,StartY); obj.Background(true); if(i>=DownLine && i<=UpLine) { obj.Color(inner_color); ListHistogramInner.Add(obj); } else { obj.Color(outer_color); ListHistogramOuter.Add(obj); } } } It should be mentioned that in order to reduce the load on the terminal, I bring to the screen a maximum of 100 lines for each histogram. Lines of the histogram are stored in two lists, and ListHistogramInner ListHistogramOuter, which are the objects already known to us class CList. But these pointers are stored in a standard class of objects CChartObjectTrend. Why two lists, I think you can guess from the title, to be able to change the color histogram. // We receive data beginning from the final time of the histogram till current time err=0; do { rates_time=CopyTime(NULL,PERIOD_M1,time_end,last_tick.time,iTime); rates_high=CopyHigh(NULL,PERIOD_M1,time_end,last_tick.time,iHigh); rates_total=CopyLow(NULL,PERIOD_M1,time_end,last_tick.time,iLow); err++; } while((rates_time<=0 || (rates_total!=rates_high && rates_total!=rates_time)) && err<AMOUNT_OF_ATTEMPTS); // If there isn't history, the present day, level is virgin, we hoist the colours if(rates_time==0) { virgin=true; } else // Otherwise we check history { for(index=0;index<rates_total;index++) if(poc_value<iHigh[index] && poc_value>iLow[index]) break; if(index<rates_total) // If level has crossed poc_end_time=iTime[index]; else virgin=true; } if(POCLine==NULL) { POCLine=new CChartObjectTrend(); POCLine.Create(0,TimeToString(time_start)+" POC ",0,poc_start_time,poc_value,0,0); } POCLine.Color(poc_color); RefreshPOCs(); I have tried to design the CPriceHistogram with all necessary methods, if it insufficient, you can add yourself, and I will help with it. Summary Once again I would like to remind that the Price Histogram is reliable, but the intuitive tool, so the confirmation signals are necessary for its use. Thank you for your interest. I am ready to answer to all your questions. Translated from Russian by MetaQuotes Software Corp. Original article:
https://www.mql5.com/en/articles/17
CC-MAIN-2017-13
en
refinedweb
Finance Midterm > Preview The flashcards below were created by user jtpdogyo on FreezingBlue Flashcards . Get the free mobile app Take the Quiz Learn more Ch8 In the context of capital budgeting, what is an opportunity cost? an opportunity cost refers to the value of an asset or other input that will be used in a project. The relevant cost is what the asset or input is actually worth today, not, for example, what it cost to acquire. Ch8 Incremental Cash Flows the reduction in the sales of the company's other products referred to as erosion. These lost sales are included because they are a cost (a revenue reduction) that the firm must bear if it chooses to produce the new product. Operating costs, depreciation expense, resale value, personnel. Ch6 Why does the value of a share of stock depend on dividends? The value of any investment depends on the present value of its cash flows; i.e., what investors will actually receive. The cash flows from a share of stock are the dividends. Ch6 Investors are willing to buy shares for companies that don't pay dividends. Why? Investors believe the company will eventually start paying dividends (or be sold to another company). Ch6 Under what circumstances might a company choose not to pay dividends? companies that need the cash will often forgo dividends since dividends are a cash expense. Young, growing companies with profitable investment opportunities are one example; another example is a company in financial distress. Ch6 What assumptions can we use dividend growth model to determine the value of a share of stock? Reasonableness? The general method for valuing a share of stock is to find the present value of all expected future dividends. The dividend growth model presented in the text is only valid (i) if dividends are expected to occur forever; that is, the stock provides dividends in perpetuity, and (ii) if a constant growth rate of dividends occurs forever. A violation of the first assumption might be a company that is expected to cease operations and dissolve itself some finite number of years from now. The stock of such a company would be valued by applying the general method of valuation explained in this chapter. A violation of the second assumption might be a start- but is expected to eventually start making dividend payments some number of years from now. Effective annual rate (EAR) If a rate is expressed annually, but compounded more frequently, then the effective rate is higher than the stated rate. What is a firm worth? A firm should be worth the present value of the firm's cash flows. Financial Decision Making Investment Decision Financing Decision Payout Decision Investment Decision Invest in Assets that earn more than the minimum acceptable rate of return Financing Decision Find the right mix of equity and debt, and the right kind of debt to finance your operations Payout Decision If you cannot find investments that return the minimum acceptable rate, then return cash to the shareholders. What kinds of securities are issued by corporations? Debt--ownership interest Equity--Short or long term borrowing Debt vs equity: debt not an ownership interest creditors do not have voting rights interest is considered cost of doing business and is tax deductible creditors have legal recourse if interest or principal payments are missed Debt vs equity: equity ownership interest common stockholders vote for the board of directors and other issues dividends are not considered a cost of doing business and are not tax deductible dividends are not the liability of the firm, and stockholders have no legal recourse if dividends are not paid Bond princing equation annuity of coupon payment and PV of principal face value Premium When YTM<coupon Discount YTM>Coupon Two types of interest rate risk; sensitivity depends on two things Price Risk Reinvestment rate risk 2 things: coupon rate, time to maturity higher coupons have lower interest rate risk lower coupons have higher interest rate risk higher maturity, greater interest rate risk higher maturity, lower reinvestment risk Price risk Changes in price due to changes in interest rate Reinvestment rate risk Uncertainty concerning rates at which cash flow can be reinvested bond classifications Registered vs bearer forms (terms of a bond) security *collateral--secured by financial securities *mortgage-secured by real property, norm land or bldgs *debentures-unsecured *notes-unsecured debt with original maturity less than 10 years Seniority (indicates preference in position over other lenders) Sinking funds (lower rate returns, req by investors, $$ to set aside to pay for debt) Call provisions (allows company to call) Protective covenants (limits certain actions a company might otherwise take during term of loan). Some common bonds Gov bonds Zero coupon bonds (pure discount)--no coupons, just payment @ end, cannot sell for more than face value Floating rate bonds--mortgage, adjustable rate; coupon rate floats depnedign on some index value; coupons may have a collar, canot exceed ceiling or floor. Other bond types Income bonds (coupon payments dependent on company income) Convertible bonds (can be swapped for fixed # of shares before maturity) Put bonds (allows holder to force issuer to buy back) Treasury securities Federal govt debt T-bill : pure discount bonds, maturity < 1 yr T-note : coupon debt, maturity b/w 1 and 10 yrs T-bond : coupon debt, maturity > 10 yrs Municipal securities Debt of state and local governments Varying degrees of default risk, rated similar to corp debt Interest received is tax-except at the federal level What is r? current market rate of interest; risk received by marekt in investing in bonds of this nature Real rate of interest change in purchasing power Nominal rate of interest quoted rate of interest, change in purchasing power and inflation; had not been adjusted for inflation term structure of interest rates relationship b/w time and matuirty and yields, all else equal yield curve is graphical representation of this Stock ownership produces cash flows from: Dividends Capital gains Ch6 stocks discount rate composed of dividend yield and capital gains great deal of estimation error: when stocks not paying dividends or stockts with g expected to equal or exceed R Growth opportunities Growth opportunities are opportunities to invest in positive NPV projects. Value of a firm : sum of 100% of earnings as dividends + NPVGO Prerequisities to growth It must not pay out all earnings as dividends It must invest in projects with a positive NPV Price-earnings ratio P/E ratio=price per share/EPS Factors impacting the P/E ratio firm's growth opportunites firm's R conservative accounting principles A note of preferred stock stated div must be paid before dividends can be paid to common stockholders. div are not a liability of the firm, and preferred dividends can be deferred indefinitely. most preferred div are cumulative--any missed preferred div have to be paid before common dividends can be paid. preferred stock generally does not carry voting rights. Stock markets Dealers (maintain inventory) vs brokers (bring buyers and sellers together) NYSE : largest stock market in world, specialists (assigned dealer for set of securities) NASDAQ : not a physical exchange, computer -based quotation system, multiple market makers (diff from specialists) Payback period method (dis and ad) Dis: cashflows beyond payback are ignored ignoring time value of money arbitrary cutoff date Ad: makes sense for smaller companeis & social enterprises. used when there is a limitation on capital. "limited budgets" use method to decide whether to invest. Average accounting return (dis) Doesn't take into account time value of $$, accounting numbers not cashflows. IRR (dis and ad) Accept if IRR > required return Dis: Does not distinguish b/w investing and borrowing IRR may not exist, or there may be multiple IRRs May not work when comparing mutually exclusive investments whose cashflows differ in scale and timing Ad: Easy to understand and communicate IRR will generall give the same answer as nPV (except for when problems above occur) NPV vs IRR most time same decision exception: cash flows sign change more than once mutually exclusive projects : initial investments are substantially different, timing of cash flows sub diff PI (dis and ad) Accept if PI>1 Dis: Problems with mutually exclusive investments Can't compare projects of different scale Ad: May be useful when available investment funds are limited Easy to understand and communicate Correct decision when evaluating independent projects What cashflows matter? Cash flows Incremental cash flows Taxes (after-tax) Inflation What is the basis of capital budgeting decisions? Base capital budgeting analysis on cash flow, not income. Cash flows does not equal earnings/accounting income. Examples of reconciling items: depreciation, amortization, deferrals and accruals. Incremental cash flows Sunk costs not relevant. Opp costs matter. Side effects matter : erosion and cannibalism - new product reduces sales; synergies + new product increases cash flows Overhead allocations Salvage value Changes in net working capital (difference b/w current assets and current liabilities). Returns have two components current income (e.g. interest or dividends) and capital gains or losses. arith vs geo geo < arith unless all returns equal arith : overly optimistic for long horizons geo : overly pessimistic for short Frequency distribution Small companies : higher avg return, wider array (variance of actual returns). Larger companies; lower avg return, narrow array of actual returns. Average stock returns and risk-free returns US debt is considered risk free because the gov can raise taxes to repay it Risk premium is the added return (over and above the risk free rate) resulting from bearing risk. Covariance product of deviations x probability of state Efficient set a graphical representation of a set of possible portfolios that minimize risk at specific return levels; and maximize returns at specific risk levels. section of the opportunity set above the min variance portfolio is the efficient frontier. correlation if p=1, no risk reduction is possible if p=-1, complete risk reduction is possible optimal portfolio with a risk-free asset with a risk free asset availabe and the efficient frontier identified, we choose the capital allocation line with the steepest slope. systematic vs unsystematic risk systematic (market or non-diversifiable risk): risk factors that affect a large number of assets; this is what is reward ed in market unsystematic (unique or asset-specific risk): risk factors that affect a limited number of assets Returns =exp and unexp return unexp=sys and unsys >>> total return=exp + sys + unsys diversification and portfolio risk; what can diversification get rid of? diversification can substantially reduce the variability of returns without an equivaelnet reduction in exp returns. diversification can get rid of unsys only. not sys. variance terms are essentialy diversified away, but not covar diversifiable risk risk that could be elim by combining assets into portfolio if hold only one asset, exposing self to risk that could be diversified. CAPM popularity over alternative models APT better than CAPM in explaining past returns, but not in predicting future expected returns Other modles are more complicated and require more inputs For most, expected returns generated by more sophisticated models are not sufficiently different to justify the trouble. Beta measures the responsiveness of a security to movements in the market portfolio. What sort of investor rationally views the variance of an individual security's return as the security's proper measure of risk? What sort of investor ratioanlly views the beta of a security as the securitys proper measure of risk? A rational, risk-averse investor. Above still applies, but, no longer interested in variance of each dinividaul security's return. Rather interested in contribution of an indiv security to variance of portfolio. Card Set Information Author: jtpdogyo ID: 139919 Filename: Finance Midterm Updated: 2012-03-06 23:41:28 finance Folders: Description: Finance Midterm 1 Show Answers: What would you like to do? Get the free app for iOS Get the free app for Android Learn more > Flashcards > Print Preview
https://www.freezingblue.com/flashcards/print_preview.cgi?cardsetID=139919
CC-MAIN-2017-13
en
refinedweb
Python. This may seem old hat to most of my readers, but this came up just tonight in IRC with someone new to Python, so I’d like to try my hand at explaining it. In Python-speak, positional arguments are any arguments that aren’t named when you call a function. That is, which variable they go to in the function is defined by their position — first, second, third — instead of their names. This is how most languages work, and is intuitive to most programmers, regardless of their backgrounds. Some functions, however, may find it useful — or even necessary — to accept any number of arguments. These wouldn’t normally be just default arguments, Python deals with those a different way. Instead, this would be used when a function can deal with any number of objects, which are typically all of the same class. For example, you might write a function that adds numbers together, but you want it to be called as sum(1, 2, 3) or sum(1, 2, 3, 5, 8, 13, 21). The key here is to define a single argument to accept all the numbers at once, and to tell Python what you’re trying to do. You can instruct Python what’s going on by adding a single asterisk ( *) to the beginning of the argument’s name. Consider the following example, which implements the sum function described above. def sum(*numbers): total = 0 for number in numbers: total += number return total There are better ways to do this, of course, but it illistrates the point. Essentially, the function receives a single argument, which is actually a tuple of all the positional arguments that were sent to it. That way, the function can just iterate over it and deal with them accordingly. Django uses this approach in many places, including when adding objects on the reverse side of ForeignKey relationships. >>> author.article_set.add(article1, article2, article3) For reference, most Python code uses the name args for this variable argument, though this is just a convention. You can use any name you like, as demonstrated in the example above, which uses the name numbers. Python also supports keyword arguments. Function definitions are the same, but you can call a function and identify arguments by name instead of by position. This allows them to be out of order, or to skip over some default arguments entirely, supplying just the arguments that are actually important. In order to support variable arguments for these, a different modifier is needed, and it works slightly differently. The ability to accept variable keyword arguments is denoted by two asterisks before the argument name. Other than that, the definition looks pretty much like the one for positional arguments. The biggest difference inside the function itself is the fact that the variable will be populated with a dictionary instead of a tuple. The keys will be the names provided for the arguments, and the values will be the values given to those names. def print_(**values): for name, value in values.items(): print "%s: %s" % (name, value) This is used very often in Django, from populating models to most of the database API. Again, for reference, the name typically assigned to this variable is kwargs, but could be anything you like. These options, of course, can be used together, even alongside traditional arguments, both required and those with defaults. The only real concern here is the order. Required arguments still come first, following by those with defaults. After that, the single-asterisk (positional) argument can be used, if needed, followed by the double-asterisk (keyword) argument, if necessary. def func(self, verbose=False, *args, **kwargs): But the good news doesn’t sop there! Functions can be called using these same modifiers to arguments, even if the function wasn’t defined to take arguments this way. Just like Python can take the arguments that were given and turn them into tuples and dictionaries, it can take tuples and dictionaries and turn them into standard arguments. >>> def add(a, b, c): ... return a + b + c ... >>> positionals = (1, 2) >>> keywords = {'c': 3} >>> add(*positionals, **keywords) 6
https://www.martyalchin.com/2007/nov/22/dynamic-functions/
CC-MAIN-2017-13
en
refinedweb
Details - Type: Bug - Status: Closed - Priority: Major - Resolution: Fixed - Affects Version/s: 5.0, 5.0.3, 5.0.4, 5.0.5, 5.0.6, 5.0.7 - - Component/s: tapestry-core - Labels:None Description Rather than HTML, I want my page to return XML (it's an Atom feed). I want this: <?xml version="1.0" encoding="UTF-8"?> <feed xmlns="">...</feed> But the response is being rendered like this: <feed>...</feed> There are two issues: - The XML declaration is missing - The Atom namespace is being stripped from the <feed> element Issue Links - is duplicated by TAPESTRY-1621 T5 should be able to produce well formed xml - Closed Activity - All - Work Log - History - Activity - Transitions This is the latest patch after I svn'd the latest tapestry, svn: 575535 I just found a bug in my patch. This is the fixed up patch. The patch against the latest svn. New patch adding some more features for XML support!! Proper XML element printing ( no more <br>, but only <br/> ), and support for CDATA elements. 1) Support for XMLMarkupModel. When when the page has: @Meta("tapestry.response-content-type=text/xml") a) MarkupModel now has a getDocumentHeader. i. It is meant for XMLMarkupModel to output the XML Header: <?xml versio="1.0"?> ii. It will also output the Charset if set: <?xml version="1.0" encoding="UTF-8"?> 2) Support for CDATA: a) CDATA is handled as just a normal Text. i. does not output CDATA sections, just what's inside. ii. but we don't really care, since the output is valid: "<![CDATA[A<A]]>" == "A<A" iii. expansions ($ ) are now supported inside of CDATA sections. b) CDATAToken has been removed, StartCdataToken, EndCdataToken added i. to mark when cdata sections were started and ended in the input template ii. in the case that we decide to do something with cdata outputting later, but probably will not. The latest patch set against svn: 591544 latest patch set agains svn: 591544 the latest patch, after the huge formatting change. There seems to be interest in this bug from users, but no attention from developers. Some people have asked me to put up my build on a repository so they can play with it and comment. So here is it: please, if you're interested, give it a whirl and email me. Fernando, sorry that you felt like the cold shoulder. As we discussed on the mailing list, things can happen in an odd order in an open source project. Priorities are not always quite top down, as I know I often work on lower priority things that I can accomplish in a set time, rather than more ambitious and over-arching bug fixes. This is a patch to include namespace support in Tapestry 5 ( against 5.0.6-SNAPSHOT ). It maintains name,qname,uri for elements and attributes coming from a template. Adds elementNS methods for writer and Element, but you can't add attributes with name,qname,uri (thought too many api changes). Though you can add attributes that use a prefix, as long as that prefix has been defined earlier. It then prints out all elements/attributes using their qnames, and also passes through all xmlns attributes. It currently swallows all xmlns attributes that point to the tapestry 5 schema.
https://issues.apache.org/jira/browse/TAPESTRY-1600
CC-MAIN-2017-13
en
refinedweb
). elasticsearch: group: logging zookeeper: group: zookeeper redis: group: redis size: m2.2xlarge The Rakefile We use this nodes.yaml file with rake to produce packer templates to build out new AMIs. This keeps me from having to manage a ton of packer templates as they mostly have the same features. require 'erb' require 'y This is used in conjunction with a simple erb template that simply injects the nodename into it. { ). sleep 30, wget sudo dpkg -i puppetlabs-release-precise.deb sudo apt-get update sudo apt-get remove ruby1.8 -y sudo apt-get install ruby1.9.3 puppet -y sudo su -c 'echo """. import os import boto from fabric.api import task class Images(object): def __init__(self, **kwargs): self.conn = boto.connect_ec2(**kwargs) def get_ami_for_name(self, name): (keys, AMIs) = self.get_amis_sorted_by_date(name) return AMIs[0] def get_amis_sorted_by_date(self, name): amis = self.conn.get_all_images(filters={'name': '{}*'.format(name)}) AMIs = {} for ami in amis: (name, creation_date) = ami.name.split(' ') AMIs[creation_date] = ami # remove old ones! keys = AMIs.keys() keys.sort() keys.reverse() return (keys, AMIs) def remove_old_images(self, name): (keys, AMIs) = self['AWS_ACCESS_KEY_ID'], aws_secret_access_key=os.environ['AWS_SECRET_ACCESS_KEY'] ) images.remove_old_images? What does this do to long-term statistics gathering? Presumably newly-built servers get new names, and over time this adds up to lots of servers. How do you graph web server CPU over the course of a year in this scenario? – Henry What we typically do is bucket these together under the applications they run. In the monitoring software we use we wind up with the average across the servers.
https://www.javacodegeeks.com/2013/07/immutable-servers-with-packer-and-puppet.html/comment-page-1/
CC-MAIN-2017-13
en
refinedweb
QuantLib_CompositeQuote man page CompositeQuote< BinaryFunction > — market element whose value depends on two other market element Synopsis #include <ql/quotes/compositequote.hpp> Inherits Quote, and Observer. Public Member Functions CompositeQuote (const Handle< Quote > &element1, const Handle< Quote > &element2, const BinaryFunction &f) inspectors Real value1 () const Real value2 () const Quote interface Real value () const returns the current value bool isValid () const returns true if the Quote holds a valid value Observer interface void update () Additional Inherited Members Detailed Description template<class BinaryFunction> class QuantLib::CompositeQuote< BinaryFunction >" market element whose value depends on two other market element - Tests the correctness of the returned values is tested by checking them against numerical calculations. CompositeQuote(3), isValid(3), value1(3) and value2(3) are aliases of QuantLib_CompositeQuote(3).
https://www.mankier.com/3/QuantLib_CompositeQuote
CC-MAIN-2017-13
en
refinedweb
Agile Web Development with Ruby on Rails 222 At just over 500 pages, Dave Thomas' new book manages to cover a lot of ground in a concise, readable manner. One problem at the outset -- the book is not finished. Knowing that the Ruby on Rails community has been chomping at the bit for morsels of information, Dave and David (DHH) have answered the call by releasing the forthcoming book early. "The book has not had a full technical edit, so it will contain errors. It has not been copy edited, so it will be full of typos. And there's been no effort spent doing layout, so you'll find bad page breaks, over-long lines, incorrect hyphenations, and all the other ugly things that you wouldn't expect to see in a finished book. We can't be held liable if you follow our instructions, expecting to create a Rails application, and you end up with a strange shaped farm implement instead. Despite all this, we think you'll enjoy it!" And enjoy it I did. The "Getting Started" section of AWDRoR provides a whirlwind overview of the Ruby on Rails' architecture. I found Rails to be very intimidating at first. You can't just cut-and-paste a couple lines of code like you can in PHP. Rails generates all kinds of directories and files, making it feel like your first trip to Disneyland -- you know there's fun to be had, but it's a big place and you don't know your way around. The reason for all this is because, in programming, short simple scripts are easy and simple, full blown Web applications are not. Many LAMP projects developing in perl/Python/PHP and any number of templating engines have started simple, but grown into unruly messes that are difficult to maintain. While trying to grok Ruby on Rails, topics like Model-View-Control and Object-Relational Mapping really don't stick at first. Add to the confusion that many of us are also struggling to learn Ruby and a RDBMS (such as MySQL; Rails works with others databases as well). The overview of Rails is necessary, but I found it to be a lot more helpful rereading it after completing the tutorial section. So if you read through this first section and feel lost like I was, just know that the material will become familiar to you and press on, because it gets a whole lot easier from here on in. I really enjoyed the Tutorial section, a narrative designing a shopping cart application for a customer. Dave says it best: "Does the world need another shopping cart application? Nope, but that hasn't stopped hundreds of developers from writing one. Why should we be different? More seriously, it turns out that our shopping cart will illustrate many of the features of Rails development. We'll see how to create simple maintenance pages, link database tables, handle sessions, and create forms. Over the next seven chapters, we'll also touch on peripheral topics such as unit testing, security, and making our pages look nice." Dave begins not with lofty design plans, but with a tool most real programmers use: napkin drawings. Many of us sit down over coffee with a customer and talk about what they need, sketching out ideas with paper and pencil, not some complex software planning tool. Each chapter in the tutorial section allows a story to unfold, where the customer works alongside the developer. Real life situations like changing direction or refactoring code are covered as each programming session is done. You really see why Rails is becoming so popular. It wasn't written by a team of programmers trying to hammer out an arbitrary list of features, but rather Rails was built around a real application (Basecamp). Pragmatic considerations such as developer time, feature creep, and maintenance issues have all been skillfully addressed in Rails. The tutorial reflects this, and at the same time it also gently, almost unknowingly, teaches principles as outlined in the agile manifesto. Some of the goals include: - Individuals and interactions over processes and tools - Working software over comprehensive documentation - Customer collaboration over contract negotiation - Responding to change over following a plan The third section, "Rails in Depth," dives into the inner workings of Rails. Components such as ActiveRecord, ActionController, ActiveView, and Web Services (Ajax) are all covered well. There are even chapters on securing and deploying your applications properly. These chapters, in conjunction with the API docs found on, give a full overview of Rails. Most helpful in this section are the notes and diagrams which help pull everything together. The appendices that cap off the book also provide the full tutorial source code, as well as a brief introduction to Ruby, the language that makes all the magic happen. In short, Rails is a brilliant architecture, and Agile Web Development with Ruby on Rails is a great book. I'm reluctant to point out its shortcomings as it's still in beta, but it's really hard for me to find much to complain about. It took me some time for the light to come on with Rails, but once it does, you see that Rails could not exist without Ruby, the language it's inextricably woven into. As Dave Thomas is quoted on, Rails is probably "the framework to break Ruby into the mainstream." Whether you believe the hype or not of "super productivity," "Ten times faster development," and "Better than anything else," Ruby on Rails is a great tool to add to your belt. In fact, I find myself using it exclusively for Web apps, and I catch myself using python and PHP less and less and Ruby more and more for my day to day programs. If you want to learn Ruby on Rails, Agile Web Development with Ruby on Rails is a great choice, and will probably be the definitive book on the subject. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page. Ruby is quite cromulent. (Score:4, Funny) Re:Ruby is quite cromulent. (Score:2) Although, I know for a fact Mr. Buzzwords doesn't like salt, so you could be right. (Disclaimer: Yes, I know what 'salt' really is...) Suggestions (Score:3) Re:Suggestions (Score:2) / javalobby-founder-ruby-on-rails-is-a-powerhouse/ [rubyonrails.com] Re:Suggestions (Score:3, Informative) Re:Suggestions (Score:2) Re:Suggestions (Score:3) You can use all the MoveableType posting tools to maintain content - or use the builtin Active Record, HTML based Admin Tool. It's pretty young, and has a few bugs in the XML-RPC interface. But, it was easy to customize and fix the XML-RPC bug. If you promise to be nice to my home DSL line: The Fermata [bardes.org] what is it going to do (Score:2) Not looking for flames, but explaining why you are writing a new app., noting what you are trying to do that current ones don't do, might be a good place to start in getting advice. Re:Suggestions (Score:2) Yet another CMS project? (Score:2) Sure why not, there are probably more linux distros than linux users; so why not more CMS projects than those who use them? I like the idea behind F/OSS. But sometimes it seems like, instead of using common code to work together on a project, F/OSS developers prefer to go in a thousand different directions all working on their own projects. Re:Suggestions (Score:2) Re:Suggestions (Score:2) Re:Suggestions (Score:2) Can you enlighten us as to why you are now a detractor? I like Plone and I wonder what would turn you off to it. Re:Suggestions (Score:2) $22.50 for the Beta .pdf, $43.75 for the real book (Score:3, Insightful) Your math sucks. (Score:3, Informative) In other words, the printed book is $21.25 if you take out the cost of the PDF. I purchased the combo the day after it was released and must say it's more than worth the price. Hard to believe (Score:2) Re:Hard to believe (Score:3, Insightful) Re:Hard to believe (Score:2) Um, isn't Ruby TMTOWTDI? Granted people gravitate towards certain idiomatic ways of doing common things, but there is certianly more than one one, etc. Books and tutorials are necessary to beat this into the heads of incoming developers. Hey, great sales pitch. "Buy this book so we can beat some sense into your ignorant newbie head!" Serioulsy, an issue with Rails is that it is a DSL on top of Ruby; knowing Ruby i Re:Hard to believe (Score:2) Can you name an environment where that isn't true? Python, you also have to learn Zope. Java, you learn Hibernate (or whatever). Of course you could just learn PHP & code everything from scratch. Then you wouldn't need to learn anything else. Of course, you get orders of magnitude l Re:Hard to believe (Score:2) Or, get this wacky idea: Just learn Ruby and code everything from scratch. Then you wouldn't need to learn anything else. And still get extra productivity. There are indeed many good things in Rails, things that work best by offering a handy sublayer for common tasks without occluding the essential Rubyness of the code. But there Re:Hard to believe (Score:3, Interesting) hmmm (Score:5, Interesting) I believe that not grokking MVC detracts from the value of your review, particularly with respect to your opinion of the architecture of Rails. Without understanding MVC, you can have no understanding of the design decisions they made, and as such, no qualified understanding of the architecture itself. This was my only concern. Thanks for the review! Re:hmmm (Score:3, Interesting) Yep, I admit fully I'm probably not qualified as others to review the book. But I can only give my opinion, and for a newbie who's played with other architectures and run into a big pile of messy code after Re:hmmm (Score:3) I hate to be cynical, but O'Reilly are planning their own Rails books this summer. Not that they would hype something they didn't believe in ... Re:hmmm (Score:2) I'm not putting you down for writing it - and I think it would be a useful thing to read on a blog, or besides other reviews on a site like Amazon - but that this will be the only review of the book on Slashdot makes it a very bad decision to list it, in my opinion. Re:hmmm (Score:2) In this case, it's good to know that I can probably pick up this book, and without knowing much about either Ruby or the MVC design pattern, implement a clean web application that will be maintainable and extensible. I can get expert reviews elsewhere. I wouldn't think of consulting Slashdot for them anyway. Wow (Score:5, Insightful) The appendices that cap off the book also provide the full tutorial source code, as well as a brief introduction to Ruby, the language that makes all the magic happen. In short, Rails is a brilliant architecture Agile Web Development with Ruby on Rails is a great choice, and will probably be the definitive book on the subject. The tutorial is probably the best I've ever read. God. That last one sounds like Napoleon Dynamite. A "beta" book, in PDF form, with typographical and technical errors, will probably be the "definitive book" about a brand-new framework? What a statement... Don't get me wrong - I haven't had much time to play with Rails, and as a web developer, I probably should, in order to keep my skills fresh. I may even like it. But this fervent, sycophantic praise - spurred on by the blogerati (flamesuit enabled) - seems premature, especially when there are many capable web application frameworks out there. In the future, it might even seem silly. After all, anyone want to admit to owning a copy of Oreilly's book about Slash? Re:Wow (Score:3, Informative) Many This book is written primarily by Dave Thomas who wrote pretty much the definitive book on Ruby and who was kind en Re:Wow (Score:2) It's been written by Dave Thomas who pretty much helped bring Ruby from Japan into the rest of the English speaking world, and DHH the guy who wrote Rails. It really doesn't matter if it is the best book in the future or not, only time will tell. Having said that, I'll stick by my statement. Re:Wow (Score:2) As for Rails itself, there's a resaon why so many smart people are fawning all over it. I STRONGLY recommend you try out Rails before you criticize. At least watch one of the demo videos on the homepage. Until you see it in action, you really can't really fathom how powerful it is. You Better than Java, or just easier? (Score:2, Interesting) So, my question Re:Better than Java, or just easier? (Score:3, Informative) The coding analysis: Ruby has a lot of semantic advantages over Java. It's easier to read, has less syntactic salt, and encourages t Re:Better than Java, or just easier? (Score:2) With Rub Re:Better than Java, or just easier? (Score:2) If you use Struts, Spring, etc., you configure every action in mulitple XML files. You map everything around so that the application knows what to do. This means you must define knowledge about your application's architecture in multiple places. It will be in code and in several config files. This gets complicated fast. With ROR, the application code is in a folder called "app Re:Better than Java, or just easier? (Score:2) A week ago I've agree with you btw. :) Re:Better than Java, or just easier? (Score:2) Well, I realize that the main focus of CS degrees these days is either Java or C#, so perhaps you haven't heard of languages like Smalltalk or Lisp. Yes, I tend to agree with what you're saying about PHP, but the same doesn't apply to Ruby. Ruby is a language that CS grads can love, it's fully OO like Smalltalk and has: * mixins * closures * anonymous code blocks (which makes it very easy to define your own domain specific languag Re:Better than Java, or just easier? (Score:2) Ruby is often described as being strongly dynamically typed or perhaps dynamically strongly typed (depending on your emphasis). Contrast with C which is statically weakly typed. What that means practically is that if you try to send a method to an object that doesn't respond to that method you're going to get a runtime error. Sure, you probably won't catch this at compile Dave Thomas is writing from the grave?!?! (Score:3, Funny) Talk about an active guy! He helps Colonel Sanders start a chicken business, then founds his own restaurant chain, then dies, THEN writes a book about Ruby on Rails? I'm clearly not doing enough with my life... Well, _that_ is blatantly obvious (Score:2) I am a Java developer, but... (Score:2) I have been using it for a few months, and although ActiveRecord is lacks a little of the flexibility of Hibernate, it is good enough for lots of database backed web applications. I like how easy it is to replace automatically generated scaffolding code with your own code, but the scaffolding code gets you started quickly. Also, Ruby is a fine programming language - it has just about totally replaced Python as my scripting language of choice. Using Ruby has also cut way And for the Perl folks out there... (Score:3, Interesting) Catalyst can be found at catalyst.perl.org [perl.org] or on CPAN [cpan.org]. Re:And for the Perl folks out there... (Score:2) Who wouldn't want to learn Ruby? Re:And for the Perl folks out there... (Score:2) Whereas Ruby doesn't yet... (Score:2) Rails posts prediction ... (Score:5, Insightful) Either, one, that Rails is so amazing that after you use it sex seems laughably trivial by comparison, even and especially you count the production value -- one can, after all, only have one child (on average) using sex, but with Rails, dude, I HAD TEN. Or, two, that Rails is no big deal, it's just another MVC re-think, heck I rolled one of those myself one afternoon a coupla years back, yeah it ruled but, you know, I'm really into that Java thing now. Besides, Rails is no good for BIG projects, for that you need Hibernate and a crane. So I'll post one for the middle-of-the-road. Rails rules. I love it. I've reimplemented, in a week-and-a-half, a fairly large application that took me two months to do with Python. It's not a fair comparison because with Python I used Webware but did everything, like user management and logging, with no starting point, and also the first time around I wasn't as familiar with the problem domain. With Rails I used the Salted Hash Login Generator [rubyonrails.com] which got the basics of my user login and management done in one fell swoop, an hour or two of work. I also re-used the view code from the Python app. But the rest of it was fun. I enjoyed it. Things were done quickly and the API is awesome. ActiveRecord is not Hibernate -- yes, Javapeople, we know, we know -- but it's good. It's really good and super easy. And while there's some magic going on behind the scenes with Rails, it's not hard to understand at all. That said, yes, if you're an online payroll system for IBM, Rails won't cut it. There are flaws, but for day-to-day stuff, not too many. It's updated very frequently, too. My only complaint is the ubermensch of Rails, Dave Heinemeier, who, while smart, is also all too aware of it, and frequently shoots his blog [loudthinking.com] off about topics which go beyond Web frameworks and into areas of either glib tech-prejudice or, at times, more subtle see-how-smart-I-am dorkposts -- the most insufferable species of Geek. Otherwise, I strongly encourage anyone to check Rails out. It's great and a *lot* of other frameworks in other languages could stand to pay attention to the innovations in Rails. These innovations aren't so much technical epiphanies, as they are the meeting of many good ideas in one place, along with enthusiastic support and a lot of glue. Ruby's fun, too. Check out, also, the frameworks from other languages which are shamelessly stealing from Rails: Subway [python-hosting.com] (Python) Catalyst [perl.org] (Perl) Re:Rails posts prediction ... (Score:2) Castle Project [castleproject.org] works on Cake [sputnik.pl] is a Rails-like framework for PHP, and Biscuit [ripcord.co.nz] is another. Lest the Java folks feel left out, there's also Trails [java.net]. Re:Rails posts prediction ... (Score:2) Slightly flawed in that it uses a constant salt, meaning once a set of rainbow tables have been generated all your passwords are easily compromised. I had a patch to it that supported non-salted, statically-salted and dynamically salted passwords (upgrading on auth) without changing table schemas beyond the default 38 bytes. I don't have it to hand unfortunately, but it's not hard to do; just use pack() to b64 the hash, put in some random bytes next to it seperated from the ha Re:Rails posts prediction ... (Score:2) Um, if sex is complex for you, I think you're doing something wrong. The basics are very simple... (Score:2) Perhaps it would have been more accurate (if clumsier) to say that the context surrounding sex can be complicated. Slashdot these days: (Score:4, Funny) R! O! R! ROOOOOAAAAR! RUBY ON RAILS! So fucking awsome I gotta, I've got it! Yeah! I've got a vision, people, a vision! We're gonna have Ajax and we're gonna put it on Rails and it's all gonna be like Lucy in the Sky with Rubys. Oh my gosh am I seeing some shit.... This is so awesome... (And now for the real thing watch this post being modded +10 Insightfull) Re:Slashdot these days: (Score:3, Insightful) Re:Slashdot these days: (Score:3, Funny) Re:Slashdot these days: (Score:2) RoR already does Ajax. There's a nice JavaScript library called Prototype that's integrated with RoR out of the box. I remember this level of hype surrounding many other things in the last 15 years; Linux, Apache, MySQL, Perl, PHP, Python, Java, JBoss, etc. The thing is most of them have lived up to the hype to some degree and now they're taken for granted. RoR is worthy of the hype. It eliminates bullshit in the web app stack. That means you, th Re:Slashdot these days: (Score:4, Funny) Random facts about Ruby On Rails: Ruby On Rails came up with 97% of the famous quotes from Napoleon Dynamite." The constellation 'Ruby On Rails' is made up by connecting every single star of the night sky. For a brief period in history, Ruby On Rails had stolen the letter F from the alphabet, that is why we have words such as Photo and Dr. Phil. Ruby On Rails' leg hair is harvested bi-monthly for use in fine Scandinavian carpets due to it's extreme strength, durability, and ability to ward off Russians. Pick any two consecutive digits of the number pi. Added up, they will always equal Ruby On Rails' age. On the third day God actually said, "Let there be France!" So Ruby On Rails killed him, became God, and uttered the now famous, "Let there be Light!" If one attempts to calculate the awesomeness factor of Ruby On Rails, cubed by the awesomeness of a badger divided by the awesomeness of ninja-pirates, one has the basis for the weapon that destroys the universe. Ruby On Rails: Rockin the bitches since 1863. When Ruby On Rails told the Microsoft Word paper clip to go away, it never came back. Ruby On Rails possesses Excalibur. In the criminal justice system, the people are represented by two separate yet equally important groups: Ruby On Rails, who investigates crime; and Ruby On Rails, who prosecutes the offenders. Ruby On Rails owns 90% of patents in the USPTO under false names. Both Lee Harvey Oswald and Ruby On Rails killed JFK. Oswald fired Ruby On Rails out of his rifle. Ruby On Rails penetrated JFK's head then exploded. For God so loved the world, that he gave his only son, and his son's favorite web development framework, Ruby On Rails. Whoeverso believes in his son, and programs web applications with Ruby On Rails, shall not perish, but have eternal life. Whoever forgoes Ruby On Rails will burn for all eternity. -John 3:16 (more or less) (with apologies... [4q.cc]) Ruby vs Java (Score:4, Insightful) I use Java (and parts of J2EE). Its too bad there's no truly free/open source runtime yet, but I'm sure that day will come soon. Another problem with Java is really too much choice. Sun's higher level solutions tend to be over engineered, so everyone and their uncle have designed their own complex framework for object persistance and session management, along with the mixed bag of JSRs. Thus you have Java advocates like Javalobby saying Ruby on Rails is a great framework, and all the best free CMSs are PHP. I am sure that Ruby on Rails makes it quick to put up a web site that supports database CRUD operations, but just like using Visual Basic to create forms, what do you have after that part is done? The reason I like Java is its maturity and community, as well as the rigidity of the language. Because of its multi vendor and open software support, you can find an open source or at worst free-as-in-beer library to do just about anything, and there's plenty of discussion about using all the components that exist. If you don't like Java's rigidity and verbosity, there are some nice improvements in JDK 1.5. From pre-1.5: void cancelAll(Collection c) { for (Iterator i = c.iterator(); i.hasNext(); ) { TimerTask tt = (TimerTask) i.next(); tt.cancel(); } } in 1.5: void cancelAll(Collection c) { for (TimerTask task : c) task.cancel(); } Or you can execute Groovy, Jython, JRuby, etc in a JVM, or alongside PHP with JSR 223. I don't think execution speed arguments against Java are accurate any more, especially when comparing it to languages such as Ruby, Python and PHP, and I find the memory requirements to be easily manageable in typical situations (Firefox, alas, takes much more memory on my workstation). Creating a working application in any environment is fairly easy for anyone with sufficient training and experience, but unless you're a rare master, once you are at that plateau of a working app and you need to change it, the maturity and popularity of your environment become very important for support. I think hands down Java is the winner here with great tools like Eclipse, with great refactoring support and where you know even in large projects what is broken as you're working on code, as well as the large community for support. The portability of Java is also very good, you can become very OS agnostic (unless you need to get into a few aspects such as multimedia). Anyway, sorry to wax on, and I'm always trying to find out what other environments offer, but I don't think Java is given enough credit. Re:Ruby vs Java (Score:2) def cancelAll TimerCollection.find(:id => params[:id]).tasks.each { |t| t.cancel } end Find a timercollection with a certain idea, then iterate through each of its tasks and cancel them. Ruby is more wordy than Java 1.5, but this aids readability. I don't think Java is given enough credit. You are probably right there. Re:Ruby vs Java (Score:2) def cancelAll(c) c.each { |t| t.cancel } end Re:Ruby vs Java (Score:2) Well, not exactly. Ruby is Rails. Most frameworks require a language, a templating engine and other plumbing to make it all work. Ruby on Rails is all Ruby. Even the templating are Ruby tags using ERb. You don't need to learn a "simple" templating language (which confuse web designers most of the time). There's no need to cheat by stuffing code in places where it shouldn't be (aka seperating the HTML fro Re:Ruby vs Java (Score:2) Re:Ruby vs Java (Score:2) If you're referring to EJBs, you don't have to use them, and most people don't. "J2EE" to many people does mean EJBs, but you can use JSP and JDBC for the equivilent of the PHP world, or Java classes and some kind of lightweight (Spring) to heavy (EJB) kit depending on what you need to do. So you're left with a language with lots of history and support, which isn't a bad thing. You may have more code, but I'd be quite surprised if that doesn't have some advantages as well as you need to make finger grained What (Score:2, Interesting) explain for the newcomer (Score:2) Zope (Score:3, Insightful) Re:Zope (Score:2) That's why they tend to stumble on the shoelaces rather often. "I want effects, and I want them fast. So what if it breaks in 1 out of 20 cases, it's good enough for me". A review of a beta book about an alpha framework. (Score:2) Don't let your Java get run out on a Rail just yet (Score:4, Insightful) 1.) Give up a decent IDE. The development tools are crap. Good luck trying to fire up a Ruby IDE, and set a breakpoint in WEBrick or Apache mod_ruby. You can't. Even if you hack around with the breakpoint command and include the debugger in the code you want to debug, the debugger is buggy and makes old skool commandline tools look sharp. 2.) Bet the farm that RoR only deals with you 80% problem, and your requirements won't break how it needs the ActiveRecord pattern. ActiveRecord looses it's luster once things get complex (see 3.) How do you like your OO style? If it's from the Jacobson camp, you're in for a treat! Objects are just dumb repeats of database tables 1st and foremost. Oh sure, you can add methods to do that OO thing if you must, but that's not the true essance. If you believe true object nature comes from behavior and not data, (ala Yordon & Coad) you won't be comfortable here. 4.) You're agile? You "get" test driven development? Give it up. RoR says you use a script. This hurts even more if you take issue with number 3 above. RoR rewards you for being database driven. Just define your scheme and all of your objects and a few controllers will get generated for you along with stub unit tests that pass by default. Just accept the required two line *Helper classes as well(yeah. TDD would have pushed me to create those). 5.) More on testing: hope you like having to rely on populating test data into your database. We kept hearing you can mock your persistance, but even some of the experts we talked to couldn't show us how (folks who are paid to work on a RoR product). Sure, folks said dependancy injection via Needle, but we couldn't find jack out about it. 6.)Speaking of database driven, that is a greenfield project you have with no legacy concerns and absolutely no complex O/R mapping requirements, you're starting RoR on right? No?!?! That's ok. just shoehorn RoR with updateable views or change your schema so that ID's are done the way ActiveRecords likes. That's no problem for your existing aps, is it? 7.)That had better be an OpenSource database you're using. It's not unheard of to "enjoy" a broken release for packages like ActiveRecord when the developers don't have access to Oracle or SQL Server. This happened to us and RoR was broken for about a week between releases in the 0.9 to 0.10 range. Yeah. That was a "release". Not CVS, not alpha or beta. Release. On the upside, we did patch ourselves, so "go OpenSource". If the Rails fanboys want to mod me down, have at it. I stand by my overall opinion. Keep in mind, I have no issue with Ruby itself. In fact, it stands to give Java a real run for it's money. RoR on the other hand, is immature and over-hyped at best, and a rat's nest of garbage at worst. Re:Don't let your Java get run out on a Rail just (Score:2) For IDE choices I personally use Eclipse with the Ruby plug-in now, although I have used FreeRIDE in the past and some other alternatives (like WideStudio, which is an IDE that is distribut WebObjects Developer Enjoying RoR (Score:2) I'm a WebObjects developer that is always looking for something new and RoR so far seems great. While there are clearly still some things that could be improved (especially in the ActiveRecord ORM), for being less than a year old this thing is VERY far along. The opposite of something like Struts, you can feel that RoR came from a real application instead of design-by-committe. Meh. (Score:2) Show me a big app that uses it? (Score:2) Re:Show me a big app that uses it? (Score:2) As for Ruby offering you nothing that you can't do with Python I would argue that you can do things simpler, more concise, and more logically in Ruby. The code is cleaner so if you are picking up the code for something you developed six months ago things should fall into place easier. IMHO Ruby == Prettier and more OO than Perl and Ruby == More powerful than Re:wasn't this just on here? (Score:2, Funny) Re:It's Hawt (Score:2) No, not really. Many authors have made prerelease versions of their books available. The main difference here is that people are required to commit $$ to the final product before it is actually ready. This is damn good marketing. Late in releasing your product? Fearful a competitor will get to market before you and steal your thunder (and revenue)? Sell a product that isn't finished, claim this is so Re:It's Hawt (Score:2) Defending Python (Score:2) Ruby doesn't really bring any significant advantages over python. Neither does Ruby-on-rails. It's just the new-kid-on-the-block enthusiasm that Ruby is enjoying ATM. Really, look at the current situation: Python hasn't replaced Perl as the most popular scripting language yet, except perhaps in the open source community. Even if Python and Ruby both stand at the "400% better than perl" point. What chance do you think Ruby, which is 5% Re:Defending Python (Score:3, Insightful) > advantages over python. Neither does > Ruby-on-rails. It's just the new-kid-on-the-block > enthusiasm that Ruby is enjoying ATM. As someone who coded in Ruby, Perl and Python today(!), I'm inclined to agree to some extent. Where both Ruby and Python are failing to replace Perl is with something comparable to CPAN. For example, this morning I was trying to parse a bunch of data out of HTML tables in Ruby, and tearing my hair out with frustration; I switc Re:Charge for Beta PDF and booK? (Score:3, Informative) Considering the normal cost of technical books, I think it's a sweet deal. These are not small books. Re:Charge for Beta PDF and booK? (Score:5, Informative) J/K (Score:2) Re:Charge for Beta PDF and booK? (Score:2) In all cases, you get the beta version PDF now, intermediate PDF releases, and the final PDF version. If you buy the book too, you get the book when published. Re:Since when does "huge" apply to Ruby? (Score:4, Informative) Re:Yeah, Sure... (Score:2) I purchased the beta-book half a week ago and I'm very impressed and pleased with the quality, let alone the mindset behind the decision to make it available early. Re:Since when does "huge" apply to Ruby? (Score:2) Apparently there were a few people who bought the book the first week-end... n dom/RailsBeta2.rdoc [pragprog.com] Re:why is slashdot obsessed with RoR? (Score:4, Insightful) Rails is a flexible framework that allows you to keep your house in order when coding Web applications and which encourages test driven development. Ruby, as a language, makes this all very easy as almost everything in Ruby is an object, so the syntax becomes intuitive. Some people tried to replicate RoR with PHP, and the syntax was vile and full of syntactic salt. Using Rails to develop Web apps, as opposed to, say, PHP, is like using a language that has garbage collection over one that does not. It removes another level of complexity and lets you focus on the important stuff. Meh. (Score:2) It is neat and all; I like the structure, and it looks fine for form-per-table sorts of toy apps, but I really don't see the advantage for anything more complicated. Re:Meh. (Score:2) If you have to ask, you don't need to know. I'd talk about predicates and lazy evaluation, but I think you're more concerned with your e-penis than the question at, um, hand. Everything about what you've written says "clueless small business developing little piles of shit." Great. So why are you wasting time with me when you could be writing Enterprise Software(tm) for the big boys? Run along, little man. I'm sure there's a marketing Syntax preferences are not facts. (Score:2) Re:why is slashdot obsessed with RoR? (Score:2) I had a look at Ruby on Rails, and what entices me most is the fact that for every part you need, you can use the same language. I did an application in Zope last year, and I really had to jump through hoops. It is not possible to apply consistency in syntax across all levels, and you must study Python, Zope Page Templates and DTML. Ruby on Rails does not seem to have this problem. Re:Thats fairly meaningless... (Score:2) You have clearly never used Rails. Testing is very specifically implemented in Rails, it's actually part of the "way" of doing things. I understand that RoR is a mature programming language in that it offers layers of abstraction. Get in line behind half a dozen other solutions. Re:still sound and fury, and a touch a flame to bo (Score:2) "it's a framework around Ruby." Which. Means. What. ? If you don't know what "framework" means in relating to coding, this is not the article, book, or topic for you. Right tool, for the right job, for the right (development) price. We can agree on that. This is why I use other languages in ancillary systems around some of my Rails ap Re:What exactly is "on rails"? Someone help me out (Score:2) More seriously... Rails [rubyonrails.org] is a rapid web application development framework. It's written in the Ruby language, hence the "Ruby on Rails." It abstracts things quite well, leaving you to worry about actually implementing the program logic (and site design) rather than managing database connections, writing getter and setter methods, sanitizing user input, and all that oh-so-fun stuff. If you've got QuickTime, the "Show, Do [nextangle.com] Re:What exactly is "on rails"? Someone help me out (Score:2) Re:does RR have .... (Score:2) Yes, you can store objects in a session context. Sample and gotchas here [rubyonrails.com] at the RoR wiki. Also is there something that sync's sessions from one server to another to support load balanced environments? More than something, there are numerous ways to support this. You can store you session data in temporary files on the server (the default), share it using Distributed Ruby, store it to the database for retrieval... So y Re:I Didn't Know What Ruby Was, But I Found Rails (Score:2) Re:I Didn't Know What Ruby Was, But I Found Rails (Score:2) The efficiency part I can understand, but having to declare self as first argument for every class method, for example, is far from elegant IMHO. Re:I Didn't Know What Ruby Was, But I Found Rails (Score:2) Re:I Didn't Know What Ruby Was, But I Found Rails (Score:2) See, now, I was just thinking the same about languages that care how much whitespace is at the start of a line. Re:Ruby on Rails?? Looks like ASP to me. (Score:2) Yes. Rails does a huge amount of stuff for you that you would have to do on your own with ASP or PHP. You can do a hell of a lot of stuff without ever, ever writing the equivalent of rs.MoveNext(). Re:Ruby on Rails?? Looks like ASP to me. (Score:2)
https://books.slashdot.org/story/05/06/16/1914251/__SLASHLINK__
CC-MAIN-2017-13
en
refinedweb
. using namespace System; using namespace System::Threading; ref class Example { private: static AutoResetEvent^ event_1 = gcnew AutoResetEvent(true); static AutoResetEvent^ event_2 = gcnew AutoResetEvent(false); static void ThreadProc() { String^ name = Thread::CurrentThread->Name; Console::WriteLine("{0} waits on AutoResetEvent #1.", name); event_1->WaitOne(); Console::WriteLine("{0} is released from AutoResetEvent #1.", name); Console::WriteLine("{0} waits on AutoResetEvent #2.", name); event_2->WaitOne(); Console::WriteLine("{0} is released from AutoResetEvent #2.", name); Console::WriteLine("{0} ends.", name); } public: static void Demo() { Console::WriteLine("Press Enter to create three threads and start them.\r\n" + "The threads wait on AutoResetEvent #1, which was created\r\n" + "in the signaled state, so the first thread is released.\r\n" + "This puts AutoResetEvent #1 into the unsignaled state."); Console::ReadLine(); for (int i = 1; i < 4; i++) { Thread^ t = gcnew Thread(gcnew ThreadStart(&ThreadProc)); t->Name = "Thread_" + i; t->Start(); } Thread::Sleep(250); for (int i = 0; i < 2; i++) { Console::WriteLine("Press Enter to release another thread."); Console::ReadLine(); event_1->Set(); Thread::Sleep(250); } Console::WriteLine("\r\nAll threads are now waiting on AutoResetEvent #2."); for (int i = 0; i < 3; i++) { Console::WriteLine("Press Enter to release a thread."); Console::ReadLine(); event_2->Set(); Thread::Sleep(250); } // Visual Studio: Uncomment the following line. //Console::Readline(); } }; void main() { Example::Demo(); } /* This example produces output similar to the following: Press Enter to create three threads and start them. The threads wait on AutoResetEvent #1, which was created in the signaled state, so the first thread is released. This puts AutoResetEvent #1 into the unsignaled state. Thread_1 waits on AutoResetEvent #1. Thread_1 is released from AutoResetEvent #1. Thread_1 waits on AutoResetEvent #2. Thread_3 waits on AutoResetEvent #1. Thread_2 waits on AutoResetEvent #1. Press Enter to release another thread. Thread_3 is released from AutoResetEvent #1. Thread_3 waits on AutoResetEvent #2. Press Enter to release another thread. Thread_2 is released from AutoResetEvent #1. Thread_2 waits on AutoResetEvent #2. All threads are now waiting on AutoResetEvent #2. Press Enter to release a thread. Thread_2 is released from AutoResetEvent #2. Thread_2 ends. Press Enter to release a thread. Thread_1 is released from AutoResetEvent #2. Thread_1 ends. Press Enter to release a thread. Thread_3 is released from AutoResetEvent #2. Thread_3 ends. */ System::MarshalByRefObject System.Threading::WaitHandle System.Threading::EventWaitHandle System.Threading::AutoReset.
https://msdn.microsoft.com/en-us/library/system.threading.autoresetevent(v=vs.90).aspx?cs-save-lang=1&cs-lang=cpp
CC-MAIN-2017-13
en
refinedweb
Code. Collaborate. Organize. No Limits. Try it Today. The figure shows an implementation of three rounded rectangles to make up this effect. Note that with the current version of the code, you can select the corners you want to round. My obsession with the incomplete nature of the System.Drawing.Graphics class in the .NET Framework's Base Class Library began with the article here on CodeProject, which was titled “Extended Graphics – An implementation of Rounded Rectangle in C#” back in 2003. I chose the class after looking at some code in Java. The System.Drawing.Graphics class would not do certain things that similar classes in other language APIs could do. Most of my attention and focus was fixated on the one feature that I truly required – that being the ability to produce rectangles with rounded corners. And, to be honest, I was successful in creating an ad-hoc class that could pertain to my needs. System.Drawing.Graphics Six years after my initial endeavours, C# has grown both in functionality and extensibility. Alas, we are left with the same predicament, that of not having a proper implementation of the same features I longed for back then. I enlisted the help of my readers to share with me suggestions of other missing functionality from the GDI+ System.Drawing.Graphics class. While writing this code, I tried to include as much of the missing functionality as I could based on those very suggestions. In fact, I didn't have to look further than a few forums scattered throughout the Internet to find some other things that needed to be implemented in this updated installment of the code I wrote six years ago. This time however, C# 3.0 provided me with an excellent and a fresh approach to writing my code. I eventually planned on using the one new feature I loved the most in C# 3.0 – the Extension Methods. Read ahead to know what extension methods are and what they really do. Sometimes you want to add methods to existing classes (or types), ones that you do have access to directly. For instance, you might wonder if there was a way of extending the .NET Framework's base classes like System.String (or in our case, the System.Drawing.Graphics class). Well you'd be amazed to know that there is a certain way of doing that. That's where the extension methods come handy. System.String Extension methods are a way of extending methods for an existing type without creating an extended type. So, for instance, one can add methods to the classes already present in the .NET Framework without, say, creating a subclass for that type. Throughout this article, I would try my best to explain how extension methods are used to extend the functionality of the already existing System.Drawing.Graphics class. Because this article is not about extension methods, I would refrain from going deeper into a proper conversation about extension methods. We will begin this article by looking at an example of an extension method. Below is a code snippet that tells the compiler to include the code to the System.String class. If you note carefully there are a few things about the code below that makes it different than your average code. First and foremost, notice the static class definition. Then notice the third line with the code this string. I'll try to explain what happens when you run this code in a moment. static this string static class StringExtension { public static string Reverse(this string text) { // Logic for string reversal goes here } } Once the code above has been complete, built and compiled, the mere inclusion of this file to your project would automatically add the created Reverse() method to the System.String class. The method can hence be called with every object of the System.String class as such: Reverse() ... string text = "HELLO WORLD"; Console.WriteLine( text.Reverse() ); // This line should now print: DLROW OLLEH ... Note that when I called the Reverse() method, I did not give it the parameter as was made obvious in Listing 1.1. The reasoning for it is quite straight-forward and simple. Remember the this string code – it turns out that code is the real deal when it comes to extension methods. That code itself tells the compiler to attach the method definition to a certain class. In our case, it was the System.String class. You can only realise the true potential of extension methods when you see, code like the following being made possible. ... string text = "ARUN".Reverse(); // This should put "NURA" in the variable 'text'. ... In the previous installments of this code, I created a new wrapper class for the System.Drawing.Graphics class and named it ExtendedGraphics. Being a wrapper class, it encapsulated all the functionality of the inherited parent with some added features. Below is a sample of how the process worked: ExtendedGraphics ... System.Drawing.Graphics g = this.CreateGraphics(); ExtendedGraphics eg = new ExtendedGraphics(g); eg.FillRoundedRectangle(brush, x, y, width, height, arcRadius); ... You couldn't create a rounded rectangle with the System.Drawing.Graphics class, so you had to wrap the ExtendedGraphics around it to provide the missing functionality. The only problem with the above code was the actual creation of a new object. People had to remember the exact method calls for the new class and had to unwillingly add the class to the project's using directives. Wouldn't it have been much simpler if one could do the following: using ... System.Drawing.Graphics g = this.CreateGraphics(); g.FillRoundedRectangle(brush, x, y, width, height, arcRadius); ... With the possibility of extending any .NET Framework base class, it suddenly struck with this idea of extending the current System.Drawing.Graphics class and I sat one day and did just that. When I was finished with the initial implementation, I couldn't have been happier with the result. The new implementation was not only faster, but was also much cleaner and readable. Reducing the overhead by not creating yet another object-wrapper and instead just using an extended version of an already optimised class certainly gave this version an appeal. Download the source zip file above and extract the GraphicsExtension.cs file into your project. Once the file has been included in your project, you are almost half-way through. To use the features of this class in your projects, simply add a using directive on top of every code file that requires the code like this: GraphicsExtension.cs using System; using System.Drawing; using Plasmoid.Extensions; // this is how the directive should appear ... Once you've added the directive as explained, all occurrences of the System.Drawing.Graphics file will automatically be extended with the brand new functionality in the code. Whenever you use an object of the class throughout your code, you will have the IntelliSense detect the new methods for you. But remember, the GraphicsExtension isn't the only class you get with this implementation. There are some fancy new things that you can do with your code. Let's look at some of them now. GraphicsExtension If you would rather play around with the output before diving deep into code, download the test suite and familiarise yourself with the concepts around the issues addressed by this project. Creating rounded rectangles could never have been much easier. The following code shows how to create a simple rounded rectangle with all the corners rounded. x y width height brush System.Drawing.Brush arcRadius Just like the FillRoundedRectangle(..) method, you can also use another method offered to create the border of a rounded rectangle only. Following is the code for the generation of a border, where g is an object of the System.Drawing.Graphics class and pen is an object of the System.Drawing.Pen class. FillRoundedRectangle(..) g pen System.Drawing.Pen ... g.DrawRoundedRectangle(pen, x, y, width, height, arcRadius); ... New feature added to VER 1.0.0.2 | If however, you want to round only a select corner edge or even more than one edges of the rectangle, you can specify the RectangleEdgeFilter enum for that very purpose. A RectangleEdgeFilter enum holds only six values: RectangleEdgeFilter None = 0 TopLeft = 1 TopRight = 2 BottomLeft = 4 BottomRight = 8 All = TopLeft|TopRight|BottomLeft|BottomRight Using these one can write code to produce the effect of partially round edges where only some of the edges or corners of the rectangle would be rounded. For instance, if I were to round only the TopLeft and the BottomRight corners, I would write the following code: TopLeft BottomRight ... g.FillRoundedRectangle(brush, x, y, width, height, arcRadius, RectangleEdgeFilter.TopLeft | RectangleEdgeFilter.BottomRight); ... New feature added to VER 1.0.0.4 | New to the code in version 1.0.0.4 is the inclusion of the FontMetrics class which works hand-in-hand with the System.Drawing.Graphics class to present you with vital information about each individual font. The following code demonstrates how easy it is to obtain some very information about a font, where font is an object of the System.Drawing.Font class. FontMetrics font System.Drawing.Font ... FontMetrics fm = g.GetFontMetrics(font); fm.Height; // Gets a font's height fm.Ascent; // Gets a font's ascent fm.Descent; // Gets a font's descent fm.InternalLeading; // Gets a font's internal leading fm.ExternalLeading; // Gets a font's external leading fm.AverageCharacterWidth; // Gets a font's average character width fm.MaximumCharacterWidth; // Gets a font's maximum character width fm.Weight; // Gets a font's weight fm.Overhang; // Gets a font's overhang fm.DigitizedAspectX; // Gets a font's digitized aspect (x-axis) fm.DigitizedAspectY; // Gets a font's digitized aspect (y-axis) ... With such vital information in grasp, UI developers amongst us can take full advantage of this method by aptly applying this class to determine font boundaries and overall structure of the laid-out text for controls based on DrawString(..) outputs. DrawString(..) This figures shows in detail the various font metrics two lines of text can exhibit. It is very crucial to get the details right if you are doing anything even remotely related to typography. In order to get that perfect feel and readability, one must know these basics. All sizes measured by FontMetrics class are calculated as ems. em Rectangle RectangleF DrawRoundedRectangle FillRoundedRectangle The above examples illustrates clearly the benefits of using extension methods to extend functionalities in an existing .NET class (in this case, the System.Drawing.Graphics class). This significantly reduces the time spent in creating and managing separate inherited objects. This article, along with any associated source code and files, is licensed under The Microsoft Public License (Ms-PL) using Plasmoid.Extensions; using System.Drawing.Drawing2D; public static GraphicsPath GenerateRoundedRectangle GraphicsPath p = new GraphicsPath(); Graphics g = this.CreateGraphics(); // using optional named parameter style for clarity p = g.GenerateRoundedRectangle ( rectangle : this.Bounds, radius : 10.0F, filter : GraphicsExtension.RectangleEdgeFilter.All ); this.Region = new Region(p); if (radius >= (Math.Min(rectangle.Width, rectangle.Height)) / 2.0F) return graphics.GenerateCapsule(rectangle); diameter = radius * 2.0F; SizeF sizeF = new SizeF(diameter, diameter); RectangleF arc = new RectangleF(rectangle.Location, sizeF); if ((RectangleEdgeFilter.TopLeft & filter) == RectangleEdgeFilter.TopLeft) path.AddArc(arc, 180, 90); else { path.AddLine(arc.X, arc.Y + radius, arc.X, arc.Y); path.AddLine(arc.X, arc.Y, arc.X + radius, arc.Y); } arc.X = rectangle.Right - diameter; if ((RectangleEdgeFilter.TopRight & filter) == RectangleEdgeFilter.TopRight) path.AddArc(arc, 270, 90); else { path.AddLine(arc.X + radius, arc.Y, arc.X + arc.Width, arc.Y); path.AddLine(arc.X + arc.Width, arc.Y, arc.X + arc.Width, arc.Y + radius); } arc.Y = rectangle.Bottom - diameter; if ((RectangleEdgeFilter.BottomRight & filter) == RectangleEdgeFilter.BottomRight) path.AddArc(arc, 0, 90); else { path.AddLine(arc.X + arc.Width, arc.Y + radius, arc.X + arc.Width, arc.Y + arc.Height); path.AddLine(arc.X + arc.Width, arc.Y + arc.Height, arc.X + radius, arc.Y + arc.Height); } arc.X = rectangle.Left; if ((RectangleEdgeFilter.BottomLeft & filter) == RectangleEdgeFilter.BottomLeft) path.AddArc(arc, 90, 90); else { path.AddLine(arc.X + radius, arc.Y + arc.Height, arc.X, arc.Y + arc.Height); path.AddLine(arc.X, arc.Y + arc.Height, arc.X, arc.Y + radius); } path.CloseFigure(); IMQ wrote:appreciative work , Wonderful job. 5 stars from me ... but i'm not working on C# 3.0 BillWoodruff wrote:well-done, Arun. MeasureString BillWoodruff wrote:...very complete, very easy to understand. BillWoodruff wrote:Hope you'll add more as you have time. Graphics.DrawRoundedRectangle(..) Graphics.FillRoundedRectangle(..) Md. Marufuzzaman wrote:It will be very nice if you include some more detail on your idea, how your code works, the crucial parts as well. General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/38436/Extended-Graphics-Rounded-rectangles-Font-metrics?msg=3133823
CC-MAIN-2014-23
en
refinedweb
Ber this behaviour necessary? Is there any work around, e.g., employing > the foreign function interface? There's unsafeInlinePerformIO (sometimes called inlinePerformIO), which is usable in certain cases, but be very careful. From Data.ByteString.Internal: {-# INLINE inlinePerformIO #-} inlinePerformIO :: IO a -> a #if defined(__GLASGOW_HASKELL__) inlinePerformIO (IO m) = case m realWorld# of (# _, r #) -> r #else inlinePerformIO = unsafePerformIO #endif But even this might not give you tail recursion, depending on the context. Cheers, Simon
http://www.haskell.org/pipermail/glasgow-haskell-users/2008-February/014249.html
CC-MAIN-2014-23
en
refinedweb
OK let’s make this clear. You can totally call SQL Functions, SQL SPROCS, or any other raw SQL statement and use it in EF Code Only. What you don’t get is automatic or fluent API configuration statements that perform this mapping work for you and by default, no tracking occurs on the materialized objects (though you can manually do that work yourself to attach it), and the functions are not composable in LINQ statements that run on the server. This is all capable because of the SqlQuery<T>() method exposed on the Database type in EntityFramework. In a nutshell it directly executes the provided SQL statement by marshaling it to the underlying provider connection and materializes the result set as types shaped as T. Mapping occurs EXACTLY as property names in the target type (as far as I know there’s no overriding this behavior) so that means if you have a property named CustomerID then you need a column with the exact same name in the SQL statement. Note: The returned type does NOT need to actually be a class mapped in EF DbSet<T> types. It can be any type with a default constructor and properties with get/set (public visibility is not required). So here are my simple recommendations: In example imagine I have a sproc, GetPeople, that takes no parameters and returns a result set of Id int, FirstName varchar(50), LastName varchar(50) (doesn’t matter what the actual table/views are). I have this class to represent the output [DebuggerDisplay("FirstName = {FirstName}, LastName = {LastName}")] public class Person { public Guid Id { get; set; } public String FirstName { get; set; } public String LastName { get; set; } } I could map the sproc the Person type with the following method on my DbContext based type public virtual IEnumerable<Person> GetAllPeople() var results = this.Database.SqlQuery<Person>("execute dbo.GetPeople"); return results; That’s right folks. That is it. Of course you can see that you can call anything yourself here including your own SQL select statements or what not. It’s all based on convention. I've put together the worlds easiest working example solution (including database) that you can use to run this example (and others such as parameterized requests and SQL Commands). I hope this clears up things for people out there. Thanks Jimmy for the quick response on this... I have been spinning my head to trying to find a straight forward example on how this is done.... there are bits and pieces on Bing but no unified example like this one... you've simplified it just perfectly. Love this feature in EF, makes coding super fast and fun!
http://blogs.msdn.com/b/schlepticons/archive/2012/02/16/yes-you-can-execute-sprocs-with-ef-fluent-api-code-only.aspx
CC-MAIN-2014-23
en
refinedweb
[Brandon] > We are running a somewhat customized version of Jython and I am trying > to dynamically load a jar from the file system and import a class. I > do: > > sys.path.append(jar_file) > import com.vmware.vim25.mo.ServiceInstance > > This throws an ImportError saying it cannot import the name > ServiceInstance. Two suggestions. 1. What is the access modifier on ServiceInstance? Is it public, protected or private? 2. With the recent classloader changes, I'm not sure if this is still necessary, but in order to use jars dynamically, it used to be case that adding the jar to sys.path was not enough. You also have to to sys.add_package(packageName). For example, for your requirement, you would need to do import sys sys.path.append(jar_file) sys.add_package("com.vmware.vim25.mo") Or perhaps the root of the class hierarchy, i.e. sys.add_package("com.vmware") or sys.add_package("com.vmware.vim25") This issue is documented in the modjy documentation, under "Using external java packages" HTH, Alan. View entire thread
http://sourceforge.net/p/jython/mailman/message/26355865/
CC-MAIN-2014-23
en
refinedweb
If you have ever worked on an application that displayed large amounts of data, one of the cornerstones of your application was probably a DataGrid control. We have provided four .NET DataGrids over the years, two for ASP.NET and two for Windows Forms, but until now Silverlight and WPF were left out of the party. At MIX 2008 we shipped the first preview of the Silverlight DataGrid and a preview of it in WPF was also shown. Now that it is out there people want to know how to use it. If you are one of those people, then you have come to the right place. Here's a quick guide on how to get up and running with a Silverlight DataGrid. Start a new Silverlight Application as outlined in my previous post. When given the option, choose the default "Add a new ASP.NET Web project" option. If everything went smoothly in the previous step, your project should be loaded and opened to Page.xaml. Now just find the DataGrid on the Toolbox and drag it into the root layout Grid named "LayoutRoot". This does a few things behind the scenes: xmlns:data="clr-namespace:System.Windows.Controls; assembly=System.Windows.Controls.Data" <data:DataGrid></data:DataGrid> If you are the type of the person who likes to see things working after each step, feel free to F5 (choose the option to allow debugging in the popup) and take in the awesome sight that is an empty DataGrid. Not much here, so lets fill it with something. The way to make a DataGrid interesting is by giving it some data. This is done through the DataGrid's ItemsSource property. This is same property that other controls in WPF and Silverlight, such as ListBox, use to specify where they will get their data. The one difference here is that you cannot place arbitrary content in it and have it create a collection for you. Instead you need to provide it a collection of anything that implements IEnumerable such as a List or ObservableCollection. The ItemsSource can be specified inline in XAML such as: <data:DataGrid x: <data:DataGrid.ItemsSource> <!--Something that implements IEnumerable --> </data:DataGrid.ItemsSource> </data:DataGrid> However, it is more commonly set in code behind, which is what we will do in this example. Step 2 A: Name the DataGrid and build Before we go to the code behind you will want to be sure to give the DataGrid a name such as "dg". Also be sure to build so that you can reference the DataGrid in code: <my:DataGrid x:</my:DataGrid> Step 2 B: Create and Set the Items Source Now that the DataGrid is ready to have its ItemsSource set, go to the Page's constructor located in the code behind file for Page.xaml (A handy shortcut to do this from within Page.xaml is F7) and add the following line below InitializeComponent: C# public Page() { InitializeComponent(); dg.ItemsSource = "H e l l o W o r l d !".Split(); } VB Public Sub New() InitializeComponent() dg.ItemsSource = "H e l l o W o r l d !".Split() End Sub (If you get the build error: "The name 'dg' does not exist in the current context" with the code above be sure to build a second time so that the name has a chance to propagate) One of the easiest ways to generate an IEnumerable collection is String.Split. When the resulting array is set as the ItemsSource of the DataGrid a column will be automatically generated since AutoGenerateColumns is true. When you run the application, it will look like this: This is a little better, but so far this could be done with a ListBox. Lets add some more complicated data so that we actually need to use a DataGrid. Add a new class to your Silverlight project (not the Web project) and name it "Data". Then add a few properties to bind to. If you are using C#, you can use the great 3.0 Automatic Properties feature. public class Data { public string FirstName { get; set; } public string LastName { get; set; } public int Age { get; set; } public bool Available { get; set; } } Public Class Data Private _firstName As String Private _lastName As String Private _age As Integer Private _available As Boolean Property FirstName() As String Get Return _firstName End Get Set(ByVal value As String) _firstName = value End Set End Property Property LastName() As String Get Return _lastName End Get Set(ByVal value As String) _lastName = value End Set End Property Property Age() As Integer Get Return _age End Get Set(ByVal value As Integer) _age = value End Set End Property Property Available() As Boolean Get Return _available End Get Set(ByVal value As Boolean) _available = value End Set End Property End Class Once the Data class is defined, it can now be used to provide data for the DataGrid. Go back to the code behind file for Page.xaml and replace the previous ItemsSource assignment with the following. A useful trick here is to use the C# and VB 3.0 Object Initializer feature to initialize the Data objects as we add them to the List. public Page() { InitializeComponent(); //dg.ItemsSource = "H e l l o W o r l d !".Split(); List<Data> source = new List<Data>(); int itemsCount = 100; for (int i = 0; i < itemsCount; i++) { source.Add(new Data() { FirstName = "First", LastName = "Last", Age = i, Available = (i % 2 == 0) }); } dg.ItemsSource = source; } Public Sub New() InitializeComponent() 'dg.ItemsSource = "H e l l o W o r l d !".Split() Dim Source As List(Of Data) = New List(Of Data) Dim ItemsCount As Integer = 100 For index As Integer = 1 To ItemsCount Source.Add(New Data() With _ { _ .FirstName = "First", _ .LastName = "Last", _ .Age = index, _ .Available = (index Mod 2 = 0) _ }) dg.ItemsSource = Source End Sub When you run this you will notice that columns are created for you. This is because auto-generation takes over, using reflection to create a column for each property in Data, setting the column header to the name of the property, and choosing default column types based on the property type. For instance the Available column is a DataGridCheckBoxColumn. (If you did not want this behavior, but rather wanted to choose your own columns, you can do this by setting the DataGrid's AutoGenerateColumns property to false. For information on choosing your own columns see my post on Defining Columns for a Silverlight DataGrid) The easiest way to customize the DataGrid is through a variety of properties. The other two ways are through Styles and Templates which will be covered in future posts. Some of the most useful properties for customization are: GridLinesVisibility & HeadersVisibility These properties are enumerations that control what gridlines and headers are displayed. RowBackground & AlternatingRowBackground These properties are shortcuts to setting the background color for both rows and alternating rows. ColumnWidth & RowHeight These properties set the default column width and default row height. IsReadOnly & CanUserResizeColumns These properties control if the end user can edit the data in the grid and if the columns can be resized. For instance if you set the following properties to the following values: <data:DataGrid x: </data:DataGrid> You would get this: Now that you have the basics, enjoy using the DataGrid. Next time I'll go into how to explicitly define and customize columns instead of using auto generation. Also, you can read more about the features that the Silverlight 2 DataGrid has to offer...
http://blogs.msdn.com/b/scmorris/archive/2008/03/21/using-the-silverlight-datagrid.aspx
CC-MAIN-2014-23
en
refinedweb
InstanceContextMode Enumeration Specifies the number of service instances available for handling calls that are contained in incoming messages. Assembly: System.ServiceModel (in System.ServiceModel.dll) The System.ServiceModel.InstanceContext manages the association between the channel and the user-defined service objects. Use the InstanceContextMode enumeration with the ServiceBehaviorAttribute.InstanceContextMode property to specify the lifetime of the InstanceContext object. can create a new InstanceContext object for every call, every session or specify that the InstanceContext object is bound to a single service object. For a working example, see the Instancing Sample. The Single value specifies that a single InstanceContext object should be used for the lifetime of the service. The following code illustrates how to set the InstanceContextMode for a service class: // Service class which implements the service contract. [ServiceBehavior(InstanceContextMode=InstanceContextMode.PerSession)] public class CalculatorService : ICalculator { public double Add(double n1, double n2) { return n1 + n2; } public double Subtract(double n1, double n2) { return n1 - n2; } public double Multiply(double n1, double n2) { return n1 * n2; } public double Divide(double n1, double n2) { return n1 /.
http://msdn.microsoft.com/en-us/library/vstudio/system.servicemodel.instancecontextmode(v=vs.100).aspx
CC-MAIN-2014-23
en
refinedweb
Planning for a Standard Exchange Organization Applies to: Exchange Server 2007 SP3, Exchange Server 2007 SP2, Exchange Server 2007 SP1, Exchange Server 2007 Topic Last Modified: 2008-03-17 Of the four defined organizational models for Microsoft Exchange Server services, except for the Edge Transport server,. An Exchange organization with all of the previously listed characteristics is considered a standard Exchange organization. Standard Exchange organizations can also optionally include one or more Edge Transport Servers. Historically, deploying a dedicated Active Directory site for Exchange services has been a recommended best practice. This optimization partitioned the global catalog servers for Exchange and Active Directory replication, a strategy that is typically used to mitigate performance issues that arise from using a common collection of domain controllers for Exchange and normal user, application, and logon activities. In some situations when dedicated Active Directory sites are used, Exchange servers in those Active Directory sites are no longer considered to be in the routing path. This is usually the case when the Exchange site is subordinate to an Active Directory replication hub site via a single IP site link. There are several ways to address this issue including the placement of a Hub Transport server in the replication site or combining the sites. We recommend that you introduce a new IP site link to bring the dedicated Active Directory site into the back-off routing path. One way to do this is to introduce new IP site links, which cause the site to be an intermediate site between other Active Directory sites with Exchange servers. On these new IP site links, Exchange override costs are created to identify the preferred route for message flow. The override cost will not affect Active Directory replication if the site cost is such that it is not a low cost route for Active Directory replication. Another method is to introduce new IP site links, which place the dedicated Active Directory site between other sites with Exchange servers and then eliminate the existing site links. This method will not affect Active Directory replication to any branch offices but will change the Active Directory replication path for the dedicated Active Directory site. The standard Exchange organization is any Exchange organization that is not simple, large, or complex. In the simplest form, this topology includes a single Active Directory site definition per SDL and it also contains a single point of egress to the Internet. The following figure illustrates one example of a standard Exchange organization. As you can see in Figure 1, the Woodgrove Bank topology includes two Active Directory sites connected by an IP site link. In this example, each SDL is responsible for providing minimum dependent services, such as name resolution and directory services using resources deployed on the local LAN. In addition, there are multiple Hub Transport servers and Edge Transport servers, and the Unified Messaging server is co-located with each Mailbox server. During the planning phase of your deployment, and before you deploy any Exchange 2007 servers in a standard Exchange organization, we recommend that you consider the following points: - The single forest option offers the following advantages: - Provides the richest set of mail system features - Allows for a streamlined administrative model - Takes advantage of an existing Active Directory structure - Uses existing domain controllers and global catalog servers - Does not require provisioning or synchronization with other forests - An increase in the number of Exchange SDLs is generally accompanied by an overall increase in the number of mailboxes and an increased dependence upon reliable mail delivery. To meet these requirements, we recommend that you install multiple Edge Transport servers to address external mail flow requirements and multiple Hub Transport servers to address internal mail flow requirements. The requirement for multiple Hub Transport servers will not only be to service Mailbox servers from the immediate location, but will also likely include hub-to-hub communication across locations. - When Exchange servers are hosted across multiple Active Directory sites, directory replication latency becomes a consideration. Directory replication between Active Directory sites occurs much less frequently than it does between domain controllers within an Active Directory site. The actual cross-site replication interval cannot be predicted because this is configured according to the directory service administrator's design requirements. The replication latency across Active Directory sites is generally measured in fractions of or entire hours and continues to increase as the number of Active Directory sites increases. For more information about Active Directory replication within and between Active Directory sites, see Replication within a site, Replication between sites, and How the Active Directory Replication Model Works. - Deployment of Exchange 2007 server roles that respect network design assumptions is required to a much greater extent than with the simple Exchange organization. - Active Directory site and subnet mapping becomes critical for Exchange 2007 to function normally. - In this topology, although the Exchange organization is distributed across multiple physical locations, the external Simple Mail Transfer Protocol (SMTP)-specific and client protocol-specific namespaces are common across the locations. To provide resiliency and reliability of external services, and because in these environments, the network requirements for Internet connectivity become more stringent, we recommend that you implement a true perimeter network when deploying a standard Exchange organization. In addition, to achieve even higher security, we recommend that you use dissimilar firewall products on inner and outer firewalls, so that an attacker cannot use the same techniques on inner and outer firewalls to penetrate the internal network. For example, if you use Microsoft Internet Security and Acceleration (ISA) Server on the inner firewall, use a non-Microsoft product on the outer firewall, or vice versa. - When deploying a standard Exchange organization, providing high availability deployment options becomes a existing servers. You must add one or more Exchange 2007 servers to your existing organization, move mailboxes and other data to Exchange 2007, and then remove the Exchange 2003 or Exchange 2000 server from the organization. For more information about deploying and transitioning to a standard Exchange 2007 organization, see Deploying a Standard Exchange Organization.
http://technet.microsoft.com/en-us/library/bb124367(v=exchg.80)
CC-MAIN-2014-23
en
refinedweb
Type Promotion (Visual Basic) When you declare a programming element in a module, Visual Basic promotes its scope to the namespace containing the module. This is known as type promotion. The following example shows a skeleton definition of a module and two members of that module. Within projModule, programming elements declared at module level are promoted to projNamespace. In the preceding example, basicEnum and innerClass are promoted, but numberSub is not, because it is not declared at module level. The effect of type promotion is that a qualification string does not need to include the module name. The following example makes two calls to the procedure in the preceding example. In the preceding example, the first call uses complete qualification strings. However, this is not necessary because of type promotion. The second call also accesses the module's members without including projModule in the qualification strings. If the namespace already has a member with the same name as a module member, type promotion is defeated for that module member. The following example shows a skeleton definition of an enumeration and a module within the same namespace. In the preceding example, Visual Basic cannot promote class abc to thisNameSpace because there is already an enumeration with the same name at namespace level. To access abcSub, you must use the full qualification string thisNamespace.thisModule.abc.abcSub. However, class xyz is still promoted, and you can access xyzSub with the shorter qualification string thisNamespace.xyz.xyzSub. If a class or structure inside a module uses the Partial (Visual Basic) keyword, type promotion is automatically defeated for that class or structure, whether or not the namespace has a member with the same name. Other elements in the module are still eligible for type promotion. Consequences. Defeat of type promotion of a partial definition can cause unexpected results and even compiler errors. The following example shows skeleton partial definitions of a class, one of which is inside a module. In the preceding example, the developer might expect the compiler to merge the two partial definitions of sampleClass. However, the compiler does not consider promotion for the partial definition inside sampleModule. As a result, it attempts to compile two separate and distinct classes, both named sampleClass but with different qualification paths. The compiler merges partial definitions only when their fully qualified paths are identical. The following recommendations represent good programming practice. Unique Names. When you have full control over the naming of programming elements, it is always a good idea to use unique names everywhere. Identical names require extra qualification and can make your code harder to read. They can also lead to subtle errors and unexpected results. Full Qualification. When you are working with modules and other elements in the same namespace, the safest approach is to always use full qualification for all programming elements. If type promotion is defeated for a module member and you do not fully qualify that member, you could inadvertently access a different programming element.
http://msdn.microsoft.com/en-us/library/xz7s1h1x.aspx
CC-MAIN-2014-23
en
refinedweb
SFML 2.0 Game Config file #1 Members - Reputation: 176 Posted 06 November 2012 - 08:09 PM Does any one have any tutorials or example scripts which uses SFML 2.0 where you can load data from a text file to use in a script? Can't seem to find any examples out there for it. #2 Crossbones+ - Reputation: 16667 Posted 06 November 2012 - 11:06 PM (I'm assuming you're using C++, since you didn't mention what language you are using) Also, you say, "use in a script", but your thread's title says, "game config file". A script and a config file are two different things. A scripting language is a type of programming language, and it'd be "ran"/"executed" and do things like call functions. A config file is usually static data that is "read" into the program and then used to initialize some variables in the program's code. A script file can be a config file (though it's unusual), but not vice-versa. Could you give a more specific example of how you'd like this config file or script to look like, and what it'd do? >>IMAGE<< #3 Members - Reputation: 176 Posted 06 November 2012 - 11:15 PM I would make a class which loaded a file called Settings.txt Example data of this file: Name = Dave MaxFPS = 60 So then in my main function i call load setting class then i have all the information available to use so for example: Config["Name"] would equal Dave Config["MaxFPS"] would equal 60 I started an attempt at it here, this is in my settings.h, i found this on the internet but its quite ugly and not totally easy to understand how it works. (Granted its not working yet). [source lang="cpp"]<!-preserve.newline-->#include <fstream><!-preserve.newline-->#include <cctype><!-preserve.newline-->using namespace std;<!-preserve.newline--><!-preserve.newline-->// trim from start<!-preserve.newline-->static inline std::string <rim(std::string &s) {<!-preserve.newline--> s.erase(s.begin(), std::find_if(s.begin(), s.end(), std::not1(std::ptr_fun<int, int>(std::isspace))));<!-preserve.newline--> return s;<!-preserve.newline-->}<!-preserve.newline--><!-preserve.newline-->// trim from end<!-preserve.newline-->static inline std::string &rtrim(std::string &s) {<!-preserve.newline--> s.erase(std::find_if(s.rbegin(), s.rend(), std::not1(std::ptr_fun<int, int>(std::isspace))).base(), s.end());<!-preserve.newline--> return s;<!-preserve.newline-->}<!-preserve.newline--><!-preserve.newline-->// trim from both ends<!-preserve.newline-->static inline std::string &trim(std::string &s) {<!-preserve.newline--> return ltrim(rtrim(s));<!-preserve.newline-->}<!-preserve.newline--><!-preserve.newline-->std::map<string,string> loadSettings(){<!-preserve.newline--><!-preserve.newline-->ifstream file("settings.txt");<!-preserve.newline--> string line;<!-preserve.newline--><!-preserve.newline--> std::map<string, string> config;<!-preserve.newline--> while(std::getline(file, line))<!-preserve.newline--> {<!-preserve.newline--> int pos = line.find('=');<!-preserve.newline--> if(pos != string::npos)<!-preserve.newline--> {<!-preserve.newline--> string key = line.substr(0, pos);<!-preserve.newline--> string value = line.substr(pos + 1);<!-preserve.newline--> config[trim(key)] = trim(value);<!-preserve.newline--> }<!-preserve.newline--> }<!-preserve.newline--> return (config);<!-preserve.newline-->}[/source] In my main.cpp i have yet to work out how i retrieve back the information - im not sure i have the correct data type for the function for a start. Also its not in a class. #4 Members - Reputation: 1058 Posted 07 November 2012 - 06:33 AM #5 Crossbones+ - Reputation: 3575 Posted 07 November 2012 - 07:06 AM That said, the function simply returns an std::map<std::string, std::string> (see documentation). A simple usage would be: typedef std::map<std::string, std::string> SettingsType; SettingsType settings = loadSettings(); SettingsType::const_iterator it = settings.begin(); SettingsType::const_iterator it_end = settings.end(); for (; it != it_end; ++it) std::cout << "Key '" << it->first << "' maps to value '" << it->second << "'\n"; // alternatively: std::cout << "Value for key 'Name': '" << settings["Name"] << "'\n";The first code fragment will output all key/value pairs which were read from the config file. The alternative version will print the value corresponding to the key 'Name' (or an empty string if there is no such key in the map). #6 Crossbones+ - Reputation: 2510 Posted 07 November 2012 - 11:30 AM ---(Old Blog, still has good info): 2dGameMaking ----- "No one ever posts on that message board; it's too crowded." - Yoga Berra (sorta) #7 Members - Reputation: 176 Posted 07 November 2012 - 02:35 PM
http://www.gamedev.net/topic/634037-sfml-20-game-config-file/
CC-MAIN-2014-23
en
refinedweb
This post is for those of you interested in learning the basics behind WordprocessingML. That’s the schema that we built for Word 2003. You can save any Word document as XML, and we will use this schema to fully represent that document as XML. The new default XML format for Word 12 is going to look very similar to the WordprocessingML schema in Word 2003. The big differences are really around the use of ZIP as a container, and breaking the file out into pieces so that it’s no longer one large XML file. If you are interested in the Office 12 formats, it would be really valuable to first get familiar with the XML formats from Office 2003. Over the coming months I’ll provide more details about the 12 formats, but for the time being, I would suggest learning WordprocessingML. This post will serve as a simple introduction. If your first exposure to Word’s XML schema came from taking an existing document and saving it out using the XML format, you probably had a bad experience. First off, we don't pretty print our files, so if you opened it in a text editor you probably had no chance of making anything out. We also save out a processing instruction (I'll post more on this later) that tells IE and the shell that it's a Word XML file. This means that if you try to open the file in IE to get their XML view, it instead will launch Word. The other issue is that in Word, we maintain all sorts of information about the files that you may not care about. As a result , there are a ton of XML elements saved out that can at first make the file itself look a bit intimidating. This is the difference between a full featured format, and one that isn't. We can't lose any functionality by moving to these XML formats, so as a result, we have to be able to represent everything as XML. I will show in future posts that it's also possible to save into a non-full featured format by using XSLT on the way out. This would allow you go get a simpler file when you save, but it has the side effects of losing some functionality. That's why we would never do a non-full featured format as a default. Instead it's an optional thing. Word's XML format is actually fairly simple if you're only trying to do simple things. You only need to expose yourself to the complex side if you're trying to do something more complex. Often times, the functionality of a feature in Word is extremely complex, so as a result, the representation as XML of that feature is also complex. In future posts I'll drill into some of the areas where people have had more problems and try to better explain the mapping from the internal feature to the XML representation. For now though, let's just make a simple Word document. Just like in our simple Excel Example, the first thing we need to do is to create the root element for the Word document. The root element is "wordDocument", and the root namespace is "". So, we should start with the following in our XML file: <?xml version="1.0"?><w:wordDocument xmlns:</w:wordDocument> There are three things that we just did. The first was declaring at the top that it's an XML file following the 1.0 version of the W3C XML standard (<?xml version="1.0"?>). The second was that we declared that the "w:" prefix maps to the Word namespace (xmlns:w=""). And the third thing we did was to create the root element wordDocument in the Word namespace (<w:wordDocument>). OK, so we have a skeleton document, but there is nothing in it yet. Similar to HTML, the content of the Word document is contained within a "body" tag. Within the body tag, we can have paragraphs and tables. Let's also create a paragraph element, so that our file now looks like this: <?xml version="1.0"?><w:wordDocument xmlns: <w:body> <w:p> </w:p> </w:body></w:wordDocument> We now have a Word document with one empty paragraph Since this is just a simple introduction, let's keep it that way, and make this into a "hello world" example. Internally in Word, we assign formatting to text by breaking everything in the document into a flat list of runs. Each run then has a set of formatting properties associated with it. We do the same in WordprocessingML. A paragraph is made up of one or more runs of text. So, to make this Word document say "hello world", we need to add a run tag and a text tag inside our paragraph. The "hello world" text will then go inside that text tag: <?xml version="1.0"?><w:wordDocument xmlns: <w:body> <w:p> <w:r> <w:t>Hello World</w:t> </w:r> </w:p> </w:body></w:wordDocument> Go ahead and open that file in Word, and you'll see your text. Not too exciting yet, but it's a start. For the last part, let's make the text bold. As I already mentioned, all text in a word document is stored as a collection of runs with properties associated with them. We already created one run of text in that first paragraph, but it just used the default formatting. Let's add one more tag (<w:rPr>) tag inside of that run which allows us to specify properties for that text: <?xml version="1.0"?><w:wordDocument xmlns: <w:body> <w:p> <w:r> <w:rPr> <w:b/> </w:rPr> <w:t>Hello World</w:t> </w:r> </w:p> </w:body></w:wordDocument> Now we've said that that run of text has bold formatting (<w:b/>) applied to it. Not the most exciting example, but we have to start somewhere. In later posts we'll go into how to create a more complicated set of formatting using multiple runs of text, as well as working with lists, tables, images, etc. It's a bit different that other document formats out there, so I want to step through everything carefully. -Brian
http://blogs.msdn.com/b/brian_jones/archive/2005/07/05/intro-to-word-xml-part-1-simple-word-document.aspx
CC-MAIN-2014-23
en
refinedweb
XamlObjectWriter.WriteNamespace Method Defines a namespace declaration that applies to the next object scope or member scope. Namespace: System.XamlNamespace: System.Xaml Assembly: System.Xaml (in System.Xaml.dll) A namespace declaration can be written if the current scope is a root scope, object scope, or member scope. However, WriteNamespace can only be called immediately before a call to one of the following: WriteNamespace, WriteStartObject, or WriteStartMember. The consecutive WriteNamespace case is for writing multiple namespace declarations to the same node. Eventually, either WriteStartObject or WriteStartMember must be called. WriteNamespace does not use a Start/End metaphor. Although a XAML namespace has members in the CLR representation sense, the members are known and do not need to be represented as a special type of XamlMember for extensibility. To access the values of a NamespaceDeclaration, you access its Namespace and Prefix properties. A namespace declaration may have a String.Empty value for the prefix. A String.Empty prefix represents a declaration of the default XAML namespace. Do not attempt to use null to represent the default prefix; it throws an exception in this API. For more information, see NamespaceDecl.
http://msdn.microsoft.com/en-us/library/system.xaml.xamlobjectwriter.writenamespace(v=vs.110)
CC-MAIN-2014-23
en
refinedweb
Code. Collaborate. Organize. No Limits. Try it Today. Many people have been asking me about a how to create a proxy. Actually is very simple to do in Managed C++. This article is made very simple, you can add as much as you want at the base class we can call it CHttpProxy CHttpProxy I'll be using the known class TcpClient. TcpClient. // Header File // CHttpProxy.h __gc class CHttpProxy { public: CHttpProxy(String __gc *szHost, int port); String __gc *m_host; int m_port; unsigned char SendToServer(unsigned char Packet __gc[]) __gc[]; }; // CHttpProxy.cpp #using <mscorlib.dll> #using <SYSTEM.DLL> using namespace System; using System::Net::Sockets::TcpClient; using System::String; using System::Exception; using System::Net::Sockets::NetworkStream; #include "httpproxy.h" #include <stdio.h> CHttpProxy::CHttpProxy(String __gc *szHost, int port) { m_host = szHost; m_port = port; } unsigned char CHttpProxy::SendToServer(unsigned char Packet __gc[]) __gc[] { TcpClient * tcpclnt = new TcpClient(); unsigned char bytes __gc[]; try { tcpclnt->Connect(m_host,m_port); } catch (Exception * e ) { Console::WriteLine(e->ToString()); return NULL; } // Send it if ( tcpclnt ) { NetworkStream * networkStream = tcpclnt->GetStream(); int size = Packet->get_Length(); networkStream->Write(Packet, 0, size); bytes = new unsigned char __gc[tcpclnt->ReceiveBufferSize]; networkStream->Read(bytes, 0, (int) tcpclnt->ReceiveBufferSize); return (bytes); } return (NULL); } Simple isn't it? This class creates a connection to the "real" IIS server. So(...) How to use it now? Simpler than bowling water, you may remember this part of the code from my previous article: TcpListener * pTcpListener; TcpListener = new TcpListener(80); TcpListener->Start(); TcpClient * pTcpClient; unsigned char sendbytes __gc[]; pTcpClient = m_TcpListener->AcceptTcpClient(); NetworkStream * networkStream = pTcpClient->GetStream(); bytes = new unsigned char __gc[tcpCl->ReceiveBufferSize]; networkStream->Read(bytes, 0, (int) tcpCl->ReceiveBufferSize); // Now we got the request Console::WriteLine(S"Packet Received"); CHttpProxy *pProxy; pProxy = new CHttpProxy(S"", 80); // Goes somewhere else! sendbytes = pProxy->SendToServer(bytes); networkStream->Write(sendbytes, 0 , sendbytes->get_Length()); networkStream->Close(); We listen on port 80 for any connection. Anything that comes in that port is re-directed to It's actually a pretty easy concept. The most interesting part comes when you get the buffer. You got the control to remove images from the HTML or change the content on the fly. In my next article will look into those.
http://www.codeproject.com/Articles/2121/How-to-create-a-simple-proxy-in-Managed-Cplusplus
CC-MAIN-2014-23
en
refinedweb
Code. Collaborate. Organize. No Limits. Try it Today. This article discusses proxies/decorators in the context of the C# programming language, and shows a Visual Studio add-in which helps with the creation of such objects from your code. To use the compiled add-in presented in this article, unzip the installation files into your Add-ins folder. If you’ve ever done textual code generation from C#, you probably know how convenient it is to subclass StringBuilder and add features specific for the type of code you want to build. You would also know that you really can’t subclass StringBuilder since it’s sealed. One option is to use extension methods, but what if you want to keep, say, the indentation level of a particular builder class? You end up making HashTables of WeakReference classes, and it all gets messy. Luckily, there is an (arguably) better way. StringBuilder sealed HashTable WeakReference What I’m talking about is using a decorator instead. A decorator over the StringBuilder class can have all the methods that StringBuilder has, and more. Unlike the static extension method class, it can keep instance-specific state. It can also do other interesting and useful things, such as add proxy code, change return types, and other fun things. static Propagating (or proxying) lots of property assignments and method calls is no fun. That’s why I decided to write a small tool to do it for me. Let’s take a look at the tool in action. Okay, so I want to make a general-purpose CodeBuilder class from which I’d like to derive CSharpBuilder, FSharpBuilder, NemerleBuilder, and so on. How do I do it? CodeBuilder CSharpBuilder FSharpBuilder NemerleBuilder Step I (optional): I open mscorlib in Reflector, locate the StringBuilder class, and copy it verbatim into my project file (any filename will do – it doesn’t really matter). If I was making a decorator over one of the files in my solution, I’d skip this step. Since StringBuilder is not around, I copy it. Don’t bother fixing missing references or compiling the stuff – there’s no point. We just dragged in the source code so the decorator builder can find it. Step II: Now, I right-click the project I want the decorator in and choose Add | Decorator: Step III: Now, I select the StringBuilder class in the tree and tick its box. I type in the decorator name and press the OK button: Since many StringBuilder methods return a StringBuilder object, I get a warning that a fluent interface has been detected: Since I want to return CodeBuilder objects instead, I press Yes to make the substitution. Step IV: This is the final step. I’ve got my class, so all I need to do now is add the missing references and do some clean-up so that all the wrongly translated parts are either removed or are made compilable (Reflector isn’t perfect, you know). Of course, I also remove the stuff I copied from Reflector – it’s no longer necessary. That’s it! I’ve got my decorator. public class CodeBuilder { private readonly StringBuilder stringBuilder; private CodeBuilder(StringBuilder stringBuilder) { this.stringBuilder = stringBuilder; } public int Capacity { get { return stringBuilder.Capacity; } set { stringBuilder.Capacity = value; } } // other members omitted } The add-in parses the project content tree and locates every .cs file. Then, it uses a free C# parser that I found on CodePlex to parse the files and build a visual tree out of them. The last part is really obvious – it just goes through the structure of the classes the user chose, and makes propagating methods/properties. This project is my first (and only) use of WF. The decorator is built with a very simple workflow. In case you’re interested, here it is (not too exciting, is it?): You don’t have to make a decorator over just one class. If you are after some simulated multiple inheritance, you can specify two or more classes to decorate over. It will be up to you to extract interfaces and deal with name collisions, since my add-in doesn’t handle those directly. This add-in is part of a set called P/factor that I wrote mainly for internal use. I will write about other code generation add-ins in the near future. Meanwhile, feel free to experiment with the add-in. I also appreciate comments and votes, since it's the only indicator I have of whether my articles are useful..
http://www.codeproject.com/Articles/31555/A-tool-for-making-C-decorators-proxies?fid=1532149&df=90&mpp=10&noise=1&prof=True&sort=Position&view=None&spc=None&PageFlow=FixedWidth
CC-MAIN-2014-23
en
refinedweb
SharePoint is built on top of ASP.NET. There are several key classes/objects in the SharePoint infrastructure which customize the ASP.NET pipeline; or in the case of SharePoint 2010 the integrated ASP.NET/IIS pipeline. The abstract and base implementations for these classes, as well as important interfaces, are defined in the System.Web namespace. The SharePoint versions are all defined in the Microsoft.SharePoint.ApplicationRuntime namespace, in the Microsoft.SharePoint.dll assembly. Most of the information provided below comes from using .NET Reflector to inspect these classes, as well as inspecting the .config files on SharePoint servers. The first of these classes is SPHttpApplication. This class derives from System.Web.HttpApplication, and is associated with SharePoint Web Applications through a global.asax file in the root directory for each WebApp. Its responsibilities are minimal – it registers VaryByCustomString output cache profiles and registers a EventHandler for unhandled exceptions. In SharePoint 2010, it also provides a new SharePointEndRequest event which is fired in the EndRequest event of the HttpApplication. Presumably, developers could hook up to this event easily in the global.asax file. The next class is SPHttpHandler. It derives from System.Web.DefaultHttpHandler, which implements the IHttpHandler interface. SPHttpHandler is associated with all requests (well, GET, HEAD, and POST requests) via an <add /> element under <httpHandlers> in the web.config file for each SharePoint Web Application, at least in SharePoint 2007. Most of the work is done by the base class. The SharePoint derivation adds an override for the OverrideExecuteUrlPath, which determines if this request should be handled by owssvr.dll. It also sets some headers and adds HttpOnly for cookies, though this method doesn’t seem like the proper place for that – which brings us to SharePoint 2010, where this handler is gone. In SharePoint 2010, the handlers for all ASP.NET extensions are the default ones configured in applicationHost.config (under <location path=””>). So *.aspx is handled by System.Web.UI.PageHandlerFactory, *.asmx by System.Web.Services.Protocols.WebServiceHandlerFactory, etc., as they would be for a typical ASP.NET application. *.dll is handled by the ISAPI module in IIS7, which expects to load and execute a DLL in the Web Application’s directory. owssvr.dll isn’t in the Web Application’s directory of course; it’s in 14\ISAPI. So a special entry is added for the /_vti_bin/owssvr.dll path, pointing it to this universal path. If you think about it, this is also why using a special HttpHandler to hand off to owssvr.dll won’t work anymore – we’re no longer passing all (*) requests to the same handler. Unless we override this, DLL extensions won’t be passed to the ASP.NET page handlers, and once we’re overriding it, might as well use the built-in configuration options to just pass the request straight to where it’s supposed to go. The last, and probably most important, class is the SPRequestModule class. In 2007, this module is added to the ASP.NET pipeline via an <add /> element in the <system.web/httpModules> section in the web.config for each SharePoint Web Application. In 2010, the module is added to the integrated pipeline via the <system.webServer/modules> section, as you would expect. SPRequestModule directly implements the IHttpModule interface and provides most of the additional configuration and processing needed for SharePoint pages. For example, the Init() method of SPRequestModule is responsible for registering SharePoint’s VirtualPathProvider, which allows pages to be retrieved from the database or file system, as appropriate. It provides other functions as well, which would make a good topic for another post some time. On the topic of additional SharePoint modules, you’ll notice that a module named SharePoint14Module is also added in the <system.webServer/modules> section. No type is provided, because this is a reference to native module – our old friend owssvr.dll. Native modules must be declared in applicationHost.config in the <system.webServer/globalModules> section, then added to individual locations. You’ll find the SharePoint 14 module first declared there. I believe this indicates that owssvr.dll has now been re-written as an IIS7 native module (instead of an ISAPI filter), and is providing some functionality for each request. That wraps up an overview of the SharePoint-ASP.NET integration points – SPHttpApplication, SPHttpHandler, and SPRequestModule. Enjoy! Great post, thank you so much. I have a question related to this: How can I inject something into any page content while keeping the original page content untouched? The scenario I am try to implement is: 1. Inject "Before" and "After" to any page without changing master page or page layout. 2. Users won't see the words when they edit the pages, and once they save the pages, the words will show up. 3. Output cache will work fine Any help will be appreciated and thank you so much forums.iis.net/.../1184037.aspx I'm having issue of above describe link Wild card script mapping fails
http://blogs.msdn.com/b/besidethepoint/archive/2010/05/01/how-sharepoint-integrates-with-the-asp-net-infrastructure.aspx
CC-MAIN-2014-23
en
refinedweb
Name | Synopsis | Description | Return Values | Usage | Attributes | See Also | Notes #include <dlfcn.h> int dladdr(void *address, Dl_info_t *dlip); int dladdr1(void *address, Dl_info_t *dlip, void **info, int flags); The dladdr() and dladdr1() functions determine if the specified address is located within one of the mapped objects that make up the current applications address space. An address is deemed to fall within a mapped object when it is between the base address, and the _end address of that object. See NOTES. If a mapped object fits this criteria, the symbol table made available to the runtime linker is searched to locate the nearest symbol to the specified address. The nearest symbol is one that has a value less than or equal to the required address. The Dl_info are one of a family of functions that give the user direct access to the dynamic linking facilities. These facilities are available to dynamically-linked processes only. See Linker and Libraries Guide. See attributes(5) for descriptions of the following attributes: ld(1), dlclose(3C), dldump(3C), dlerror(3C), dlopen(3C), dlsym(3C), attributes(5) Linker and Libraries Guide The Dl_info_t pointer elements point to addresses within the mapped objects. These pointers can become invalid if objects are removed prior to these elements use. See dlclose(3C). If no symbol is found to describe the specified address, both the dli_sname and dli_saddr members are set to 0. If the address specified exists within a mapped object in the range between the base address and the address of the first global symbol in the object, the reserved local symbol _START_ is returned. This symbol acts as a label representing the start of the mapped object. As a label, this symbol has no size. The dli_saddr member is set to the base address of the associated object. The dli_sname member is set to the symbol name _START_. If the flag argument is set to RTLD_DL_SYMENT, symbol information for _START_ is returned. | Usage | Attributes | See Also | Notes
http://docs.oracle.com/cd/E19082-01/819-2243/6n4i098ta/index.html
CC-MAIN-2014-23
en
refinedweb
ASP.NET XML Web Service Basics Since ASP.NET provides the infrastructure for the inner workings of a Web service, developers can focus on implementing the functionality of their specific Web service. Enabling a Web service using ASP.NET entails creating a file with an .asmx file name extension, declaring a Web service in that file and possibly another file, and defining Web service methods. The procedures are listed in Walkthrough: Building a Basic XML Web Service Using ASP.NET and are elaborated upon here. Declaration of Web Services When you create a Web service in ASP.NET, you place the required @ WebService directive at the top of a text file with an .asmx file name extension. The presence of the .asmx file and the @ WebService directive correlate the URL address of the Web service with its implementation. You also implement the Web service class that defines the methods and data types visible by Web service clients. The Web service class you define can be included directly in the .asmx file, or in a separate file. If you use a separate file, it must be compiled into an assembly. Optionally, you can apply a WebService attribute to the class that implements the Web service. The class that implements the Web service can derive from the WebService class. By applying the optional WebService attribute to a class that implements a Web service, you can set the default XML namespace for the Web service along with a string to describe the Web service. It is highly recommended that this default namespace, which originally is, be changed before the Web service is made publicly consumable. This is important because the Web service must be distinguished from other Web services that might inadvertently use the namespace as the default (<>). Classes that implement a Web service created using ASP.NET can optionally derive from the WebService class to gain access to the common ASP.NET objects, such as Application, Session, User, and Context. The Application and Session properties provide access to storing and receiving state across the lifetime of the Web application or a particular session. For more information on state management, see How to: Manage State in Web Services Created Using ASP.NET. The User property contains the identity of the caller, if authentication is enabled, for the Web service. With the identity, a Web service can determine whether the request is authorized. For more information on authentication, see Securing XML Web Services. The Context property provides access to all HTTP-specific information about the Web service client's request. For more information on the Context property, see WebService.Context Property. Definition of Web Service Methods.
http://msdn.microsoft.com/en-us/library/a7xexaft.aspx
CC-MAIN-2014-23
en
refinedweb
The topic you requested is included in another documentation set. For convenience, it's displayed below. Choose Switch to see the topic in its original location. String.Concat Method (String[]) .NET Framework 4 SystemSystem Assembly: mscorlib (in mscorlib.dll) Parameters - values - Type: System.String[] An array of string instances. Return ValueType: System.String The concatenated elements of values. The following example demonstrates the use of the Concat method with a String array. using System; public class ConcatTest { public static void Main() { // make an array of strings. Note that we have included spaces string [] s = { "hello ", "and ", "welcome ", "to ", "this ", "demo! " }; // put all the strings together Console.WriteLine(string.Concat(s)); // sort the strings, and put them together Array.Sort(s); Console.WriteLine(string.Concat:
http://msdn.microsoft.com/en-us/library/windows/apps/0wkb0y3w(v=vs.100)
CC-MAIN-2014-23
en
refinedweb
Puppy gets a Pet action and no attack but cats and butterflies are still Attack Katie Russell 2013-06-03 Kimmo Rundelin 2013-09-13 The puppy (ab)uses the custom menu system to override the default menu items for RPEntities. The other PassiveNPCs can use the same trick if there are nice alternate actions for them. Here's the current situation: The cats in Felina's house are PassiveNPCs and they already have a custom menu entry - "Own", which makes the non-attackable. The pet cats are instances of a different class - Cat. They're not PassiveNPCs and have no custom menu entry, so they're attackable. Attackable PassiveNPCs are butterflies (Semos) and fishes (Ados). There's not much one could do to a fish (... maybe disturb it, and make it swim the other way?) or a butterfly. So adding a custom menu entry is not always easy. There's a check in Entity2DView.buildActions() if (!entity.getRPObject().has("menu")) { if (entity.isAttackedBy(User.get())) { list.add(ActionType.STOP_ATTACK.getRepresentation()); } else { list.add(ActionType.ATTACK.getRepresentation()); } } Maybe we could replace this "has-menu" condition with something more meaningful, like a NotAttackable marker interface. Something like: public interface NotAttackable { } public class Cat extends Pet implements NotAttackable public class PassiveNPC extends NPC implements NotAttackable and then check this way: if (!entity.getRPObject() instanceof NotAttackable) What do you guys think? Hendrik Brummermann 2013-12-01 Although this is more work and requires compatibility code: I think it may be a good idea to let the server decide to add an Attack menu. A quick idea (without looking too deep into the code) is to extend the "menu" attribute to contain a list of all menu items. The client will still need to do some situation specific processing, e. g. convert "Attack" into "Stop Attack". This way the client will become more generic. And it is one step forward on untangling the various aspects of Sheeps, Pets, Creatures, SpeakerNPCs, PassiveNPCs on the server side in the future. Anonymous
http://sourceforge.net/p/arianne/bugs/5706/
CC-MAIN-2014-23
en
refinedweb
Programming with DB-Library for C New Information - SQL Server 2000 SP3. Programming with DB-Library for C typically involves the following steps: - Connect to Microsoft® SQL Server™ 2000. - Put Transact-SQL statements into a buffer and send them to SQL Server. - Process the results, if any, returned from SQL Server, one statement at a time and one row at a time. You can put the results into program variables, where the application can manipulate them. - Handle DB-Library errors and SQL Server messages. - Disconnect from SQL Server. The following example shows the basic framework of many DB-Library for C applications. The application connects to SQL Server, sends a Transact-SQL SELECT statement to SQL Server, and processes the set of rows resulting from the SELECT statement. For more information about defining the target operating system prior to compiling your application, see Building Applications. #define DBNTWIN32 #include <stdio.h> #include <windows.h> #include <sqlfront.h> #include <sqldb.h> // Forward declarations of the error handler and message handler. int err_handler(PDBPROCESS, INT, INT, INT, LPCSTR, LPCSTR); int msg_handler(PDBPROCESS, DBINT, INT, INT, LPCSTR, LPCSTR, LPCSTR, DBUSMALLINT); main() { PDBPROCESS dbproc; // The connection with SQL Server. PLOGINREC login; // The login information. DBCHAR name[100]; DBCHAR city[100]; // Install user-supplied error- and message-handling functions. dberrhandle (err_handler); dbmsghandle (msg_handler); // Initialize DB-Library. dbinit (); // Get a LOGINREC. login = dblogin (); DBSETLSECURE (login); DBSETLAPP (login, "example"); // Get a DBPROCESS structure for communication with SQL Server. dbproc = dbopen (login, "my_server"); // Retrieve some columns from the authors table in the // pubs database. // First, put the command into the command buffer. dbcmd (dbproc, "SELECT au_lname, city FROM pubs..authors"); dbcmd (dbproc, " WHERE state = 'CA' "); // Send the command to SQL Server and start execution. dbsqlexec (dbproc); // Process the results. if (dbresults (dbproc) == SUCCEED) { // Bind column to program variables. dbbind (dbproc, 1, NTBSTRINGBIND, 0, name); dbbind (dbproc, 2, NTBSTRINGBIND, 0, city); // Retrieve and print the result rows. while (dbnextrow (dbproc) != NO_MORE_ROWS) { printf ("%s from %s\n", name, city); } } // Close the connection to SQL Server. dbexit (); return (0); } int err_handler (PDBPROCESS dbproc, INT severity, INT dberr, INT oserr, LPCSTR dberrstr, LPCSTR oserrstr) { printf ("DB-Library Error %i: %s\n", dberr, dberrstr); if (oserr != DBNOERR) { printf ("Operating System Error %i: %s\n", oserr, oserrstr); } return (INT_CANCEL); } int msg_handler (PDBPROCESS dbproc, DBINT msgno, INT msgstate, INT severity, LPCSTR msgtext, LPCSTR server, LPCSTR procedure, DBUSMALLINT line) { printf ("SQL Server Message %ld: %s\n", msgno, msgtext); return (0); } This example illustrates features common to most DB-Library for C applications, including: header files All source files that contain calls to DB-Library functions require two header files, Sqlfront.h and Sqldb.h. Before including the Sqlfront.h and Sqldb.h files, define the target operating system with #define: - DBMSDOS (for Microsoft MS-DOS®) - DBMSWIN (for 16-bit Microsoft Windows®) - DBNTWIN32 (for 32-bit Windows 95 and Microsoft Windows NT® 4.0) An alternative is to put DBMSDOS, DBMSWIN, or DBNTWIN32 on the compilation command lines. For more information, see the examples in "Include Files", in Building Applications. For Windows, Windows 95, and Windows NT 4.0, you must include Windows.h before including the Sqlfront.h and Sqldb.h files. Include Sqlfront.h before Sqldb.h. Sqlfront.h defines symbolic constants, such as function return values and the exit values STDEXIT and ERREXIT. These exit values can be used as the parameter for the C standard library function exit. The exit values are defined appropriately for the operating system running the application. The Sqlfront.h file also includes type definitions for data types that can be used in program variable declarations. These data types are described in DB-Library for C Data types. The Sqldb.h file contains additional type definitions and DB-Library function prototypes, most of which are meant to be used only by the DB-Library functions. They should not be accessed directly by the program. To ensure compatibility with future releases of DB-Library, use the contents of Sqldb.h only as documented here. dberrhandle and dbmsghandle The first of these DB-Library functions, dberrhandle, installs a user-supplied error-handling function, which is called automatically whenever the application encounters a DB-Library error. Similarly, dbmsghandle installs a message-handling function, which is called in response to informational or error messages returned from SQL Server. The error- and message-handling functions are user-supplied. It is strongly recommended that users supply error-processing functions. dblogin Supplies a LOGINREC structure, which DB-Library uses to log on to SQL Server. Two functions set entries in the LOGINREC. DBSETLPWD sets the password that DB-Library uses when logging in. DBSETLAPP sets the name of the application, which appears in the SQL Server sysprocess table. Certain functions set other aspects of the LOGINREC, which contains defaults for each value they set. Security Note Authorization information, including user name and password, is stored in memory in the LOGINREC structure. It is possible that someone accessing a memory dump of the machine running the application could access this information. Take precautions to prevent access to memory data by unauthorized individuals. dbopen Opens a connection between the application and SQL Server. It uses the LOGINREC supplied by dblogin to log on to the server. It returns a DBPROCESS structure, which serves as the conduit for information between the application and the server. After this function has been called, the application is connected with SQL Server and can send Transact-SQL statements to SQL Server and process the results. Simultaneous transactions must each have a distinct DBPROCESS. Serial transactions can use the same DBPROCESS. Security Note Connection information, including user name and password, is stored in memory in the DBPROCESS structure. It is possible that someone accessing a memory dump of the machine running the application could access this information. Take precautions to prevent access to memory data by unatuthorized individuals. dbcmd Fills the command buffer with Transact-SQL statements, which can then be sent to SQL Server. Each call to dbcmd, after the first, adds the supplied text to the end of any text already in the buffer. The programmer must supply necessary blanks between words, such as the space between the quotation mark and the word WHERE in the second dbcmd call in the example: dbcmd(dbproc, " WHERE state = 'CA' "); Although multiple statements can be included in the buffer, this example only shows how to send and process a single statement. DB-Library allows an application to send multiple statements (called a command batch) to SQL Server and process each statement's set of results separately. dbsqlexec Executes the command buffer; that is, it sends the contents of the buffer to SQL Server, which parses and executes the commands. This function causes DB-Library to wait until SQL Server has completed execution of the query. To avoid this delay, you can call dbsettime to set the DB-Library time-out, or you can use dbsqlsend, dbdataready, and dbsqlok (instead of dbsqlexec) to retain control while SQL Server is busy. dbresults Gets the results of the current Transact-SQL statement ready for processing. After dbresults returns SUCCEED, column meta data for the current result set is available. Your application should call dbresults until it returns NO_MORE_RESULTS. If your program fails to do this, the DB-Library error message 10038 "Results Pending" occurs the next time that DBPROCESS is used. dbbind Binds result columns to program variables. In the example, the first call to dbbind binds the first result column to the name variable. In other words, when the program reads a result row by calling dbnextrow, the contents of the first column in the result row are placed in the name variable. The data type of the binding is NTBSTRINGBIND, one of several binding types available for character data. The second call binds the second result column to the city variable. dbnextrow Reads a row and places the results in the program variables specified by the earlier dbbind calls. Each successive call to dbnextrow reads another result row until the last row has been read and NO_MORE_ROWS is returned. Processing of the results must take place inside the dbnextrow loop. This is because each call to dbnextrow overwrites the previous values in the bound program variables. dbexit Closes all SQL Server connections and frees all DBPROCESS structures created because of the application. It is usually the last DB-Library function in the program.
http://msdn.microsoft.com/en-us/library/aa936949(d=printer,v=sql.80).aspx
CC-MAIN-2014-23
en
refinedweb
Hi folks, rather than discussing about which operator symbol to use for record access, which is really a question of personal taste we should try to seriously discuss the proposals and get to a solutions! We all seem to agree that records are broken in Haskell. In order to fix that we need a new and most probably incompatible solution. However, I think the new solution should go in a new version of the Haskell standard (among other things :-) ). I would strongly disadvice to try to stick with the old system and improve it. Just because there are lots of different opinions we should still try to find a reasonable solution soon. Desite the minor problem of '.' that dominated the discussion so far, what are the REAL offences against Simons proposal [1] (denoted as SR in the following)? To me it sounds like a very reasonable starting point. Which other proposals exist? I quote David Roundy's list of problems [2] with a short annotation whether SR solves them: 1. The field namespace issue. solved by not sharing the same namespace with functions 2. Multi-constructor getters, ideally as a function. not solved. only possible by hand - As stated by Wolfgang Jeltsch [3] another datatype design might be better - I can image a solution is within SR example: > data Decl = DeclType { name :: String, ... } > | DeclData { name :: String, ... } > | ... > d :: Decl in addition to > d.DeclType.name > d.DeclData.name we provide (only if save, see 3.) > d.name 3. "Safe" getters for multi-constructor data types. not supported as it is - with the above suggestion it could be possible (don't know if desireable) 4. Getters for multiple data types with a common field. - solved with contrains > getx :: (r <: { x :: a }) => r -> a 5. Setters as functions. doesn't seem to be supported, or I don't see it right now. 6. Anonymous records. Supported 7. Unordered records. I don' t understand it. points added from me: 8. Subtyping Supported, quite nicely 9. higher order versions for selecting, updateing ... [4] not supported seems important to me, any solutions? Regards Georg [1] [2] [3] [4] Am Donnerstag, 17. November 2005 19:08 schrieb Dimitry Golubovsky: > > > I found it useful to use (mainly for debugging purposes) > > mapM (putStrLn . show) <some list> > > if I want to print its elements each on a new line. > > -- > Dimitry Golubovsky > > Anywhere on the Web -- ---- Georg Martius, Tel: (+49 34297) 89434 ---- ------- --------- -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: not available Url :
http://www.haskell.org/pipermail/haskell/2005-November/016931.html
CC-MAIN-2014-23
en
refinedweb
Release 11.1.2.2 Table of Contents: Setting Up the Environment Exercise 1: Importing Planning Suite Artifacts and Data Exercise 2: Exporting Planning Suite Artifacts and Data This document is a tutorial on how to migrate an Oracle Hyperion Planning application from one environment to another. The steps include artifacts and data of Planning and related components including Oracle Hyperion Calculation Manager, Oracle Essbase Server, Oracle Hyperion Financial Reporting, and Oracle Hyperion Shared Services artifacts. There are two exercises in this document that walk you through the steps needed to migrate artifacts and data from one environment to another. Exercise 1: Importing Planning Suite Artifacts and Data This exercise provides steps to import an entire Planning Suite application. – 15 Minutes Exercise 2: Exporting Planning Suite Artifacts and Data This exercise provides steps to export application artifacts using Oracle Hyperion Enterprise Performance Management System Lifecycle Management and to export Essbase data using Oracle Essbase Administration Services. – 30 Minutes Before you start these exercises, you will need an environment. To build this environment, follow the steps in the guide titled Oracle Enterprise Performance Management System Rapid Deployment of Oracle Hyperion Planning in Development Environments located at:. You will need some sample data for these exercises. To download the sample data: Go to the Oracle Documentation Library on Oracle® Technology Network (). Under Oracle Enterprise Performance Management System Release 11.1.2.2, click View Library. On the Foundation Services tab, select the file titled Migrating Oracle Hyperion Planning Applications - Data, and unzip its content to EPM_ORACLE_INSTANCE\import_export. If the import_export folder does not exist, you must create it You must use 7-Zip to unzip the contents of the file. The name of the actual zip file is epm_migrate_planning_app_data.zip. To import Planning Suite artifacts using Lifecycle Management: Log in to Oracle Hyperion Enterprise Performance Management Workspace and select Navigate, then Administer, and then Planning Administration. Create a datasource with the name FinSrv. Click OK to save the datasource. While still in EPM Workspace, select Navigate, then Administer, and then Shared Services Console. You can also log in to Oracle Hyperion Shared Services Console by accessing http://<host>:<port>/interop/index.jsp. Expand File System, right-click FinSrvApplicationSuite, and select Import. Note that you can also explore and check out artifacts in the File System folder before importing by expanding the folder and then browsing in the right-hand pane. The File System folder has application-related content from Planning, Calculation Manager, Financial Reporting, Essbase, and Shared Services. Select OK when asked if you want to proceed with the import. Wait for the import operation to complete. Log in to Oracle Hyperion Enterprise Performance Management Workspace and validate that the Planning application has been created successfully. You can also validate the Oracle Hyperion Calculation Manager business rules and the reports in the Reporting and Analysis repository. To import Planning data using MAXL: Launch Essbase Administration Services Console by accessing http://<EAS_SERVER_NAME>:<port>/easconsole. Select File, then Editors, and then MaxL script editor. Execute the following command: import database FinSrv.HCP data from data_file '<EPM_ORACLE_INSTANCE>\import_export\HCM.dat' on error write to 'c:\HCP.log' To export Planning Suite artifacts and data: Log in to Oracle Hyperion Shared Services Console (http://<host>:<port>/interop/index.jsp). Expand Application Groups, then Foundation, and then Shared Services. Click the Shared Services node. In the right pane, you should see all the Oracle Hyperion Shared Services artifacts. In the right pane, expand Native Directory, and select the users and groups artifacts. Expand Assigned Roles, and then Shared Services for your Oracle Hyperion Planning application, and then select the Assigned Roles artifact named after the application, FinSrv, Reporting and Analysis. Your screen should look similar to the following: In the left pane, select Foundation, and then Calculation Manager. In the right pane, expand Planning and select the FinSrv application. In the left pane, expand the HP Application group and explore the FinSrv application. You should be able to see all the artifacts of this application in right pane. Select all the artifacts. In the left pane, expand Application Groups, and then EssbaseCluster-1. Select the EssbaseCluster-1 node. You should now see the Substitution Variables folder of EssbaseCluster-1 listed in the right pane. Select all the artifacts. In the left pane, expand the Reporting and Analysis application group and select Reporting and Analysis. You should now see all the Oracle Hyperion Reporting and Analysis artifacts in the right hand pane. In the right pane, expand Repository Objects and select FinSrv; then, expand Security and select all users. These are the Oracle Hyperion Financial Reporting artifacts related to the FinSrv application. In the Export dialog box, enter the File System Folder name and click Export. This launches the Migration Status Report. Wait for the migration to complete. The migrated content is available at EPM_ORACLE_INSTANCE/import_export.
http://docs.oracle.com/cd/E17236_01/epm.1112/epm_migrate_planning_app/epm_migrate_planning_app.html
CC-MAIN-2014-23
en
refinedweb
Generics and Reflection (C# Programming Guide) Because the Common Language Runtime (CLR) has access to generic type information at run time, you can use reflection to obtain information about generic types in the same way as for non-generic types. For more information, see Generics in the Run Time (C# Programming Guide). In the .NET Framework 2.0 several new members are added to the Type class to enable run-time information for generic types. See the documentation on these classes for more information on how to use these methods and properties. The System.Reflection.Emit namespace also contains new members that support generics. See How to: Define a Generic Type with Reflection Emit. For a list of the invariant conditions for terms used in generic reflection, see the IsGenericType property remarks. In addition, new members are added to the MethodInfo class to enable run-time information for generic methods. See the IsGenericMethod property remarks for a list of invariant conditions for terms used to reflect on generic methods.
http://msdn.microsoft.com/en-us/library/ms173128(v=vs.110).aspx
CC-MAIN-2014-23
en
refinedweb
Libraries/WhenToRewriteOrRename From HaskellWiki Revision as of 10:35, 9 June 2010 There have been a few cases of major API changes / rewrites to famous old packages causing problems, including: - QuickCheck 1 vs 2 - parsec 2 vs 3 - OpenGL - Haxml 1.13,1.19 to call the new library 'fgl'. It is a controversial step to write a new library and give it the same name as an existing, famous library. Let's look at the arguments. 1 Reasons to use the old. - It makes development by new users simpler by not fracturing the package-space (the "Which version of QuickCheck should I use?" problem). - It decreases the maintainer workload as the same person or team will often be responsible for both packages. - A lot of respected members of the Haskell community (e.g. Cale) do not like many aspects of the current API (and thus refuse to use it) and we're taking their points of view into account. - The new version of the library is keeping the "spirit" of old fgl alive, but modernising the interface and providing new functionality (e.g. the ability to restrict the label types or use a custom Node type). - There will be a full transition guide between the old and new versions (something that was lacking for packages like QuickCheck from what I could tell). - Major version numbers exist for a reason: to denote breakage. We really need to educate developers to avoid having too lax or open-ended package dependencies. 2 Reasons not to use the name - Code that depends on 'fgl' will break. There are 23 direct and 25 indirect dependencies on fgl. - Rebuttal: - Have contacted all maintainers of packages on Hackage which have packages without an upper bound on the version of fgl used; most have already gotten back to me saying they will release a bug-fix version to resolve this. - With the Package_versioning_policy, people should always have upper bounds on their dependencies anyway. - Until new fgl is stabilised, Hackage can set the default version of fgl to be < 6 (same as what happened with base-3 to base-4 transition, etc., thus any packages that do not have an upper bound will not be affected by cabal-install, etc. -. - This is true. However, every now and then someone tries to work out what this mystical packedstring library is and tries to use it (old invalid deps, etc.). - The package has been stable for ~10 years -- why change a stable API? It is already "perfect" - As mentioned above: many people do not think that the current API is perfect. - The new package really isn't the same package in any sense. - Rewrites by new teams damage the brand of famous packages (e.g. parsec 3) - No additional breakages are introduced. - Not sure what your point is here. - If you weren't maintainer of 'fgl' this rewrite wouldn't even be possible to call 'fgl' -- there's a conflict of interest. - Of course not, but I volunteered to become the maintainer of fgl precisely to modernise the interface (which as far as I know is why Martin Erwig gave fgl up for adoption: he didn't have time to make changes that people were asking him for). - Maintaining Haskell98 compatability. Keep it simple. (See regex-posix's mistakes here) - Not sure what you mean by this point; what are regix-posix's mistakes? Whilst in general I can see Haskell98 (or Haskell2010) compatability being a good thing to keep (in case someone uses another compiler, etc.) if there's a good argument to be made for why a certain extension would be useful then why shouldn't we use it? Whilst I mightn't have been working on a major Haskell library back then, it was pointed out to me a while back that you shouldn't constrain yourself by enforcing Haskell98 compatability for no reason. - Distros that support the Haskell Platform will have to keep an old version of fgl around for a long time anyway. - I don't intend to have the new fgl be actually used by people for a while yet anyway, as I intend to get the ecosystem built up around it (fgl-algorithms, etc.) first. - I think that keeping base-3 compatibility in xmonad just to ensure that people using the Long Term Release of Ubuntu has in a sense held it back, as it was more of a pain to transition to base-4 later on than it would have been to do it earlier (using extensible-exceptions if nothing else). - The original author might not approve of the use of the name. - If this is true, then why did he publicly state in mailing lists that he wanted someone to take over? - having tutorials not work for later revisions is more confusing than having various packages doing the same thing. - The current tutorials do not fully work with the current version anyway, and we will be writing tutorials (already had one offer to help out with this). - separate names (both for the package name and module name-space) is that its easier to have both packages installed at the same time - A valid argument, especially when seeing the fall-out between mtl and transformers. 3 Possible Compromises - Until we're ready to release, either don't release fgl on Hackage or call it fgl-experimental or something. - The name "Functional Graph Library" is rather vague anyway, whereas something like "inductive-graphs" makes more sense in terms of the actual data structures, etc. involved. As such we could give the new version of the library something like that if down the track we could officially deprecate the fgl library (like how packedstring has been deprecated). - We could officially split up the fgl package namespace even further: rather than having fgl + fgl-algorithms, etc. we could have something like fgl-classes, fgl-algorithms, etc. As such the base name is kept whilst there is no ambiguity on which version is being used.
http://www.haskell.org/haskellwiki/index.php?title=Libraries/WhenToRewriteOrRename&diff=prev&oldid=34932
CC-MAIN-2014-23
en
refinedweb
Hello, everyone. I currently am having some trouble with C++, mostly the Object Oriented aspect of it. I have been working on a series of problems on one homework assignment and I am left stumped to the point that everything I have learned somehow makes no sense.(hard to explain) I had to take a cpp file which has the class and the main in it, and then split it into three files. a header, main and core file. The core file will use a constructor which calls on two variables(i think this is the right term for this). The header will be like any header would, and the main would execute the whole program. The problem I am currently having which maybe caused by another error but right now I only have two. I somehow am unable to call upon the function which has a return of Void. I'm running the Microsoft Visual C++ 2008 Express Edition compiler incase that makes any difference. Below I'll post the code for each of the files. I have another problem from the same assignment but this one has been plaguing me right form the start. By the way, I am in a Game Programming degree course in case that helps in where I am coming from. I have looked online and seen mixed answers on how to do this, and any help i tried to get from other sites, was not really helpful. I've gone through tutorials both online and in my course book, but somehow something is not adding up right. Thank you ahead of time for any help I get, I'll surely be looking on here more and asking more questions to future homework assignments. Main file: Core file:Core file:Code:#include <iostream> using namespace std; #include "Asteroids.h" int main() { Asteroid firstAsteroid; Asteroid secondAsteroid; firstAsteroid.displayStats(1,20); secondAsteroid.displayStats(3,4); char wait; cin>> wait; return 0; } };};Code:#include <iostream> using namespace std; class Asteroid { int astSize; int astSpeed; int AsteroidNumber; public: Asteroid ( int Size, int Speed, int AsteroidNumber ) { setSpeed( Size); setSize (Speed); } int getSize() { return astSize; } int getSpeed() { return astSpeed; } void displayStats( int AsteroidNumber) { cout<< "Asteroids " << endl; cout<< " Asteroid number " << AsteroidNumber << endl; cout<< " Size " << astSize << endl; cout<< " Speed " << astSpeed << endl; } void setSize(int Size) { astSize = Size; } void setSpeed(int Speed) { astSpeed = Speed; } header: Code:#define Asteroids_h class Asteroid { public: int astSpeed; int astSize; int AsteroidNumber; Asteroid () {} void displayStats(int astSize, int astSpeed, int AsteroidNumber); void setSize( int astSize); void setSpeed(int astSpeed); int setSpeed() { return astSpeed; } int setSize() { return astSpeed; } };
http://cboard.cprogramming.com/cplusplus-programming/124615-confusion-constructors-classes.html
CC-MAIN-2014-23
en
refinedweb
A new version of the IBM Cloud Pak for Integration, 2019.4.1, was recently released which includes new IBM App Connect Enterprise certified container features. The new features in this release include: - Support for Red Hat OpenShift Container Platform 4.2 IBM Cloud Pak for Integration 2019.4.1 requires OpenShift 4.2, and for IBM App Connect Enterprise users that includes the addition of OpenShift Routes for HTTP and HTTPS traffic to a deployed integration server, which provide consistent endpoints for interactions. - Deploy API flows authored in IBM App Connect Designer You can now combine the ease of authoring flows for APIs in IBM App Connect Designer with the flexibility of deploying into your own private cloud. Previously, the managed cloud service had been the only valid deployment target for a Designer-authored flow, but now you can deploy into IBM Cloud Pak for Integration with IBM App Connect Enterprise. In 2019.4.1 this also includes the ability to run a subset of connectors locally, in addition to interacting with connectors in the IBM App Connect managed cloud service. See this blog post for more information about this feature. - User authorization for the IBM App Connect Enterprise dashboard User authorization has been added for the IBM App Connect Enterprise dashboard using the Identity and Access Management (IAM) service. The authority to use a dashboard can now be determined by the user’s authority for the namespace that contains the dashboard. The different roles for the users provide different permissions for actions in the dashboard. IBM Cloud Pak for Integration 2019.4.1 is available in a PPA via IBM Passport Advantage (Part CC4R5EN), or via the IBM Cloud Entitled Registry. It’s Really nice. Thank you for this.
https://developer.ibm.com/integration/blog/2019/12/06/cloud-pak-for-integration-2019-4-1-ace/
CC-MAIN-2020-34
en
refinedweb
soa1d_container::const_accessor and aos1d_container::const_accessor Lightweight object provides efficient array subscript [] access to the read elements from inside a soa1d_container or aos1d_container. #include <sdlt/soa1d_container.h>and #include <sdlt/aos1d_container.h> Syntax template <typename OffsetT> soa1d_container::const_accessor; template <typename OffsetT> aos1d_container::const_accessor; Arguments - typename OffsetT - The type offset that embedded offset that will be applied to each operator[] call Description const_accessorprovides [] operator that returns a proxy object representing a const Element inside the Container that can export the Primitive's data. Can re-access with an offset to create a new const_accessorthat when accessed at [0] will really be accessing at index corresponding to the embedded offset. Lightweight and meant to be passed by value into functions or lambda closures. Use const_accessorsin place of const pointers to access the logical array data.
https://software.intel.com/content/www/us/en/develop/documentation/cpp-compiler-developer-guide-and-reference/top/compiler-reference/libraries/introduction-to-the-simd-data-layout-templates/user-level-interface/accessors/soa1d-container-const-accessor-and-aos1d-container-const-accessor.html
CC-MAIN-2020-34
en
refinedweb
Implementation status: partially implemented Synopsis #include <sys/socket.h> int connect(int socket, const struct sockaddr *address, socklen_t address_len); Description The connect() function attempts to make a connection on a connection-mode socket or to set or reset the peer address of a connectionless-mode socket. Arguments: socket - the file descriptor of the specified socket. address - a pointer to a sockaddr structure containing the peer address. The length and format of the address depend on the address family of the socket., address_len - the length of the sockaddr structure pointed to by the address argument. If the socket has not already been bound to a local address, connect() binds it to an address which, unless the socket's address family is AF_UNIX, is an unused local address. If the initiating socket is not connection-mode, then connect() sets the socket's peer address, and no connection is made. For SOCK_DGRAM sockets, the peer address identifies the destination of all datagrams to be sent on subsequent send() functions, and limits the remote sender for subsequent recv() functions. If the sa_family member of address is AF_UNSPEC, the socket's peer address is reset. Note that despite no connection being made, the term connected is used to describe a connectionless-mode socket for which a peer address has been set. If the initiating socket is connection-mode, then connect() attempts to establish a connection to the address specified by the address argument. If the connection cannot be established immediately and O_NONBLOCK is not set for the socket, connect() blocks until the connection is established. If the timeout interval expires before the connection is established, connect() fails and the connection attempt is aborted. If connect() is interrupted by a signal that is caught while blocked waiting to establish a connection, connect() fails and set errno to [ EINTR], but the connection request is not aborted, and the connection is established asynchronously. If the connection cannot be established immediately and O_NONBLOCK is set for the file descriptor for the socket, connect() fails and sets errno to [ EINPROGRESS], but the connection request is not aborted, and the connection is established asynchronously. Subsequent calls to connect() for the same socket, before the connection is established, fail and errno is set to [ EALREADY]. When the connection has been established asynchronously, pselect(), select(), and poll() indicate that the file descriptor for the socket is ready for writing. The socket in use requires the process to have appropriate privileges to use the connect() function. Return value 0 on success, -1 otherwise ( errno is then set to indicate the corresponding error). Errors [. For AF_UNIX address family of the socket: [ EIO] - An I/O error occurred while reading from or writing to the file system. [ ELOOP] - A loop exists in symbolic links encountered during resolution of the pathname in address. [ ENAMETOOLONG] - The length of a component of a pathname is longer than { NAME_MAX}. [ ENOENT] - A component of the pathname does not name an existing file or the pathname is an empty string. [}. [ ENETDOWN] - The local network interface used to reach the destination is down. [ ENOBUFS] - No buffer space is available. [ EOPNOTSUPP] - The socket is listening and cannot be connected. Implementation tasks *Implement error detection as described above.
http://phoenix-rtos.com/documentation/libphoenix/posix/connect
CC-MAIN-2020-34
en
refinedweb
Preferences, also known as “prefs”, are key-value pairs stored by Chrome. Examples include the settings in chrome://settings, all per-extension metadata, the list of plugins and so on. Individual prefs are keyed by a string and have a type. E.g., the “browser.enable_spellchecking” pref stores a boolean indicating whether spell-checking is enabled. The pref service persists prefs to disk and communicates updates to prefs between services, including Chrome itself. There is a pref service instance per profile (prefs are persisted on a per-profile basis). The service is used through a client library that offers clients fast and synchronous access to prefs. To connect to the service and start reading and writing prefs simply use the ConnectToPrefService factory function: #include "services/preferences/public/cpp/pref_service_factory.h" class MyService : public service_manager::Service { void OnStart() { auto* connector = context()->connector(); auto pref_registry = base::MakeRefCounted<PrefRegistrySimple>(); // Register any preferences you intend to use in |pref_registry|. prefs::ConnectToPrefService( connector, std::move(pref_registry), {}, base::Bind(&MyService::OnPrefServiceConnected, base::Unretained(this))); } void OnPrefServiceConnected(std::unique_ptr<::PrefService> pref_service) { // Use |pref_service|. } }; The returned PrefService class predates the Pref Service and its behavior hasn't changed (i.e. all existing documentation still applies). Updates made on the PrefService object are reflected immediately in the originating service and eventually in all other services. In other words, updates are eventually consistent. Every pref should be owned by one service. The owning service provides the type and default value for that pref. Owned prefs can be registered as public, meaning other services can read and/or write them, or private (the default). Services that want to access a pref not owned by them must still register those prefs as “foreign” prefs. Registration happens through the PrefRegistry passed to ConnectToPrefService. For example: //services/my_service/my_service.cc void MyService::OnStart() { auto pref_registry = base::MakeRefCounted<PrefRegistrySimple>(); pref_registry->RegisterIntegerPref(kKey, kInitialValue, PrefRegistry::PUBLIC); prefs::ConnectToPrefService(...); } //services/other_service/other_service.cc void OtherService::OnStart() { auto pref_registry = base::MakeRefCounted<PrefRegistrySimple>(); pref_registry->RegisterForeignPref(kKey); prefs::ConnectToPrefService(...); } The design doc is here:
https://chromium.googlesource.com/chromium/src/+/0180320f15c87cb90320c4f523586c491db1d654/services/preferences/
CC-MAIN-2020-34
en
refinedweb
In complex applications, UI components consist of more building blocks than some state and UI. Before I already described a different way to look at our reusable UI components. We can look at them from developers' and users' perspectives at the same time. But on a conceptual level, components have more elements important to their behavior. It is important for developers to understand these concepts. Especially when working on big, complex and critical applications. We have to dive into the anatomy of a UI component. The API, also known as properties Interfaces are a way to describe how we want others to use and interact with our work, our components. The UI is a good example of an interface. It describes what we want our users to see and what we allow for interaction. "Interfaces are a way to describe how we want others to use and interact with our components" But what about the developers? The API of our components, better known as props or properties in most frameworks, is the interface for developers. There are some different API types we can define for other developers. - Configuration: interfaces that allow developers to determine how our UI component should look and act. These are often static values that do not change based on user interaction. Examples are classNameor usePortal; - Data: data often lives higher in the component tree. These interfaces allow data to be present and used in our component. These flows are uni-directional. An example is the valueproperty; - Actions: sometimes we need to invoke changes higher in the component tree. This requires callback functions to pass through the API. An example is the onChangeproperty. Note: to be in line with modern frameworks, I both use the terms properties and API State State is a mutable object that dictates the behavior and UI of our component. It is often combined with data received through the API. In the example below, we have a modal component with an incorporated button. When clicking the button, we set the value of show to true. Now our modal becomes visible for the user. function MyModal (props) { const [show, setShow] = useState(false); const handleShow = () => setShow((s) => !s); return (<br/> <> <button onClick={handleShow}>...</button> {show && <Modal onClose={handleShow}>...</Modal> </> ); } The addition of a state to a component makes it sometimes easy to introduce bugs. The data and action properties are part of the 'data-flow'. But we often interrupt this with our state by copying values from the data properties into our state. But what happens if the values change? Does our state also change? Should it? Look at the example below look of what happens when showModal updates. If MyComponent is already part of the component tree, then nothing happens. We have interrupted the data-flow. Don't. function MyModal({ showModal }) { const [show, setShow] = useState(showModal); if (show) return null; return <Modal onClose={handleShow}>...</Modal>; } Actions As you can see in the diagram, actions link everything together. They are functions harboring small pieces logic. User interaction (e.g. a button click) trigger actions. But life-cycle methods, as described later, also trigger actions. Triggered actions can use data from the state and properties in their execution. Actions can come in many forms: - Actions defined inside the component as a separate function; - Actions defined in the life-cycle method of the component; - actions defined outside the component and used in many components. Good examples are the actions within a module of the scalable architecture. Below you can see part of a small React component example with two different actions. The first action changes the state on interaction (e.g. typing in an <input /> field). The second action triggers the changes. It removes the modal, it makes an external call to a server to save the values and resets the internal state. function MyComponent(props) { const [show, setShow] = useState(true); const [state, setState] = useState(); const save = useMyApiCall(...); function handleChange(value) { setState((old) => ({ ...old, key: value }); } function handleClose() { setShow(false); save(state); setState(); } return <>...</>; } Note: the above component has some small flaws, as does two different state updates in one action. But, it fits its purpose. Lifecycle User inaction results in changes in the state of our component, or higher in the component tree. Data received through the API reflect these changes. When change happens, our component needs to update itself to reflect these changes. Or it needs to re-render. Sometimes, we want your component to execute extra logic when this happens. A so-called 'side-effect' needs to be triggered. of the changing values. A simple example is a search component. When our user types, the state of the component should change, invoking a re-render. Every time we type, we want our component to perform an API-call. We can do this with the onChange handler of <input />. But what if our API-call depends on a value provided through the properties? And what if that value changes? We need to move our API-call to an update life-cycle method, as you can see below. function SearchComponent({ query }) { const [search, setSearch] = useState(''); useEffect(() => { myApiCall({ ...query, search }); }, [query, search]); const handleSearch = (e) => setSearch(e.target.value); return <input value={search} onChange={handleSearch} />; } Updates are not the only life-cycle methods. We also have the initialization of the component or the mounting of the component. Life-cycle methods trigger after rendering. This means that the initialization happens after the initial render. We have the life-cycle method for when a component is removed from the component tree. It is unmounted. Most times, the logic called in life-cycles methods can be shared with other life-cycle methods or with handlers in the UI. This means we are invoking actions in our life-cycle methods. Actions, as illustrated, can cause changes in the state. But, life-cycle methods are called after state changes. Calling state-changing actions might cause a re-rendering loop. Be cautious with these types of actions. The UI The UI describes what we want our users to interact with. These interactions, such as clicking on a button, trigger actions. It results from the rendering of our UI component. State changes or changing properties trigger the rendering. It is possible to trigger some 'side-effects' when this happens in the components' life-cycle methods. It is often possible to add logic to our rendering. Examples are conditional visibility or showing a list of data with varying sizes. To do so, we need logic, rendering logic. This be something simple as using a boolean value from the state, or use an array.map() function. But sometimes we must combining many values in our rendering logic or even use functions to help us. In such a case, I would take that logic outside the rendering function itself as much as possible. function MyModal ({ value }) { const [show, setShow] = useState(false); const showModal = show && value !== null; return ( <> <span>My component!</span> {showModal && <Modal onClose={handleShow}>...</Modal> </> ); } Conclusion When building our components, we can use various building blocks that work together. On both ends, we have interfaces for different audiences. We allow developers to interact with our UI components and change their behavior. On the other side, we have users interacting with our components. Different elements inside a component link these two interfaces together. This article was originally posted on kevtiq.co Posted on by: Kevin Pennekamp 👋 Hey, I'm Kevin. I'm a Dutch software engineer. I love CSS, front-end architecture, engineering and writing about it! Read Next My website now loads in less than 1 sec! Here's how I did it! ⚡ C M Pandey - 7 security tips for your React application. 🔐 Vaibhav Khulbe - How knowledgable you are about React? See common mistakes people make adam klein - Discussion nice article
https://dev.to/vycke/ui-component-anatomy-the-architecture-of-a-component-14pc
CC-MAIN-2020-34
en
refinedweb
Frequency of Repeated words in a string in Java In this Java tutorial, we are going to find the frequency of the repeated words in Java. In order to do this, we have taken a sentence and split it into strings and compare each string with succeeding strings to find the frequency of the current string. Thus we can count the occurrences of a word in a string in Java. How to count repeated words in a string in java Before jumping into the code snippet to count repeated words in a string in java take a look at these below things. What is split() string in Java? split() is used to split a string into substrings based on regular expression. Suppose, you have a sentence and you have to deal with each string(words) of the sentence then there we use split() The algorithm to find the frequency of Repeated word in a sentence in Java - First, we have entered a string and then using split() string method. We split the input string into substrings based on regular expression. - Using for loop we start checking from the first substring and check for strings those are equal to current string and incrementing the count. - We initialize the count from 1 because we have to include the current string along with succeeding repeating string. - We are printing those strings for which we are getting count more than one and the current string along with repeating string “-1”. So we will not count that string again. Java Code to count repeated words in a string in java import java. util. Scanner; public class codespeedy { public static void main(String[] args) { Scanner scan= new Scanner(System.in); String str=scan.nextLine(); String[] s=str.split(" "); int count=1; for(int i=0;i<=s.length;i++) { for(int j=i+1;j<s.length;j++) { if(s[i].equals(s[j]) && s[i]!="-1") { s[j]="-1"; count++; } } if(count>1 && s[i]!="-1") { System.out.println(s[i]+" "+count); s[i]="-1"; } count=1; } } } INPUT I am indian , I am proud to be indian . OUTPUT I 2 am 2 indian 2 So, as we see in input sentence I is repeating 2 times. am is repeating 2 times and Indian is repeating 2 times. So these repeating strings or we can say substrings of input string along with their frequency are our outputs. Nice explained Sir
https://www.codespeedy.com/frequency-of-repeated-words-in-a-string-in-java/
CC-MAIN-2020-34
en
refinedweb
More Dart — Literally Various extensions that make Dart a better place: cache.dartis a collection of different caching strategies and their expiry policy. char_matcher.dartis a model for character classes, their composition and operations on strings. collection.dartis a collection of collection types: bi-map, bit-list, multi-set, set and list multi-map, range, and string. iterable.dartis a collection of iterables and iterators. math.dartis a collection of common mathematical functions. number.dartprovides fractional, complex and quaternion arithmetic. ordering.darta fluent interface for building comparator functions. printer.darta fluent interface for configuring sophisticated formatter. tuple.darta generic sequence of typed values. And there are more to come .... Misc Installation Follow the installation instructions on dart packages. Import one or more of the packages into your Dart code using: import 'package:more/cache.dart'; import 'package:more/char_matcher.dart'; import 'package:more/collection.dart'; import 'package:more/hash.dart'; import 'package:more/iterable.dart'; import 'package:more/math.dart'; import 'package:more/number.dart'; import 'package:more/ordering.dart'; import 'package:more/printer.dart'; import 'package:more/tuple.dart'; Contributing The goal of the library is to provide a loose collection of carefully curated utilities that are not provided by the Dart standard library. All features must be well tested. New features must have significant advantages over alternatives, such as code reduction, readability improvement, speed increase, memory reduction, or improved accuracy. In case of doubt, consider filing a feature request before filing a pull request. History This library started in April 2013 as I was working through the puzzles of Project Euler and encountered some missing features in Dart. Over time the library grew and became more useful in many other places, so I created this reusable library. Some parts of this library are inspired by similar APIs in Google Guava (Google core libraries for Java) and Apache Commons (a repository of reusable Java components). License The MIT License, see LICENSE. Libraries - more.char_matcher - A first-class model of character classes, their composition and operations on strings. - more.collection - - more.collection.cache - - more.hash - The Jenkins hash function copied and adapted from 'package:math'. - more.iterable - Some fancy iterables and iterators. - more.math - A collection of common mathematical functions. - more.number - Support for fractional and complex arithmetic. - more.ordering - Provides a first-class model of comparators, their composition and operations on iterables. - more.printer - Provides a first-class model to convert object to strings using composition and highly configurable formatting primitives. - more.tuple - Tuple data type.
https://pub.dev/documentation/more/latest/
CC-MAIN-2020-34
en
refinedweb
ASP.NET Output Cache Provider for Azure Cache for Redis The Redis Output Cache Provider is an out-of-process storage mechanism for output cache data. This data is specifically for full HTTP responses (page output caching). The provider plugs into the new output cache provider extensibility point that was introduced in ASP.NET 4. For ASP.NET Core applications, read Response caching in ASP.NET Core. To use the Redis Output Cache Provider, first configure your cache, and then configure your ASP.NET application using the Redis Output Cache Provider NuGet package. This topic provides guidance on configuring your application to use the Redis Output Cache Provider. For more information about creating and configuring an Azure Cache for Redis instance, see Create a cache. Store ASP.NET page output in the cache To configure a client application in Visual Studio using the Azure Cache for Redis Session State NuGet package, click NuGet Package Manager, Package Manager Console from the Tools menu. Run the following command from the Package Manager Console window. Install-Package Microsoft.Web.RedisOutputCacheProvider The Redis Output Cache Provider NuGet package has a dependency on the StackExchange.Redis.StrongName package. If the StackExchange.Redis.StrongName package is not present in your project, it is installed. For more information about the Redis Output Cache Provider NuGet package, see the RedisOutputCacheProvider NuGet page. Note In addition to the strong-named StackExchange.Redis.StrongName package, there is also the StackExchange.Redis non-strong-named version. If your project is using the non-strong-named StackExchange.Redis version you must uninstall it; otherwise, you will experience naming conflicts in your project. For more information about these packages, see Configure .NET cache clients. The NuGet package downloads and adds the required assembly references and adds the following section into your web.config file. This section contains the required configuration for your ASP.NET application to use the Redis Output Cache Provider. <caching> <outputCache defaultProvider="MyRedisOutputCache"> <providers> <add name="MyRedisOutputCache" type="Microsoft.Web.Redis.RedisOutputCacheProvider" host="" accessKey="" ssl="true" /> </providers> </outputCache> </caching> Configure the attributes with the values from your cache blade in the Microsoft Azure portal, and configure the other values as desired. For instructions on accessing your cache properties, see Configure Azure Cache for Redis settings. Attribute notes Setting connectionString The value of connectionString is used as key to fetch the actual connection string from AppSettings, if such a string exists in AppSettings. If not found inside AppSettings, the value of connectionString will be used as key to fetch actual connection string from the web.config ConnectionString section, if that section exists. If the connection string does not exists in AppSettings or the web.config ConnectionString section, the literal value of connectionString will be used as the connection string when creating StackExchange.Redis.ConnectionMultiplexer. The following examples illustrate how connectionString is used. Example 1 <connectionStrings> <add name="MyRedisConnectionString" connectionString="mycache.redis.cache.windows.net:6380,password=actual access key,ssl=True,abortConnect=False" /> </connectionStrings> 2 <appSettings> <add key="MyRedisConnectionString" value="mycache.redis.cache.windows.net:6380,password=actual access key,ssl=True,abortConnect=False" /> </appSettings> 3 <sessionState mode="Custom" customProvider="MySessionStateStore"> <providers> <add type = "Microsoft.Web.Redis.RedisSessionStateProvide" name = "MySessionStateStore" connectionString = "mycache.redis.cache.windows.net:6380,password=actual access key,ssl=True,abortConnect=False"/> </providers> </sessionState> Notes on throwOnError Currently, if an error occurs during a session operation, the session state provider will throw an exception. This shuts down the application. This behavior has been modified in a way that supports the expectations of existing ASP.NET session state provider users while also providing the ability to act on exceptions, if desired. The default behavior still throws an exception when an error occurs, consistent with other ASP.NET session state providers; existing code should work the same as before. If you set throwOnError to false, then instead of throwing an exception when an error occurs, it will fail silently. To see if there was an error and, if so, discover what the exception was, check the static property Microsoft.Web.Redis.RedisSessionStateProvider.LastException. Notes on retryTimeoutInMilliseconds This provides some retry logic to simplify the case where some session operation should retry on failure because of things like network glitch, while also allowing you to control the retry timeout or opt out of retry entirely. If you set retryTimeoutInMilliseconds to a number, for example 2000, then when a session operation fails, it will retry for 2000 milliseconds before treating it as an error. So to have the session state provider to apply this retry logic, just configure the timeout. The first retry will happen after 20 milliseconds, which is sufficient in most cases when a network glitch happens. After that, it will retry every second until it times out. Right after the time out, it will retry one more time to make sure that it won’t cut off the timeout by (at most) one second. If you don’t think you need retry (for example, when you are running the Redis server on the same machine as your application) or if you want to handle the retry logic yourself, set retryTimeoutInMilliseconds to 0. About redisSerializerType By default, the serialization to store the values on Redis is done in a binary format provided by the BinaryFormatter class. Use redisSerializerType to specify the assembly qualified type name of a class that implements Microsoft.Web.Redis.ISerializer and has the custom logic to serialize and deserialize the values. For example, here is a Json serializer class using JSON.NET: namespace MyCompany.Redis { public class JsonSerializer : ISerializer { private static JsonSerializerSettings _settings = new JsonSerializerSettings() { TypeNameHandling = TypeNameHandling.All }; public byte[] Serialize(object data) { return Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(data, _settings)); } public object Deserialize(byte[] data) { if (data == null) { return null; } return JsonConvert.DeserializeObject(Encoding.UTF8.GetString(data), _settings); } } } Assuming this class is defined in an assembly with name MyCompanyDll, you can set the parameter redisSerializerType to use it: <sessionState mode="Custom" customProvider="MySessionStateStore"> <providers> <add type = "Microsoft.Web.Redis.RedisSessionStateProvider" name = "MySessionStateStore" redisSerializerType = "MyCompany.Redis.JsonSerializer,MyCompanyDll" ... /> </providers> </sessionState> Output cache directive Add an OutputCache directive to each page for which you wish to cache the output. <%@ OutputCache Duration="60" VaryByParam="*" %> In the previous example, the cached page data remains in the cache for 60 seconds, and a different version of the page is cached for each parameter combination. For more information about the OutputCache directive, see @OutputCache. Once these steps are performed, your application is configured to use the Redis Output Cache Provider. Third-party output cache providers Next steps Check out the ASP.NET Session State Provider for Azure Cache for Redis.
https://docs.microsoft.com/en-gb/azure/azure-cache-for-redis/cache-aspnet-output-cache-provider
CC-MAIN-2020-34
en
refinedweb
Forests A forest is a set of one or more domain trees that do not form a contiguous namespace. All trees in a forest share a common schema, configuration, and global catalog. All trees in a given forest exchange trust according to transitive hierarchical Kerberos trust relationships. Unlike trees, a forest does not require a distinct name. A forest exists as a set of cross-reference objects and Kerberos trust relationships recognized by the member trees. Trees in a forest form a hierarchy for the purposes of Kerberos trust; the tree name at the root of the trust tree refers to a given forest. The following figure shows a forest of noncontiguous namespaces.
https://docs.microsoft.com/en-us/windows/win32/ad/forests
CC-MAIN-2020-34
en
refinedweb
[ ] Doug Cutting commented on AVRO-1261: ------------------------------------ The no-arg constructor is also used to create instances when reading. Setting field defaults in this case may harm performance, especially when new copies of mutable default values are allocated each time. Similarly, setting field values to defaults when writing may harm performance when the application overwrites the default. In general, there are cases where it's probably fastest to create instances without defaults set. Currently the no-arg constructor serves this purpose and the Builder API supports the case where defaults are desired. Perhaps we could better document this? The generated no-arg constructor might include javadoc cautioning that no default values are set and that the builder should be used if they're desired? > Honor schema defaults with the Constructor in addition to the builders. > ----------------------------------------------------------------------- > > Key: AVRO-1261 > URL: > Project: Avro > Issue Type: Bug > Components: java > Affects Versions: 1.7.4 > Reporter: Christopher Conner > Priority: Minor > > As I understand it, currently if you want to utilize defaults in a schema, ie: > { > "namespace": "com.chris.test", > "type": "record", > "name": "CHRISTEST", > "doc": "Chris Test", > "fields": [ > {"name": "firstname", "type": "string", "default": "Chris"}, > {"name": "lastname", "type": "string", "default": "Conner"}, > {"name": "username", "type": "string", "default": "cconner"} > ] > } > Then I have to use the builders to create my objects. IE: > public class ChrisAvroTest { > public static void main(String[] args) throws Exception { > CHRISTEST person = CHRISTEST.newBuilder() > .build(); > System.out.println("person:" + person); > } > } > Is my understanding correct? Is it possible to make it so the default constructor as well? -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see:
http://mail-archives.apache.org/mod_mbox/avro-dev/201302.mbox/%3CJIRA.12634471.1361993873150.352469.1361995153565@arcas%3E
CC-MAIN-2017-51
en
refinedweb
Amazon EC2 offers the CloudWatch service to monitor cloud instances as well as load balancers. While this service comes at some cost (0,015$/hour/instance) it offers useful infrastructure metrics about the performance of your EC2 infrastructure. While there are commercial and free tools out there which provide this service, you might not want to invest in them or add another tool to your monitoring infrastructure. This post will provide step-by-step guidance on how to extend your monitoring solution to retrieve cloud metrics. The code sample is based on the free and open-source dynaTrace plugin for agent-less cloud monitoring. Some parts however have been simplified or omitted in tutorial. The major parts that are missing in this sample are dynamic discovery of EC2 instances and an algorithm which is a bit more reliable and accurate in retrieving monitoring data. Step 1 – Basic Infrastructure So let’s get started by setting up our basic infrastructure. First we need to download the Java Library for Amazon CloudWatch . Alternatively you can create your own Web Service stubs or simply use the REST interface. For simplicity we rely on the ready-to-use library provided by Amazon. The we create a Java class for our cloud monitor which implements the basic functionality we need. For brevity I will omit any imports needed – in Eclipse CTRL – SHIFT – O will do the job 🙂 public class CloudWatchMonitor { private static class MeasureSet{ public Calendar timestamp; public HashMap<String, Double> measures = new HashMap<String, Double>(); @Override public int compareTo(MeasureSet compare) { return (int) (timestamp.getTimeInMillis() - compare.timestamp.getTimeInMillis()); } public void setMeasure(String measureName, double value) { measures.put(measureName, value); } public Set<String> getMeasureNames() { return measures.keySet(); } public double getMeasure(String measureName) { return measures.get(measureName); } } private String instanceId; private AmazonCloudWatchClient cloudWatchClient; public static void main(String... args) throws Exception { CloudWatchMonitor monitor = new CloudWatchMonitor("<instanceName>", Credentials.accessKeyId, Credentials.secretAccessKey); for (;;) { MeasureSet measureSet = monitor.retrieveMeasureSet(measureNames); if (measureSet != null) { printMeasureSet(measureSet); } Thread.sleep(60000); } } public CloudWatchMonitor(String instanceId, String accessKeyId, String secretAccessKey) { cloudWatchClient = new AmazonCloudWatchClient(accessKeyId, secretAccessKey); this.instanceId = instanceId; } } So what have we done? We defined the CloudWatchMonitor which will contain all our logic. The main method simply queries every minute for new measures and prints them. We have chosen an interval of one minute as CloudWatch provides accuracy to one-minute intervals. Additionally, we defined the inner MeasureSet class which represent a set of measures collected for a given timestamp. We have used a HashMap to make the implementation more generic. The same is true for the retrieveMeasureSet method which takes the measures it retrieves as an input. Finally we defined the constructor of our monitor to create an instance of an AmazonCloudWatchClient – this is supplied by the Amazon library we use – and store the instanceID of the EC2 instance to monitor. The accessKeyID and secretAccessKey are the credentials provided for your Amazon EC2 account. Step 2 – Retrieve Monitoring Information Now we have to implement the retrieveMeasureSet method which is the core of our implementation. As there are quite a number of things we have to do, I will split the implementation of this method into several parts. We start by creating a GetMetricStatisticsRequest object which contains all information which data we are going to request. First we set the namespace of the metrics which in our case is AWS/EC2 (in case we want to retrieve load balancer metrics it would be AWS/ELB). Next we define which statistical values we want to retrieve. Then we define the period of the monitoring data. In our case this is one minute. If you want aggregated data you can specify any multiple of 60. Then we define which measure aggregates we want to retrieve. CloudWatch offers average, minimum and maximum values. As our aggregation will only contain one data point all of them will be the same. Therefore we only retrieve the average. public MeasureSet retrieveMeasureSet(ArrayList<String> measureNames) throws AmazonCloudWatchException, ParseException { GetMetricStatisticsRequest getMetricRequest = new GetMetricStatisticsRequest(); getMetricRequest.setNamespace("AWS/EC2"); getMetricRequest.setPeriod(60); ArrayList<String> stats = new ArrayList<String>(); stats.add("Average"); getMetricRequest.setStatistics(stats); ArrayList<Dimension> dimensions = new ArrayList<Dimension>(); dimensions.add(new Dimension("InstanceId", instanceId)); getMetricRequest.setDimensions(dimensions); Next we have to define the time frame for which we want to retrieve monitoring data. This code looks a bit complex simply because we have to do some number formatting here. CloudWatch expects the time in a special format and all date values in ISO 8601 format which use UTC and looks like this 2010-04-22T19:12:59Z. Therefore, we have to get the current UTC time and format the date strings in the proper format. We take the current time as the end time and the start time is 10 minutes back in the past. Why are we doing this? The reason is that CloudWatch data is written asynchronously and the latest metrics we will get will be a couple of minutes in the past. If we set the start time to one minute in the past we would not get any metrics. String dateFormatString = "%1$tY-%1$tm-%1$tdT%1tH:%1$tM:%1$tSZ"; GregorianCalendar calendar = new GregorianCalendar(TimeZone.getTimeZone("UTC")); calendar.add(GregorianCalendar.SECOND, -1 * calendar.get(GregorianCalendar.SECOND)); getMetricRequest.setEndTime(String.format(dateFormatString, calendar)); calendar.add(GregorianCalendar.MINUTE, -10); getMetricRequest.setStartTime(String.format(dateFormatString, calendar)); Additionally we have to add the following code to the constructor to calculate our UTC offset and define the timezone member field. TimeZone zone = TimeZone.getDefault(); timeOffset = zone.getOffset(new Date().getTime()) / (1000 * 3600); The next thing we have to do now is retrieve the actual metrics. As we will get more than one measurement we have to store them to later on select the latest measurement. The inconvenient part here is that the CloudWatch API does not allow us to retrieve more than one timer at once. Therefore we have to make a request for each metric we want to retrieve. Additionally you will notice some possibly cryptic date parsing and calculation. What we do here is parse the date string we get back from Amazon and create a calendar object. The tricky part is that we will have to add (or subtract) the offset of our current timezone to UTC. The formatter is defined as private DateFormat formatter = new SimpleDateFormat(“yyyy-MM-dd’T’HH:mm:SS’Z'”); HashMap<Long, MeasureSet> measureSets = new HashMap<Long, MeasureSet>(); for (String measureName : measureNames) { getMetricRequest.setMeasureName(measureName); GetMetricStatisticsResponse metricStatistics = cloudWatchClient.getMetricStatistics(getMetricRequest); if (metricStatistics.isSetGetMetricStatisticsResult()) { List<Datapoint> datapoints = metricStatistics.getGetMetricStatisticsResult().getDatapoints(); for (Datapoint point : datapoints) { Calendar cal = new GregorianCalendar(); cal.setTime(formatter.parse(point.getTimestamp())); cal.add(GregorianCalendar.HOUR, timeOffset); MeasureSet measureSet = measureSets.get(cal.getTimeInMillis()); if (measureSet == null) { measureSet = new MeasureSet(); measureSet.timestamp = cal; measureSets.put(cal.getTimeInMillis(), measureSet); } measureSet.setMeasure(measureName, point.getAverage()); } The last part is to retrieve the latest available measurements and return them. Therefore we will simply sort the measurements and return the latest one. ArrayList<MeasureSet> sortedMeasureSets = new ArrayList<MeasureSet>(measureSets.values()); if (sortedMeasureSets.size() == 0) { return null; } else { Collections.sort(sortedMeasureSets); return sortedMeasureSets.get(sortedMeasureSets.size() - 1); } In order to make sorting work we have to make the MeasuresSet implement comparable private static class MeasureSet implements Comparable<MeasureSet> { @Override public int compareTo(MeasureSet compare) { return (int) (timestamp.getTimeInMillis() - compare.timestamp.getTimeInMillis()); } // other code omitted } Step 3 – Printing the Results Last we have to print the results to the console. This code here is pretty straightforward and shown below. public static void printMeasureSet(MeasureSet measureSet) { System.out.println(String.format("%1$tY-%1$tm-%1$td %1tH:%1$tM:%1$tS", measureSet.timestamp)); for (String measureName : measureSet.getMeasureNames()) { System.out.println(measureName + ": " + measureSet.getMeasure(measureName)); } } Step 4 – Defining the Metrics to Retrieve Our code is now nearly complete the only thing we have to do is define which metrics we want to retrieve. We can either pass them as command-line parameters or explicitly specify them. CloudWatch supports the following parameters for EC2 instances: - CPUUtilization - NetworkIn - NetworkOut - DiskReadBytes - DiskWriteBytes - DiskReadOperations Step 5 – Visualizing and Storing the Data As you most likely do not want to look at the data on your console, the final step is to visualize and store the data. How to implement this depends on the monitoring infrastructure you are using- Below you can see a sample of how this data looks in dynaTrace. Conclusion Building your own CloudWatch monitoring is pretty easy. The metrics provided enable an initial understanding how your EC infrastructure is behaving. These metrics are also input to the Amazon EC2 Auto Scaling infrastructure. If you want to read more articles like this visit dynaTrace 2010 Application Performance Almanac
https://www.dynatrace.com/blog/week-14-building-your-own-amazon-cloudwatch-monitor-in-5-steps/
CC-MAIN-2017-51
en
refinedweb
How is a Class like a Microservice? At work we’re discussing moving some stuff to microservices. A lot of people said that they like “how microservices separate concerns while monoliths entangle them”. Others argued that “monoliths can be separated just fine with modules”, to which someone responded “it’s really hard to keep modules separate”. But “don’t you have the same problem with federated microservices?” etc etc etc. As the discussion went on I realized that we all actually wanted the same thing out of our architecture, but we weren’t able to make that thing explicit. We didn’t have a common language to talk about what we actually mean by “separation of concerns”. I’d like to try to pin that down here. Note that we’re going to be discussing microservices exclusively on separation of concerns: other qualities, like independent scaling, language freedom, etc are not a part of this essay. First, let’s abstract out the implementation of code. No classes, no services, no functions, just this black box we’re going to call… a box.1 A box is a pile of stuff. We don’t know or care how it’s organized. It just sits there. To make it useful we’ll add arrows, which represent interactions. We can have more than one box, of course. If an arrow starts in a box, it (the box) is doing the interacting, and if it ends in a box, that box is being interacted with. Arrows don’t have to start or end in boxes, which represents an interaction with the environment, like a person kicking it. With boxes and arrows we can draw pretty diagrams, but to actually make it useful for our purposes we need to add just one more thing. Every box has some number of circles on the ends, or ports. We’ll also put in the restriction that arrows must end on ports. This gives us enough structure to call diagrams valid or invalid based on whether or not they obey this restriction. It’s obvious that we can use our boxes-and-ports diagram to represent some code organization. It’s a little less obvious just how many things we can represent. We can say that the boxes are classes, the arrows are method calls, and the ports are public methods. That gives us OOP. Or we can say that the boxes are services, the arrows are HTTP calls, and the ports are APIs. That gives us a microservice architecture. Abstract far enough and classes and microservices have the same representation! From a design perspective, an OO monolith and a collection of microservices have the same topology. Can we use this to figure out why the monolith has a higher risk of coupling? We’ll add another property to our abstraction: we’ll make it fractal. This means you can zoom out of a diagram to see the containing box, and you can zoom in on an box to see the internal diagram. To make this more concrete, here’s a class: class Foo def bar; end private def baz; end end In the context of the program, Foo is a box with bar as a port. baz is not a port, since an outside class can’t call it. But in the context of Foo, both bar and baz are boxes with their function calls as ports. In the context of our software system, this ruby program is itself a box with some ports, most likely its I/O classes. Foo.bar may or may not be a port of the program box. And that program may be part of a larger server application. We can keep zooming in and out with arbitrary resolution, albeit limited usefulness. As a final rule, we’ll say that arrows cannot cross into a box. If box A contains box B, an arrow starting from outside A can end on a port of A but not a port of B. We’ll still allow an arrow to cross from B to outside A. We want to see what we can do with as few rules as possible. We have a way of describing encapsulation. We can hide a collection of code sharing a single domain inside a larger box that abstracts away the implementation. If done properly, we have an expectation (of sorts) that we don’t need to know what’s inside the outer box to use its ports. This is the idea of interface/implementation decoupling, or the “black box”. As long as the interface stays the same, we can make whatever changes we want to the implementation without having to QA our entire system. The box is our structure, the ports are our interface, and the insides are the implementation. If something outside a box can’t use a port inside of the box, we’ll say that box is/has a proper interface. In Ruby, the biggest box with a proper interface is the class.2 Modules are intended to be a bigger box, but they don’t have a proper interface. Rather, they only act as namespaces. You can make internal modules and classes private via private_constant, but this is both obscure and not very Rubyish.3 Rather, we generally ‘enforce’ proper interfaces via convention, telling people not to use “internal classes”. Convention, as we all know, is fragile: - The programmer has to know that you’re using a boxes-and-ports abstraction in the first place, and that you want only certain methods to be usable to outside classes. - You need some way to document what methods are the ports. Comments might work for small projects but they don’t scale, especially when you have a complex code architecture. - It’s easy for a programmer to accidentally violate the interface, putting extra pressure on code review to catch the error. - It’s easy for a harried programmer to intentionally violate the interface and convince everybody else it’s “just this once”. Eventually, the system becomes entangled and everything becomes terrible. A microservice architecture, on the other hand, has a bigger proper interface than the class: the microservice. The only way to interact with a microservice is to make an HTTP request.4 You can choose what methods the API exposes, thereby enforcing a proper interface. We can guarantee decoupling between the boxes in different services! This only goes so far, though. In the architecture discussion, one engineer warned against “federated microservices”. This was where services “knew about” other services and added API calls to their logic. In his experience this leads to implicit couplings and a “microlith” in practice, where the services all depended on each other. In many cases this happened because the “microservice” was the largest possible proper interface, but they needed a bigger box. There was no way to group clusters of services into larger boxes with proper interfaces. Not only that, but there was no way to describe this as a problem. Without a common notation for boxes, ports, and interfaces, there’s no easy to communicate what, exactly, it is you’re losing. And without that notation, it’s hard to see that one of the major claimed benefits of a microservice architecture, separation of concerns, is us using devops to compensate for a flaw in our programming language. This doesn’t mean that microservices are bad. What it does mean is that we need to be clear of what we want out of microservices. If we want microservices to improve scalability or use multiple languages, then they may be the right choice. But if we want them primarily for the separation of concerns, I think that’s a bad idea. Let’s tie this off with an exercise for the reader. Notation is really powerful. With just three components and a couple rules we were able to see symmetries between classes and services. By defining a specific subset of this abstraction we were able to show why Ruby’s module system encourages coupling. But there are all sorts of ways we can strengthen or weaken the model. Just a few examples: - Arrows have to end at ports. What if we placed restrictions on where they start? - For proper interfaces, arrows can’t cross into a box. What if arrows couldn’t cross out of a box? - Can two boxes intersect? Can a box be inside two different boxes at once? - What if we assigned boxes and ports different “colors” and used that to restrict arrows? Make some small tweaks to the model. What real-world programming structures does it represent now? What can we learn from it? Thanks to Alex Koppel for feedback. - I’m trying really hard to avoid any terms already used to describe code organization because all the good ones are overloaded into meaninglessness. [return] - Technically classes don’t have proper interfaces, because you can always use Object.sendto call private methods. In practice classes do have proper interfaces: use sendand your coworkers will throw rocks at you. [return] - While reading a draft of this article, Alex Koppel discovered private_constantin this article, which was the first he’d ever heard of it. He’s been professionally writing Ruby for almost a decade now. Nobody else in our company had heard of it, either. [return] - This isn’t the only way, of course. You can interact with message queues, pub/sub, a shared data store, etc. I’m simplifying here. [return]
https://hillelwayne.com/post/box-diagrams/
CC-MAIN-2017-51
en
refinedweb
QuantLib_DefaultProbKey man page DefaultProbKey Synopsis #include <ql/experimental/credit/defaultprobabilitykey.hpp> Inherited by NorthAmericaCorpDefaultKey. Public Member Functions DefaultProbKey (const std::vector< boost::shared_ptr< DefaultType > > &eventTypes, const Currency cur, Seniority sen) const Currency & currency () const Seniority seniority () const const std::vector< boost::shared_ptr< DefaultType > > & eventTypes () const Size size () const Protected Attributes std::vector< boost::shared_ptr< DefaultType > > eventTypes_ aggregation of event types for which the contract is sensitive. Currency obligationCurrency_ Currency of the bond and protection leg payment. Seniority seniority_ Reference bonds seniority. Detailed Description Used to index market implied credit curve probabilities. It is a proxy to the defaultable bond or class of bonds which determines the credit contract conditions. It aggregates the atomic default types in a group defining the contract conditions and which serves to index the probability curves calibrated to the market. Author Generated automatically by Doxygen for QuantLib from the source code. Referenced By The man pages DefaultProbKey(3), eventTypes(3), eventTypes_(3), obligationCurrency_(3), seniority(3) and seniority_(3) are aliases of QuantLib_DefaultProbKey(3).
https://www.mankier.com/3/QuantLib_DefaultProbKey
CC-MAIN-2017-51
en
refinedweb
[ aws . cloudwatch ]: Amazon CloudWatch retains metric data as follows: retrievable only with a resolution of 5 minutes. After 63 days, the data is further aggregated and is available with a resolution of 1 hour. CloudWatch started retaining 5-minute and 1-hour metric data as of July 9, 2016. For information about metrics and dimensions supported by AWS services, see the Amazon CloudWatch Metrics and dimensions Reference in the Amazon CloudWatch User Guide . See also: AWS API Documentation get-metric-statistics --namespace <value> --metric-name <value> [--dimensions <value>] --start-time <value> --end-time <value> --period <value> [--statistics <value>] [--extended-statistics <value>] [--unit <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>] --namespace (string) The namespace of the metric, with or without spaces. --metric-name (string) The name of the metric, with or without spaces. --dimensions (list) The dimensions.. For an example, see Dimension Combinations in the Amazon CloudWatch User Guide . For more information about specifying dimensions, see Publishing Metrics in the Amazon CloudWatch User Guide . Shorthand Syntax: Name=string,Value=string ... JSON Syntax: [ { "Name": "string", "Value": "string" } ... ] --start-time (timestamp) The time stamp that determines the first data point to return. Start times are evaluated relative to the time that CloudWatch receives the request. The value specified is inclusive; results include data points with the specified time stamp. The time stamp must be in ISO 8601 UTC format (for example, 2016-10-03T23:00:00Z). CloudWatch rounds the specified time stamp as follows: - Start time less than 15 days ago - Round down to the nearest whole minute. For example, 12:32:34 is rounded down to 12:32:00. - Start time between 15 and 63 days ago - Round down to the nearest 5-minute clock interval. For example, 12:32:34 is rounded down to 12:30:00. - Start time greater than 63 days ago - Round down to the nearest 1-hour clock interval. For example, 12:32:34 is rounded down to 12:00:00. If you set period to 5, 10, or 30, the start time of your request is rounded down to the nearest time that corresponds to even 5-, 10-, or 30-second divisions of a minute. For example, if you make a query at (HH:mm:ss) 01:05:23 for the previous 10-second period, the start time of your request is rounded down and you receive data from 01:05:10 to 01:05:20. If you make a query at 15:07:17 for the previous 5 minutes of data, using a period of 5 seconds, you receive data timestamped between 15:02:15 and 15:07:15. --end-time (timestamp) The time stamp that determines the last data point to return. The value specified is exclusive; results include data points up to the specified time stamp. The time stamp must be in ISO 8601 UTC format (for example, 2016-10-10T23:00:00Z). --period (integer)-metric-data call that includes a StorageResolution of 1 second. If the StartTime parameter specifies a time stamp that is greater than 3 hours ago, you must specify the period as follows or no data points in that time range is returned: - Start time between 3 hours and 15 days ago - Use a multiple of 60 seconds (1 minute). - Start time between 15 and 63 days ago - Use a multiple of 300 seconds (5 minutes). - Start time greater than 63 days ago - Use a multiple of 3600 seconds (1 hour). --statistics (list) The metric statistics, other than percentile. For percentile statistics, use extended-statistics . When calling get-metric-statistics , you must specify either statistics or extended-statistics , but not both. Syntax: "string" "string" ... Where valid values are: SampleCount Average Sum Minimum Maximum --extended-statistics (list) The percentile statistics. Specify values between p0.0 and p100. When calling get-metric-statistics , you must specify either statistics or extended-statistics , but not both. Syntax: "string" "string" ... --unit (string) The unit for a given metric. Metrics may be reported in multiple units. Not supplying a unit results in all units being returned. If the metric only ever reports one unit, specifying a unit has no effect. get the CPU utilization per EC2 instance The following example uses the get-metric-statistics command to get the CPU utilization for an EC2 instance with the ID i-abcdef. aws cloudwatch get-metric-statistics --metric-name CPUUtilization --start-time 2014-04-08T23:18:00 --end-time 2014-04-09T23:18:00 --period 3600 --namespace AWS/EC2 --statistics Maximum --dimensions Name=InstanceId,Value=i-abcdef Output: { "Datapoints": [ { "Timestamp": "2014-04-09T11:18:00Z", "Maximum": 44.79, "Unit": "Percent" }, { "Timestamp": "2014-04-09T20:18:00Z", "Maximum": 47.92, "Unit": "Percent" }, { "Timestamp": "2014-04-09T19:18:00Z", "Maximum": 50.85, "Unit": "Percent" }, { "Timestamp": "2014-04-09T09:18:00Z", "Maximum": 47.92, "Unit": "Percent" }, { "Timestamp": "2014-04-09T03:18:00Z", "Maximum": 76.84, "Unit": "Percent" }, { "Timestamp": "2014-04-09T21:18:00Z", "Maximum": 48.96, "Unit": "Percent" }, { "Timestamp": "2014-04-09T14:18:00Z", "Maximum": 47.92, "Unit": "Percent" }, { "Timestamp": "2014-04-09T08:18:00Z", "Maximum": 47.92, "Unit": "Percent" }, { "Timestamp": "2014-04-09T16:18:00Z", "Maximum": 45.55, "Unit": "Percent" }, { "Timestamp": "2014-04-09T06:18:00Z", "Maximum": 47.92, "Unit": "Percent" }, { "Timestamp": "2014-04-09T13:18:00Z", "Maximum": 45.08, "Unit": "Percent" }, { "Timestamp": "2014-04-09T05:18:00Z", "Maximum": 47.92, "Unit": "Percent" }, { "Timestamp": "2014-04-09T18:18:00Z", "Maximum": 46.88, "Unit": "Percent" }, { "Timestamp": "2014-04-09T17:18:00Z", "Maximum": 52.08, "Unit": "Percent" }, { "Timestamp": "2014-04-09T07:18:00Z", "Maximum": 47.92, "Unit": "Percent" }, { "Timestamp": "2014-04-09T02:18:00Z", "Maximum": 51.23, "Unit": "Percent" }, { "Timestamp": "2014-04-09T12:18:00Z", "Maximum": 47.67, "Unit": "Percent" }, { "Timestamp": "2014-04-08T23:18:00Z", "Maximum": 46.88, "Unit": "Percent" }, { "Timestamp": "2014-04-09T10:18:00Z", "Maximum": 51.91, "Unit": "Percent" }, { "Timestamp": "2014-04-09T04:18:00Z", "Maximum": 47.13, "Unit": "Percent" }, { "Timestamp": "2014-04-09T15:18:00Z", "Maximum": 48.96, "Unit": "Percent" }, { "Timestamp": "2014-04-09T00:18:00Z", "Maximum": 48.16, "Unit": "Percent" }, { "Timestamp": "2014-04-09T01:18:00Z", "Maximum": 49.18, "Unit": "Percent" } ], "Label": "CPUUtilization" } Specifying multiple dimensions The following example illustrates how to specify multiple dimensions. Each dimension is specified as a Name/Value pair, with a comma between the name and the value. Multiple dimensions are separated by a space. If a single metric includes multiple dimensions, you must specify a value for every defined dimension. For more examples using the get-metric-statistics command, see `Get Statistics for a Metric`__ in the Amazon CloudWatch Developer Guide. aws cloudwatch get-metric-statistics --metric-name Buffers --namespace MyNameSpace --dimensions Name=InstanceID,Value=i-abcdef Name=InstanceType,Value=m1.small --start-time 2016-10-15T04:00:00Z --end-time 2016-10-19T07:00:00Z --statistics Average --period 60 Label -> (string) A label for the specified metric. Datapoints -> (list) The data points for the specified metric. (structure) Encapsulates the statistical data that CloudWatch computes from metric data. Timestamp -> (timestamp)The time stamp used for the data point. SampleCount -> (double)The number of metric values that contributed to the aggregate value of this data point. Average -> (double)The average of the metric values that correspond to the data point. Sum -> (double)The sum of the metric values for the data point. Minimum -> (double)The minimum metric value for the data point. Maximum -> (double)The maximum metric value for the data point. Unit -> (string)The standard unit for the data point. ExtendedStatistics -> (map) The percentile statistic for the data point. key -> (string) value -> (double)
http://docs.aws.amazon.com/cli/latest/reference/cloudwatch/get-metric-statistics.html
CC-MAIN-2017-51
en
refinedweb
NinjaboyGames asked 5 days ago in Help Room 0 Replies 0 Votes Larry-Dietz answered 1 Reply Buckslice edited Dec 2, '17 c#·array·classes·lists·element korimako edited Nov 21, '17 inheritance·classes 1 Votes Maylina answered Nov 19, '17 c#·instantiate·classes·design 3 Replies 2 Votes Bunny83 commented Nov 15, '17 classes·values·passing 4 Replies remy_rm edited Oct 31, '17 c#·classes·script loading·derived-class MakabreGaming commented Oct 29, '17 c#·variable·inspector·inheritance·classes Bunny83 edited Oct 21, '17 in Help Room Litleck edited Oct 19, '17 c#·variables·particlesystem·classes Oct 18, '17 in Help Room -1 Votes PizzaPie edited Oct 18, '17 classes·namespace amirl_unity commented Oct 14, '17 c#·classes tmalhassan edited Oct 11, '17 static·classes NeonTheCoder answered Oct 6, '17 c#·editor·editor-scripting·scriptableobject·classes NearlyANinja commented Oct 2, '17 in Help Room Mekanofreak answered Sep 25, '17 ai·classes·data storage 2 Replies Paulk33 commented Sep 23, '17 unity5·java·convert·classes Xepherys commented Sep 2, '17 gameobject·classes·inventory system·items·oop SiatUnity asked Aug 24, '17 in Help Room OffhandBurrito asked EDKPDKGMS asked Aug 18, '17 in Help Room Niboruga asked Aug 13, '17 objects·classes·object reference·class object·class instance SaraCecilia published Aug 11, '17 in Help Room gumuselkenan commented Aug 9, '17 in Help Room GeorgeMincila asked Aug 8, '17 in Help Room WideEyeNow published Jul 19, '17 c#·vector3·classes·lists·interface megabrobro asked Jul 17, '17 in Help Room Jwizard93 commented Jul 5, '17 c#·array·list·class·classes james82 commented Jul 5, '17 classes·api.
https://answers.unity.com/topics/classes.html
CC-MAIN-2017-51
en
refinedweb
Payment systems in Hong Kong - Cleopatra Hawkins - 2 years ago - Views: Transcription 1 Payment systems in Hong Kong 2 3 Table of contents List of abbreviations Institutional aspects Legal and regulatory framework Payment instruments and systems Securities settlement Institutions Providers of payment services Providers of securities services Other service providers Role of other private and public sector bodies Payment media Cash Cheques Direct credit transfers Direct debit transfers Payment cards Credit cards Debit cards Other cards - stored value cards Electronic non-pos debit instructions Interbank settlement systems The real-time gross settlement (RTGS) system for HKD Ownership Participation Types of transactions Operation of the system Settlement Risks and risk management measures Pricing policies Governance The real-time gross settlement (RTGS) system for USD Ownership Participation Types of transactions Operation of the system Settlement Risks and risk management measures CPSS - Red Book 4 3.2.7 Pricing policies Governance Major projects and policies being implemented Cross-border joint cheque clearing facility with cities in mainland China Review of retail payment services in Hong Kong Industry-wide cheque imaging and truncation EUR clearing system in Hong Kong Securities settlement systems Exchange Fund Bills and Notes and other debt securities Trading Pre-settlement Settlement Equities Trading Pre-settlement Settlement Major projects and policies being implemented CMU modernisation and two-way link with Euroclear Implementation of CCASS/ Role of the HKMA Provision of settlement accounts Operation of payment systems Operation of securities settlement systems Oversight Other roles CPSS - Red Book 5 List of abbreviations AI AMS CCASS CCPMP CHATS CMT CMU CMUP CNS DTC DTCA EFBN EPS FOP GEM HKAB HKEx HKFE HKICL HKMA HKNPL HKSCC IFTP LAW LB MM MPC PPS RD RLB SAP SEHK SFC SI authorised institution Automatic Order Matching and Execution System Central Clearing and Settlement System Cross Currency Payment Matching Processor Clearing House Automated Transfer System CMU user terminal Central Moneymarkets Unit Central Moneymarkets Unit Processor continuous net settlement deposit-taking company DTC Association Exchange Fund Bill and Note Easy Pay System free of payment Growth Enterprise Market Hong Kong Association of Banks Hong Kong Exchanges and Clearing Limited Hong Kong Futures Exchange Hong Kong Interbank Clearing Limited Hong Kong Monetary Authority Hong Kong Note Printing Limited Hong Kong Securities Clearing Company Limited Interbank Fund Transfer Processor liquidity adjustment window licensed bank market-maker multipurpose stored value card Payment by Phone Service recognised dealer restricted licence bank Settlement Account Processor Stock Exchange of Hong Kong Securities and Futures Commission settlement instruction CPSS - Red Book 6 7 1. Institutional aspects 1.1 Legal and regulatory framework There is no specific legislation on payment systems in Hong Kong. However, there are a number of laws that have a direct bearing on various payment instruments and institutions. The relevant ordinances are set out in the following sections Payment instruments and systems Section 3A(1) of the Exchange Fund Ordinance provides, inter alia, that the Financial Secretary may, by notice, require an authorised institution (see Section ) to open an account ( settlement account ) with the Hong Kong Monetary Authority (HKMA) and to maintain and operate such a settlement account on such terms and conditions as the Financial Secretary considers appropriate. The Financial Secretary has delegated this power to the HKMA. The Legal Tender Notes Issue Ordinance regulates the issue of banknotes and currency notes. Under the Ordinance, the banknotes issued by the Bank of China, Standard Chartered Bank and the Hongkong and Shanghai Banking Corporation Limited are legal tender within Hong Kong. The legal definition of a cheque is laid down in the Bills of Exchange Ordinance. According to Section 73(1) of the Ordinance, a cheque is a bill of exchange drawn on a banker and payable on demand. A stored value card is defined in the Banking Ordinance as a card (or similar) on which data may be stored in electronic, magnetic or optical form and for or in relation to which an individual pays a sum of money to the issuer of the card, directly or indirectly, in exchange for the storage of the value of that money, in whole or in part, on the card and an undertaking by the issuer to supply goods or services itself or that a third party will supply goods and services (including money or money s worth) on production of that card. A stored value card is a purse-like payment device, the usage of which does not require user identity verification or bank account validation, and the value stored on the card is instantaneously deducted at the point of sale (POS). There are two types of stored value card: (a) a single purpose card, which is not subject to the regulatory regime under the Banking Ordinance; and (b) a multipurpose card, with the issuer s undertaking that, on presentation of the card to the issuer or a third party, the issuer or the third party will supply goods or services (including money or money s worth to cater for the redemption of unused value). In view of the increasing interest in the issue of multipurpose stored value cards with their potential to substitute to a significant degree for cash and cheques, the Banking (Amendment) Ordinance 1997 was enacted to empower the HKMA to regulate the issue of these cards. The Ordinance provides that only licensed banks in Hong Kong should have the ability to issue multipurpose cards that are unrestricted in terms of the goods and services which they can be used to purchase. The objectives are to maintain the stability of the payment system and provide a measure of protection to cardholders. A non-bank service provider may, however, be authorised as a deposit-taking company whose principal business is to issue or facilitate the issue of multipurpose cards which are more limited in scope of usage. Furthermore, the amended Ordinance provides for the HKMA to grant exemption from the approval process to certain types of multipurpose cards where the risk to the payment system and to cardholders is considered to be slight. In developing this regulatory framework, the HKMA seeks to strike a balance between the need to maintain the stability of the payment system (and thus of the financial system as a whole) and the desirability of not stifling developments which would promote competition and innovation. The Electronic Transactions Ordinance was enacted on 7 January 2000 to facilitate the use of electronic transactions for commercial and other purposes. It gives electronic records and digital signatures used in electronic transactions the same legal status as that of their paper-based counterparts. The provisions for legal recognition of electronic records and digital signatures in relation CPSS - Red Book 8 to rules of law and admissibility of electronic records as evidence in court came into operation on 7 April Securities settlement The Securities and Futures Commission (SFC) administers Hong Kong s securities and futures legislation. Section 4(1)(b) of the Securities and Futures Commission Ordinance provides that the Commission shall have the function of ensuring that the provisions of the relevant ordinances, and the provisions of any other ordinance so far as they relate to securities, futures contracts or property investment arrangements, are complied with. The Securities and Futures Commission Ordinance and nine other securities and futures related ordinances were consolidated into a new Securities and Futures Ordinance, which was passed by the Legislative Council on 13 March 2002 and is expected to come into operation in early The SFC has oversight responsibility for the Hong Kong Exchanges and Clearing Limited (HKEx) and its subsidiaries, namely the Stock Exchange of Hong Kong (SEHK), the Hong Kong Futures Exchange (HKFE) and their clearing houses. It also has front-line regulatory responsibility for takeovers and mergers activity, regulation of offers of investment products, and the enforcement of laws regarding market malpractices. Since 6 March 2000, the SFC has taken over the front-line regulation of all exchange participants from the two exchanges. As for listed companies, SEHK is the front-line regulator for all companies listed on the Main Board and the Growth Enterprise Market (GEM), except the HKEx, which is regulated by the SFC. In December 2000, the SFC approved a memorandum of understanding (MOU) with the HKEx. The MOU covers matters relating to the supervision of exchange participants, market surveillance and oversight of the activities of the HKEx, the two exchanges and clearing houses, including their rulemaking powers. Among its other regulatory responsibilities in relation to the HKEx, the SFC s Enforcement Division monitors trading on the two exchanges with a view to detecting and understanding unusual price and volume movements, and conducts investigations if necessary; the Intermediaries and Investment Products Division conducts routine inspection visits of exchange participants (as well as other intermediaries who are not exchange participants) to ensure that intermediaries are complying with regulatory requirements; and the Supervision of Markets Division oversees the operations of the HKEx and its subsidiaries to ensure the sound functioning of their trading, settlement and operational systems. 1.2 Institutions Providers of payment services Banks Hong Kong maintains a three-tier system of deposit-taking institutions, namely licensed banks, restricted licence banks and deposit-taking companies. They are collectively known as authorised institutions (AIs) under the Banking Ordinance. Under the Banking Ordinance, the HKMA is the authority responsible for the authorisation, suspension and revocation of all three types of AIs. Checks and balances are provided in the Banking Ordinance with the requirement that the HKMA consult the Financial Secretary on important authorisation decisions, such as suspension or revocation. The Chief Executive-in-Council is the appellate body for hearing appeals against decisions made by the HKMA. (a) Licensed banks (LBs) - only LBs may operate current and savings accounts, accept deposits of any size and maturity from the public and pay or collect cheques drawn by or paid in by customers. LBs are required to open and maintain an account with the HKMA for the settlement of HKD. In other words, they have direct access to the HKD real-time gross settlement (RTGS) interbank payment system. Therefore LBs are the major providers of payment services in Hong Kong. (b) Restricted licence banks (RLBs) - RLBs principally engage in merchant banking and capital market activities. They may take call, notice or time deposits of any maturity of HKD 500, CPSS - Red Book 9 (approximately USD 64,103) and above. In May 2000, legal arrangements were finalised to allow RLBs with a clear business need to join the RTGS interbank payment system for the settlement of HKD. However, they are not allowed to participate in the clearing of cheques given the restriction on their extending current accounts to customers. (c) Deposit-taking companies (DTCs) - DTCs are mostly owned by, or otherwise associated with, banks. They engage in a range of specialised activities, including consumer finance and securities business. These companies may take deposits of HKD 100,000 (approximately USD 12,821) or above with an original term to maturity, or call or notice period, of at least three months. DTCs do not have direct access to the HKD RTGS interbank payment system. Hong Kong has one of the highest concentrations of banking institutions in the world. At the end of December 2001, there were 147 LBs, 49 RLBs and 54 DTCs in business. There are, in addition, 111 representative offices of overseas banks in Hong Kong. A local representative office is not allowed to engage in any banking business. Its role is confined mainly to liaison work between the bank and its customers in Hong Kong. AIs have to comply with the provisions of the Banking Ordinance, which, among other things, require them to maintain adequate liquidity and capital adequacy ratios, to submit periodic returns to the HKMA on required financial information, to adhere to limitations on loans to any one customer or to directors and employees, and to seek approval for the appointment of controllers, directors and senior management. In May 2000, the HKMA issued a Guideline on the Authorisation of Virtual Banks under Section 16(10) of the Banking Ordinance. The Guideline sets out the principles that the HKMA will take into account in deciding whether to authorise virtual banks. The main principle is that the HKMA will not object to the establishment of virtual banks in Hong Kong provided that they can satisfy the same prudential criteria that apply to conventional banks. In line with existing authorisation policies for conventional banks, a locally incorporated virtual bank cannot be newly established other than through the conversion of an existing locally incorporated AI. Furthermore, local virtual banks should be at least 50% owned by a well established bank or other supervised financial institutions. Applicants incorporated overseas must come from countries with an established regulatory framework for electronic banking. In addition, they must have total assets of more than USD 16 billion Hong Kong Interbank Clearing Limited (HKICL) HKICL is a private company jointly owned by the HKMA and the Hong Kong Association of Banks (HKAB). HKICL was established in May 1995 to take over in phases the HKD clearing functions provided by the former Management Bank of the Clearing House, the Hongkong and Shanghai Banking Corporation Limited (HSBC). This process was completed in April The principal activity of HKICL is therefore the provision of interbank clearing services to banks in Hong Kong. In March 2000, the HKMA appointed HSBC as the settlement institution for the USD clearing system in Hong Kong. In this connection, HKICL has also taken up the role of clearing operator for HSBC, responsible for the development and operation of the USD clearing system. In July 2002, the HKMA appointed Standard Chartered Bank as the settlement institution for the EUR clearing system in Hong Kong. As in the cases of the HKD and USD clearing systems, Standard Chartered Bank also appointed HKICL as its clearing agent. Apart from payment systems, HKICL also operates the computer system of the Central Moneymarkets Unit (CMU), a central clearing and settlement system for public and private debt securities, on behalf of the HKMA Providers of securities services Licensed dealers Broadly speaking, any business entity which carries on or presents itself as carrying on a business in Hong Kong of dealing in securities, trading in commodity futures contracts, giving advice on investment in securities or futures contracts, providing margin financing for the trading of securities listed on a stock exchange, or leveraged foreign exchange trading is required to be registered with the CPSS - Red Book 10 SFC as a dealer, an adviser, a securities margin financier or a leveraged foreign exchange trader, as the case may be. Licensed intermediaries must meet a number of ongoing requirements, including the maintenance of adequate liquid capital, the maintenance of proper books and records, the safe custody of customers securities, the segregation of investors monies and the submission by registered intermediaries and their auditors of returns and reports. The licensing requirements relating to securities dealers and investment advisers are established in Part VI of the Securities Ordinance. The licensing requirements relating to futures dealers and advisers are established in Part IV of the Commodities Trading Ordinance. The licensing requirements relating to securities margin financiers are established in Part XA of the Securities Ordinance. The licensing requirements relating to leveraged foreign exchange trading are established in the Leveraged Foreign Exchange Trading Ordinance Exempt dealers An AI within the meaning of Section 2(1) of the Banking Ordinance is exempt from the licensing requirement. In other words, LBs, RLBs and DTCs are exempt dealers which also offer a wide range of securities services Hong Kong Securities Clearing Company Limited (HKSCC) HKSCC was incorporated in May Pursuant to the Exchanges and Clearing Houses (Merger) Ordinance, HKSCC was converted from a company limited by guarantee to a company limited by shares and its constitution was amended accordingly. Following an allotment of shares prescribed by the Ordinance, HKSCC became a wholly owned subsidiary of the HKEx in HKSCC created the Central Clearing and Settlement System (CCASS) in 1992, and became the central counterparty providing book-entry settlement in securities among its participants, either free of, or against, payment. Only securities listed or to be listed on the Exchange will be accepted as eligible securities for settlement in CCASS and only brokers, clearing agencies, custodians, stock lenders and stock pledgees based in Hong Kong and such other persons as HKSCC may determine from time to time in accordance with the rules will be accepted as participants. HKSCC may from time to time accept other categories of securities, whether or not listed on the Exchange, as eligible securities and may admit other categories of participants. HKSCC also offers nominee and company registrar services. Building upon the capability of the RTGS systems in Hong Kong, the HKMA has extended the delivery versus payment (DVP) facility for debt securities transactions to shares transactions. A link between HKICL and CCASS was set up in May 1998 to provide a DVP facility for shares denominated in HKD in order to reduce settlement risks and improve settlement efficiency. Following the implementation of the USD clearing system in Hong Kong, the DVP facility was extended to shares transactions denominated in USD in August Central Moneymarkets Unit (CMU) The CMU, established in 1990, is operated by the HKMA to provide computerised clearing and settlement facilities for Exchange Fund Bills and Notes (EFBNs). In December 1993, the HKMA extended the service to other HKD debt securities. It offers an efficient, safe and convenient clearing and custodian system for HKD debt instruments. In December 1994, the CMU established a one-way link to such international clearing systems as Euroclear and Clearstream. This helps to promote HKD debt securities to overseas investors, who can make use of this link to participate in the HKD debt market. The CMU also set up a network of bilateral linkages with the central securities depositories (CSDs) in the Asia-Pacific region, including Australia (December 1997), New Zealand (April 1998) and South Korea (September 1999), to facilitate crossborder trades in securities in the region. In December 1996, a seamless interface between the CMU and the newly launched HKD RTGS interbank payment system was established. This enables the CMU system to provide for its members real-time and end-of-day DVP services in HKD-denominated securities. Through this interface, banks in the HKD RTGS system are able to obtain HKD liquidity from the HKMA to facilitate payment flows through intraday and overnight repos of EFBNs. 188 CPSS - Red Book 11 Following the implementation of the USD RTGS system in Hong Kong, the CMU system established another seamless interface with the USD RTGS system in December With this system interface in place, the CMU provides its members with real-time and end-of-day DVP settlement of USD-denominated debt securities. Furthermore, this interface enables automatic intraday repos, which helps to provide intraday USD liquidity to the participants of the USD RTGS system. All debt instruments cleared through the CMU are either immobilised or dematerialised, and transfer of title is effected in computer book-entry form Other service providers Credit/charge card operators Visa and MasterCard are the two largest credit card operators in Hong Kong. They provide the international network linkages through which the merchants, merchant acquirers and card issuers are connected. American Express and Diners Club International mainly operate in their charge card business on a standalone or vertical integration basis. That is, they perform the multiple roles of network provider, card issuer and merchant acquirer. In the case of JCB Card, apart from issuing cards and acquiring merchants on its own, it also receives membership royalty fees from other institutions for the issuance of JCB Cards in Hong Kong Other network operators Electronic Payment Services Company (HK) Ltd (EPSCO) EPSCO is the only network provider for POS debit card services, namely Easy Pay System (EPS; see Section 2.5.2). EPSCO also offers non-pos debit facilities, including Payment by Phone Service (PPS) and ETC bill payment. Founded in 1985, EPSCO is now co-owned by 36 member banks in Hong Kong. The 36 member banks do not issue separate cards for payment services because the functions are typically included in the bank automated teller machine (ATM) cards and credit cards with ATM functions. At present, there are about 10,000 participating retailers signed up for the EPS payment services. EPSCO acts on behalf of its 36 member banks as they do not negotiate business with the retailers on their own. EPSCO is therefore the sole merchant acquirer in the market to provide the POS terminals and payment processing services to the participating retailers. EPSCO provides services to all merchant applicants on a uniform basis regardless of their size, location and business volume. It provides the terminals for free and does not impose any minimum service charge on the participating retailers. Joint Electronic Teller Services Limited (JETCO) JETCO was first established in 1982 by a group of five banks. Its core business is to operate an interbank ATM network. Customers can access their accounts through JETCO s network of more than 1,600 ATMs in Hong Kong, Macau and two cities in mainland China (Zhuhai and Shenzhen). JETCO also provides electronic non-pos debit instruction services. Octopus Cards Limited (OCL) OCL, formerly known as Creative Star Limited, issues Octopus Card, which is a stored value card used primarily for the payment of transport services provided by the five transport operators that jointly own OCL (see Section 2.5.3) Role of other private and public sector bodies The Hong Kong Association of Banks (HKAB) The HKAB is a statutory body established in 1981 under the Hong Kong Association of Banks Ordinance to replace the Hong Kong Exchange Banks Association. All LBs are required to be members of the HKAB and to observe the rules set by the Association under the Ordinance. CPSS - Red Book 12 The main objectives of the HKAB, among others, are to further the interests of banks, to draw up rules for the conduct of the business of banking, to act as an advisory body to its members in matters concerning the business of banking, and to provide facilities for the clearing of cheques and other instruments DTC Association (DTCA) Established in 1981 under the Companies Ordinance, the DTCA was originally known as the Hong Kong Association of Restricted Licence Banks and Deposit-Taking Companies. Any RLB or DTC may join the DTCA. The objectives of the DTCA include furthering the general interests of RLBs and DTCs, serving as an intermediary between the government and members, and acting as a consultative body to the government on matters concerning the business of taking deposits in Hong Kong. 2. Payment media 2.1 Cash Cash is still by far the most common means of retail payment in Hong Kong. At the end of 2001, HKD notes and coins in circulation amounted to HKD 102 billion, representing 7.9% of GDP. Compared with the G10 economies, cash usage in Hong Kong is high, similar to Japan and Switzerland. Despite the significant growth of card-based or electronic means of retail payment in the past decade, the currency/gdp ratio in Hong Kong remains high, which could be mainly due to the significant amount of HKD notes and coins circulating in mainland China and Macau. The government, through the HKMA, has given authorisation to three commercial banks, HSBC, Standard Chartered Bank and the Bank of China, to issue currency notes in Hong Kong. Authorisation is accompanied by a set of terms and conditions agreed between the government and the three note-issuing banks. Banknotes are issued by the three banks, or redeemed, against payment to, or from, the government s Exchange Fund in USD, at the specified rate of USD 1 to HKD 7.80 under the linked exchange rate system. The note-issuing banks deposit the USD backing with the Exchange Fund in exchange for certificates of indebtedness, which are redeemed by the Exchange Fund upon redemption of issued banknotes. Hong Kong Note Printing Limited (HKNPL) prints the banknotes issued by the three commercial banks in Hong Kong. The government acquired the banknote printing plant with funds drawn from the Exchange Fund in April Subsequently, the three note-issuing banks each acquired 10% of HKNPL s issued shares from the government and became minority shareholders. Currency notes in everyday circulation are HKD 10, HKD 20, HKD 50, HKD 100, HKD 500 and HKD 1,000. The HKD 10 notes are gradually being phased out and replaced by the HKD 10 coin, a process which began in November The government issues coins of HKD 10, HKD 5, HKD 2, HKD 1, 50 cents, 20 cents and 10 cents. Until 1992 these coins were embossed with the Queen s Head. In 1993, a programme was initiated to replace the Queen s Head series with a new series depicting the Bauhinia flower. The first Bauhinia coins, the HKD 5 and HKD 2 coins, were issued in January New HKD 1, 50 cent and 20 cent coins were issued in October 1993, and a new 10 cent coin in May The HKD 10 coin, the last of the Bauhinia series of coins, was issued in November Since the beginning of the coin replacement programme in 1993, about 549 million coins of Queen s Head design have been withdrawn from circulation. The Queen s Head coins remain legal tender while the replacement programme continues. In early autumn 2002 a new HKD 10 note started to circulate in Hong Kong. The new note is issued by the government in recognition of continuing demand among the public for a HKD 10 note in addition to the HKD 10 coin. The HKD 10 coin and the existing HKD 10 notes will remain in circulation. Commemorative coins were issued to mark important events such as the establishment of the Hong Kong Special Administrative Region on 1 July 1997 and the grand opening of Hong Kong International Airport in July CPSS - Red Book 13 2.2 Cheques Corporations or individuals in Hong Kong often use cheques as a means of payment or funds transfer. As a means of retail payment, cheques are also often used in transactions where debit cards or credit cards are not accepted (eg for payment of large-value items such as motor cars or payment of deposit when purchasing property). Cheques are also used for some smaller-value items such as utility bills, but alternative electronic means of payment have become increasingly popular. The cheque clearing system in Hong Kong is operated by HKICL and overseen by the HKMA. Interbank money settlement of cheques in net terms takes place between 15:00 and 15:30 on the business day following deposit of a cheque. The cheque clearing system has an interface with the settlement accounts maintained by the banks with the HKMA. On average, about half a million cheques are cleared every day amounting to some HKD 20 billion. This is about 5% of the daily amount handled by the HKD RTGS interbank payment system Since January 1998, it has been possible for HKD cheques issued by banks in Hong Kong to be presented at banks in the Shenzhen Special Economic Zone and delivered back to Hong Kong for clearing. Good funds can be made available to the payee in Shenzhen on the afternoon of the next business day after presentation of the cheque. A similar service was extended to 19 cities in Guangdong province in October However, the value of such cross-border cheques cleared is minuscule compared with the daily cheque processing volume in Hong Kong. 2.3 Direct credit transfers Most credit transfers are standing order arrangements made by the originators with their bank. The payer instructs his bank to debit his account and transfer the funds to the payee. The bank then carries out the necessary transfers on a regular specific date, to a specific receiver and for a specific amount. Payroll crediting is the most common direct credit transfer. Individual instructions are processed together with the bulk credit instructions for that day and the net obligations between banks are settled in the RTGS interbank payment system. The number of credit transfers processed by HKICL in 2001 was nearly 18 billion for a value of HKD 443 billion. 2.4 Direct debit transfers Standing direct debit instructions are commonly used by households for executing such regular payments as utility bills and charges. In debit transfers, the payee instructs his bank to collect payment from the paying party, often on a recurring basis. Direct debit payments are preauthorised by the paying customer, who gives permission to his bank to debit his account upon receipt of instructions initiated by the specified originator. Similar to direct credit transfers, individual debit instructions are processed in bulk clearing by HKICL for that day and the net obligations between banks are settled in the RTGS interbank payment system. The number of credit transfers processed by HKICL in 2001 was over 37 billion for a value of HKD 55 billion. 2.5 Payment cards Credit cards The use of credit cards has become increasingly popular in recent years. According to the HKMA s survey on major card issuers, there were over 9 million credit card accounts involving some HKD 62 billion in outstanding receivables as at the end of The credit cards used in Hong Kong are Visa, MasterCard, American Express, Diners and JCB. Credit card payment involves credit provision by the card issuers to the cardholders. In a credit card transaction, the card issuer pays for the goods and services on behalf of the consumer, after charging the retailer a merchant discount fee. If a cardholder settles his account within the payment grace period offered by the card issuers (usually at least 30 days), the provision of credit is interest-free. This buy-now-pay-later benefit is strikingly different from other means of retail payment, and explains why credit cards have become so popular in Hong Kong. CPSS - Red Book 14 2.5.2 Debit cards The use of debit cards in Hong Kong is in the form of EPS. EPS links up consumers and merchants via banks electronic systems. Payments can be made with an ATM card at any outlet that displays the EPS logo. An EPS transaction involves direct transfer of funds from the bank account of the consumer to that of the retailer at the POS using bank ATM cards. It is in principle equivalent to payment by means of a credit transfer, except that the account of the payer is debited immediately at the POS but the account of the payee will only be credited by a batch run at day-end or early next day. (See also Section ) Other cards - stored value cards Stored value cards are at present still the least significant mode of retail payment in Hong Kong but have been growing very fast in the last few years. Unlike credit cards and debit cards, the operation of stored value cards by definition involves prepayment of funds by the cardholders to the card issuers. The aggregate of the stored values constitutes the float, giving rise to the question of float management, which is of prudential concern to the HKMA and the public alike. This is one of the considerations that led to the authorisation of OCL as a DTC (see below) to bring it under the regulatory regime of the HKMA. Octopus Card is a contactless stored value card issued by OCL (see Section ). The card scheme was launched in the third quarter of 1997, when it was exempted from the definition of multipurpose card under the Banking Ordinance because of its restricted range of services and because the risk of its use to the payment system and cardholders was considered slight. In April 2000, OCL, formerly known as Creative Star Limited, was authorised as a special purpose DTC under the Banking Ordinance. The authorisation of the company allows Octopus Card to be put to a wider range of uses, including non-transport-related, thus enhancing the convenience for cardholders. Its application now includes car parks, fast food outlets, bakeries, convenience stores, supermarkets, personal care stores, vending machines, photo booths, pay phones, photocopiers, cinemas and schools. Any extension of the multipurpose use of Octopus to non-transport service providers is subject to the conditions set down by the HKMA when it authorised the company as a DTC. By the end of June 2002, the number of Octopus Cards in circulation had reached 8.6 million, with daily transactions at over 7.2 million. There were over 130 service providers. 2.6 Electronic non-pos debit instructions Three electronic non-pos debit instruction services are available in Hong Kong, namely PPS, JET Payment and ETC bill payment. Fewer than 200 retailers in Hong Kong participate in the service networks of those three systems, and the usage so far is mainly for payment of utility bills and charges. EPSCO and JETCO are the only two network providers in the market. EPSCO operates PPS, which offers payment services over the phone and on the internet, and ETC bill payment, which is only available in the ETC ATMs (by using ETC ATM cards). On the other hand, JET Payment, a payment scheme operated by JETCO, is available in the JETCO ATMs (by using JETCO member banks ATM cards) and on the internet as well. Consumers prior registration is required for using PPS while it is unnecessary for the other two payment schemes (JET Payment and ETC bill payment). 3. Interbank settlement systems 3.1 The real-time gross settlement (RTGS) system for HKD The HKD RTGS system, which is known as the HKD Clearing House Automated Transfer System (CHATS), was launched on 9 December CPSS - Red Book 15 The design of the RTGS system is simple and robust. It uses a Y-shaped topology in which all participating banks have direct access to the system under a single-tier structure. All settlement account holders open and maintain HKD accounts with the HKMA and all interbank payments settled across the books of the HKMA are final and irrevocable. Payment instructions are settled immediately if there is a sufficient balance in the settlement account. Banks without sufficient credit balances in their settlement accounts have their payment instructions queued in the system. Alternatively, the banks can make use of the seamless interface between the Settlement Account Processor (SAP) and the book-entry debt securities clearing system (which is known as the CMU Processor or CMUP), to sell and repurchase their EFBNs during the day in the form of intraday repo transactions to obtain interest-free intraday liquidity from the HKMA. Diagram 1 Design of Hong Kong s RTGS system Bank A Bank B Interbank Fund Transfer Processor (IFTP) Settlement Account Processor (SAP) Central Moneymarkets Unit Processor (CMUP) Exchange Fund General Ledger Ownership The RTGS system for HKD is owned by the HKMA Participation All LBs in Hong Kong are required to maintain a settlement account with the HKMA. As stipulated in Section 3A(1) of the Exchange Fund Ordinance, the Financial Secretary may by notice require an AI in Hong Kong to open a settlement account with the HKMA. The account is required to be maintained and operated on the terms and conditions considered appropriate by the Financial Secretary. The Financial Secretary has delegated this power to the HKMA. The Chief Executive of the HKMA has served a notice to all LBs requesting that they open a settlement account to be maintained and operated on the terms set out in the conditions and the operating procedures attached to the notice and the relevant provisions in the clearing house rules. In May 2000, the HKMA announced that RLBs in Hong Kong were also allowed to access the HKD CHATS, provided that they have demonstrated a business need to do so. As at the end of December 2001, there were 136 settlement accounts maintained with the HKMA. CPSS - Red Book 16 3.1.3 Types of transactions RTGS transactions The name of the RTGS system for interbank transactions in Hong Kong is CHATS. HKD CHATS transactions are settled real-time on a gross basis and are across the books of the HKMA. The payments are final and irrevocable upon funds transfer across the books of the HKMA Clearing and settlement of paper cheques (CLG) CLG refers to paper cheques and other negotiable instruments drawn on member banks which are cleared through HKICL on a bulk clearing and multilateral netting basis. Paper cheques are settled on the next business day on a batch run basis. They are settled after the returned items have been identified and adjusted in order to eliminate the settlement risk related to the returned items. Cheques presented to HKICL on Day D are sorted and sent to the drawee banks overnight. The drawee banks check for sufficient funds in the drawees accounts and return all dishonoured cheques to HKICL on the next business day (Day D+1). Only cheques presented on Day D that are not returned are settled on day D Clearing and settlement of electronic items (ECG) The ECG is designed to handle low-value bulk-volume items, such as: EPS installed at POS and ATMs installed at particular bank groups. These items are generated by EPSCO and JETCO; funds transfers related to share transactions in the HKEx. The payment instructions are issued by CCASS; and autopay of other autocredit and autodebit items Operation of the system The computer operator for the RTGS system is HKICL. The system operates from 09:00 to 17:30 from Monday to Friday and from 09:00 to 12:00 on Saturday. During the above operating hours, banks can settle their interbank transactions. Customer-related transactions have to be handled before 17:00 from Monday to Friday and 11:30 on Saturday Settlement RTGS All RTGS transactions are settled real-time on a gross basis. When a payment has been settled across the books of the HKMA, it is regarded as final and irrevocable Bulk settlement Bulk settlement is designed to handle low-value bulk clearing items. All bulk clearing items are settled on the next business day and on a multilateral netting basis. Settlement occurs after any returned items have been identified and adjusted in order to eliminate settlement risk arising from returned items. Currently, the payment instructions related to stock market transactions, low-value bulk electronic payment items and cheques are settled on a bulk clearing basis at the following times: Clearing items Monday to Friday Saturday CCASS (ie stock market transactions) 09:30 09:30 EPSCO (EPS + autocredit items) 10:00 10:00 JETCO (Joint Electronic Teller Services) 11:30 09:00 Paper cheques + autodebit items 15:00 nil (09:00 on Monday for Friday items) 194 CPSS - Red Book 17 Delivery versus payment (DVP) With the establishment of the seamless interface between the SAP and CMUP in December 1996, the HKD RTGS system supports the real-time and end-of-day DVP facility for debt securities denominated in HKD that are lodged with the CMU. A similar seamless interface was established in May 1998 with CCASS. Market participants can make use of such linkages to arrange both a real-time and an end-of-day DVP facility for HKD-denominated shares which are listed on SEHK Payment versus payment (PVP) The HKD CHATS was linked with the USD CHATS (see Section 3.2 for information on the USD clearing system in Hong Kong) in September 2000 for settlement of USD/HKD foreign exchange transactions on a PVP basis. This PVP device (which is known as the Cross Currency Payment Matching Processor, or CCPMP) is the first known electronic foreign exchange PVP mechanism which ensures that both USD and HKD legs of the USD/HKD foreign exchange transaction are settled simultaneously to enable the elimination of Herstatt risk. With PVP settlement and the consequent elimination of Herstatt risk, the application of bilateral counterparty trading limits will no longer be relevant, and interbank liquidity may therefore improve as the traded currencies are put to immediate use in the respective clearing systems. Diagram 2 depicts the HKD/USD PVP mechanism. In this example Bank X is selling HKD to Bank Y in exchange for USD. On settlement day, Bank X sends a PVP payment transaction to Bank Y via the HKD RTGS system (i). Bank Y also initiates a mirror PVP payment transaction via the USD RTGS system (ii). The CCPMP for HKD and the CCPMP for USD will then communicate with each other and attempt to match the transaction (iii). After successful matching, the HKD RTGS system and USD RTGS system will respectively hold the HKD funding of Bank X and the USD funding of Bank Y in their own settlement accounts (iv). If both Bank X and Bank Y have sufficient funds, the two RTGS systems will transfer the funds to their respective counterparty simultaneously (v). Diagram 2 Operation flow of PVP settlement Bank X Hold fund (iv) (i) HKD CCPMP (iii) USD CCPMP (ii) Bank Y Hold fund (iv) (v) Bank Y Bank X (v) (v) (v) HKD RTGS USD RTGS message flow payment flow Risks and risk management measures The HKMA has introduced a number of risk management measures to ensure smooth processing in the HKD interbank payment systems: (a) Management of liquidity: under the RTGS environment, the availability of intraday liquidity is a crucial element in order to reduce the chance of payment gridlock in the system. In this regard, the HKD CHATS has various built-in system features to facilitate liquidity management for the banks. Banks are able to view the balance in their settlement accounts on a real-time basis. In addition, they receive advance payment receipts of the net amounts they need to pay (or receive) for each of the four bulk clearing runs that take place during the day. Banks also receive advance notice of the aggregate value of incoming payments from other banks after 17:00 (and 11:30 on Saturdays), which allows banks to assess precisely whether they have a surplus (or a deficit) of funds for meeting their payment obligations. CPSS - Red Book 18 (b) (c) (d) (e) (f) (g) (h) Repo facility: banks can arrange with the HKMA to obtain liquidity through a repo facility. Within the day, if a bank does not have a sufficient balance in its settlement account to effect an outgoing payment but has sufficient EFBNs in its intraday repo account, the system can automatically trigger an intraday repo transaction to generate the required amount of credit balance to cover the shortfall. A bank with excess liquidity in its settlement account may repay the repo at any time. In any case, the intraday repo can be repurchased before the close of business. Intraday repos that cannot be repurchased before the close of business will be rolled into overnight borrowing under the discount window, in which interest is charged by the HKMA. Apart from the above facility, banks can also arrange overnight repos with the HKMA through the discount window facility if required. Queuing mechanism: the HKD RTGS system is a credit transfer system. If a bank does not have a sufficient balance in its settlement account to effect payment, the transaction is queued in the system. Banks can make use of a resequencing function to move the selected transaction to either the top or the bottom of their queued payments. The queuing mechanism allows the banks to manage their own queues of payment instructions through cancellation, resequencing and amendments. Monitoring: to ensure smooth processing in the payment system, the HKMA closely monitors the payment condition of each bank on a real-time basis. The position of each bank, as well as each transaction detail, can be accessed by the HKMA. Throughput guidelines: in December 1996, the HKMA issued a guideline to banks on their CHATS throughput in order to encourage banks to make payments in a timely and orderly manner throughout the day. Each bank is required to release and settle its interbank payments whose aggregate value is not less than 35% and 65% (by 12:00 and 14:30 respectively) of the value of its total CHATS payments for the day. The HKMA closely monitors banks compliance with throughput targets and discusses with individual banks if they have underperformed. No overdraft: settlement account holders are not required to maintain a minimum amount or reserve in their settlement accounts with the HKMA. Nonetheless, the settlement accounts are not allowed to go into overdraft. Confidentiality: while a bank inputs the full details of its payment instructions, including customer information, into the IFTP, the instructions will be stripped so that only the settlement instruction - ie information on the amount, the paying bank and the receiving bank - will be passed onto the SAP. Liquidity adjustment window (LAW) facility: LAW is a contingent liquidity facility which allows banks to obtain intraday liquidity from the HKMA through repos of qualified eligible securities lodged with the CMU other than EFBNs. LAW is devised for the purpose of helping banks to settle time-critical bulk clearing obligations Pricing policies All expenses incurred by HKICL in providing, managing and operating the clearing house and the clearing facilities are borne by HKICL, which in turn recovers the expenses through charging the banks fees for use of the clearing facilities. The fees to be charged by HKICL require the approval of its board of directors Governance All banks are required to strictly adhere to the rules as stipulated in the HKD clearing house rules. In addition, all participants of the HKD CHATS are required to comply with the terms and conditions in the account opening form and other documents as specified by the HKMA and HKICL. 3.2 The real-time gross settlement (RTGS) system for USD The USD RTGS system in Hong Kong, which is known as USD CHATS, was launched on 21 August CPSS - Red Book 19 The purpose of the USD clearing system is to provide efficient settlement of USD transactions during Asian business hours. The USD is the single most widely used currency for the denomination of world trade in merchandise and financial products. Given Hong Kong s role as an international financial centre, and the fact that the HKD is linked to the USD, there is extensive holding of USD and a considerable trade in USD-denominated assets. These activities suggest that there is a business case for introducing improved mechanisms for settling USD payments in Hong Kong. In the course of examining options for implementing the USD clearing system in Hong Kong, we had widely consulted the banking sector and had been in dialogue with the Federal Reserve Bank of New York. They indicated a preference that the settlement institution be a commercial bank, and such a private sector solution is consistent with the recommendation by the Bank for International Settlements. Such practice is also in line with Hong Kong s tradition of adopting market-led solutions. After going through a vigorous selection process, the HKMA appointed HSBC to be the settlement institution for the USD clearing system in Hong Kong (see Section 3.2.1). In terms of system design, the USD CHATS is almost an exact replica of the HKD CHATS, except for the following characteristics: The settlement institution for the USD CHATS is a commercial bank. In this regard, each direct participant has to open and maintain a settlement account with the USD settlement institution and all transactions are settled across the books of the USD settlement institution. The USD CHATS has adopted a two-tier membership structure in which banks can join the system as either direct or indirect participants. The system also accepts overseas members as long as they are approved to join the system by the HKMA and the USD settlement institution. Unlike the HKD CHATS, the USD settlement institution provides a clean intraday overdraft facility to the direct participants in the system. Direct participants can enjoy an interest-free overdraft facility and interest-free intraday repos if they can repay HSBC s New York correspondent before the close of the New York CHIPS on that value day (ie 05:30 in summer, or 04:30 in winter, Hong Kong time, on Day D+1) Ownership The RTGS system for USD is owned by HSBC. HSBC was appointed by the HKMA as the settlement institution for a franchise period of five years starting from 1 August Participation Participation in the USD CHATS is not mandatory. Banks are free to join the system as either direct participants or indirect participants. The system also accepts overseas members as long as they are approved to join the system by the HKMA and the USD settlement institution Types of transactions RTGS transactions All USD CHATS transactions are settled real-time on a gross basis and are across the books of the USD settlement institution. The payments are final and irrevocable upon funds transfer across the books of the USD settlement institution Clearing and settlement of paper cheques (CLG) CLG refers to USD paper cheques and other negotiable instruments drawn on banks in Hong Kong which are cleared through HKICL on a bulk clearing basis. The establishment of a local USD cheque clearing system can reduce the settlement time to two days for those US paper cheques and other negotiable instruments drawn upon banks in Hong Kong and deposited locally. The detailed mechanics for the clearing process for USD cheques are similar to those for HKD cheques. CPSS - Red Book 20 Clearing and settlement of electronic items (ECG) The ECG is designed to handle low-value bulk-volume items for funds transfer related to USD-denominated share transactions in the HKEx. The payment instructions are issued by CCASS Operation of the system The operator of the USD CHATS is HKICL. The system operates from 09:00 to 17:30 from Monday to Friday and does not open on Saturdays. During the above operating hours, banks can settle their interbank transactions. Customer-related transactions have to be handled before 17: Settlement RTGS All RTGS transactions are settled real-time on a gross basis. When a payment is settled across the books of the USD settlement institution, it is regarded as final and irrevocable Bulk settlement Bulk settlement is designed to handle low-value bulk clearing items. All bulk clearing items are settled on a next day and multilateral netting basis. They are settled after any returned items have been identified and adjusted in order to eliminate the settlement risk arising from returned items. Currently, the payment instructions for stock market transactions and cheques denominated in USD are settled on a bulk clearing basis at the following times. Clearing items Monday to Friday CCASS (ie stock market transactions) 09:30 Paper cheques 15:00 (09:00 on Monday for Friday items) Delivery versus payment (DVP) In December 2000, the USD CHATS system linked up with the CMUP (ie the book-entry debt securities clearing system operated by the HKMA) to support the real-time and end-of-day DVP facility for debt securities denominated in USD that are lodged with the CMU. A similar seamless interface was established in August 2000 with CCASS. Market participants can make use of such a linkage to arrange both a real-time and an end-of-day DVP facility for USD-denominated shares which are traded on SEHK Payment versus payment (PVP) The USD CHATS was linked with the HKD CHATS in September 2000 for settlement of USD/HKD foreign exchange transactions on a PVP basis. This PVP device, which is known as the CCPMP, is the first known electronic foreign exchange PVP mechanism which ensures that both USD and HKD legs of the USD/HKD foreign exchange transaction are settled simultaneously to eliminate Herstatt risk. With PVP settlement and the consequent elimination of Herstatt risk, the application of bilateral counterparty trading limits may assume less importance, and interbank liquidity may therefore improve as the traded currencies are put to immediate use in the respective clearing systems Risks and risk management measures Various risk management measures have been instituted: (a) Management of liquidity: similar to the HKD CHATS, the USD CHATS has various built-in system features to facilitate liquidity management for the participating banks. Banks are able to view the balance in their settlement accounts on a real-time basis. In addition, they receive advance payment receipts of the net amounts they will need to pay (or receive) for 198 CPSS - Red Book Payment, clearing and settlement systems in Hong Kong SAR Payment, clearing and settlement systems in Hong Kong SAR CPSS Red Book 2012 199 Contents List of abbreviations... 203 1. Introduction... 205 1.1 The general institutional framework... 205 1.1.1 Payment). Financial infrastructure in Hong Kong. Foreword. Foreword Contents Page Foreword 28 Introduction 30 Importance of financial infrastructure 31 Strategic objective 32 Financial infrastructure in Hong Kong 34 Payment systems 35 Debt securities settlement Section 14 Money Settlement 14/1 Section 14 Money Settlement 14.1 SCOPE OF MONEY SETTLEMENT SERVICES 14.1.1 Scope of payments CCASS caters for settlement of transactions either on a DVP basis or FOP basis. However, settlement of Section 9 Overview of Clearing and Settlement in CCASS 9/1 Section 9 Overview of Clearing and Settlement in CCASS 9.1 TRANSACTIONS ACCEPTED FOR SETTLEMENT IN CCASS In brief, subject to the Rules, transactions in Eligible Securities accepted for settlement Becoming a Participant of Admission Criteria and Operational Requirements for Becoming a of Hong Kong Securities Clearing Company Limited Hong Kong Securities Clearing Company Limited (A wholly-owned subsidiary of Hong Kong Exchanges PRUDENTIAL SUPERVISION IN HONG KONG PRUDENTIAL SUPERVISION IN HONG KONG PREFACE Under the Banking Ordinance ( the Ordinance ), Chapter 155 of the Laws of Hong Kong, the Monetary Authority ( the MA ) is charged with the responsibility for CHAPTER I GENERAL PROVISIONS China Securities Depository and Clearing Corporation Ltd. Implementing Rules for Registration, Depository and Clearing Services under the Shanghai-Hong Kong Stock Connect Pilot Program Declaimer: For the MAIN CHARACTERISTICS OF PILOT PROGRAMME SECURITIES PILOT PROGRAMME FOR TRADING US SECURITIES Informational Documentation For Clients of Exchange Participants The material contained herein is for general information and investors should only consider participating Financial Services HONG KONG : THE FACTS HONG KONG : THE FACTS Financial Services Hong Kong is a major international financial centre, comprising an integrated network of institutions and markets which provide a wide range of products and services THE PAYMENT SYSTEM IN ZAMBIA THE PAYMENT SYSTEM IN ZAMBIA Table of Contents OVERVIEW OF THE NATIONAL PAYMENT SYSTEM IN ZAMBIA... 225 1. INSTITUTIONAL ASPECTS... 225 1.1 General legal aspects... 225 1.2 Role of financial intermediaries WORKING GROUP ON DISCLOSURE FRAMEWORK FOR SECURITIES SETTLEMENT SYSTEMS. CMU is located in Hong Kong. The time zone is GMT + 8 hours. WORKING GROUP ON DISCLOSURE FRAMEWORK FOR SECURITIES SETTLEMENT SYSTEMS I. Basic Information A. What is the Name of the SSS? The Central Moneymarkets Unit (CMU) B. Where and in What Time Zone is the SSS CCASS System Overview 2.1 INTRODUCTION: CCASS SYSTEM OVERVIEW CCASS is a system to cater for the book-entry settlement of transactions in listed securities between CCASS Participants, which includes (i) Direct Clearing Participants, Enriching knowledge series: Learn more about stock listing, bonds and funds investment Enriching knowledge series: Learn more about stock listing, bonds and funds investment June 2012 External Relations Rundown Part 1: New stock listing methods Break Part 2: Credit ratings and ibonds Part Interdependencies of payment and settlement systems: the Hong Kong experience Interdependencies of payment and settlement systems: the Hong Kong experience by the Financial Infrastructure Department Payment and settlement systems around the world have become more interdependent Stored value facilities and retail payment systems in Hong Kong: a proposed regulatory regime Stored value facilities and retail payment systems in Hong Kong: a proposed regulatory regime by the Financial Infrastructure Department The rapid development of innovative retail payment products and SECURITIES AND FUTURES ACT (CAP. 289) Monetary Authority of Singapore SECURITIES AND FUTURES ACT (CAP. 289) NOTICE ON RISK BASED CAPITAL ADEQUACY REQUIREMENTS FOR HOLDERS OF CAPITAL MARKETS SERVICES LICENCES Monetary Authority of Singapore Hong Kong Monetary Authority 28 May 2013 IMPORTANT INFORMATION Investors should read this Information Memorandum carefully (as amended or supplemented from time to time) and ensure that they fully understand the risks associated For operators of trading and clearing systems, Hong Kong remains Hong Kong: Developments in the Regulation of Trading and Clearing Systems Despite its status as a major financial centre, Hong Kong remains an emerging market for the trading and clearing of commodities EXCHANGE Traded Funds EXCHANGE TRADED FUNDS EXCHANGE Traded Funds Guide to listing on the Cayman Islands Stock Exchange Contents Introduction... 3 What CSX has to offer... 4 The listing process... 6 Conditions for listing... Chapter 1 GENERAL INTERPRETATION Chapter 1 GENERAL CHAPTER 1 INTERPRETATION For the avoidance of doubt, the Rules Governing the Listing of Securities on The Stock Exchange of Hong Kong Limited apply only to matters related to those securities Common Types of Consumer Credit. 23 June 2014 Common Types of Consumer Credit 23 June 2014 Disclaimer for presentation This presentation is intended to provide a general overview for information and educational purposes only and is not a comprehensive Business Facilitation Advisory Committee Wholesale and Retail Taskforce. Regulatory Regime for Stored Value Facilities and Retail Payment Systems Business Facilitation Advisory Committee Wholesale and Retail Taskforce Regulatory Regime for Stored Value Facilities and Retail Payment Systems WRTF Paper 37 Purpose This paper provides members with information CHAPTER 11 NOMINEE SERVICES 11/1 CHAPTER 11 NOMINEE SERVICES 1101. Scope and extent of nominee and similar services Subject to the Rules, applicable laws and applicable regulatory approval, in respect of Eligible Securities deposited, Fees and Charges. 華 僑 永 亨 銀 行 股 份 有 限 公 司 OCBC Wing Hang Bank Limited Fees and Charges 華 僑 永 亨 銀 行 股 份 有 限 公 司 OCBC Wing Hang Bank Limited Table of Contents 1. Deposit Service 1.1 Current Account 4 1.2 Savings / Statement Savings Account 4 1.3 Deposit Cheque 4 1.4 Deposit INDUSTRY OVERVIEW THE STOCK MARKET IN HONG KONG. History of the Stock Exchange This overview contains information derived from publicly available government or official sources referred to in this prospectus. The Company believes that the sources of such information are appropriate Bank of England Settlement Accounts Bank of England Settlement Accounts November 2014 Contents Foreword 3 1. Payment systems and the role of the central bank 4 Payment systems 4 Settlement in central bank money 4 Intraday liquidity 4 Use Hong Kong Monetary Authority 29 January 2014 IMPORTANT INFORMATION Investors should read this Information Memorandum carefully (as amended or supplemented from time to time) and ensure that they fully understand the risks associated 1921 A second stock exchange, The Hong Kong Stockbrokers Association, was incorporated. HONG KONG STOCK MARKET HISTORICAL EVENTS 1891 The Association of Stockbrokers in Hong Kong, the first formal stock exchange in Hong Kong was formed. 1914 The name of the Association of Stockbrokers in Hong Kong Monetary Authority 9 Ma 2016 IMPORTANT INFORMATION Investors should read this Information Memorandum carefully (as amended or supplemented from time to time) and ensure that they fully understand the risks associated with Issues and corporate actions in the book-entry system Decision of Euroclear Finland s CEO. To: Issuers Account operators Issuer agents Issues and corporate actions in the book-entry system Decision of Euroclear Finland s CEO To: Issuers Account operators Issuer agents Reference to the Rules: 2.1.13, 3.1.11, 3.1.12,3.1.13, 3.1.14, 3.1.19 HONG KONG SECURITIES MARKET. Brief history. Regulatory framework. Hong Kong securities market. Fact Book 1999 1 Hong Kong securities market HONG KONG SECURITIES MARKET Brief history Records of securities trading in Hong Kong date back to 1866. In 1891 when the Association of Stockbrokers in Hong Kong was established, The Bermuda Securities Depository (BSD) Participants User Guide WEB VERSION The Bermuda Securities Depository (BSD) Participants User Guide WEB VERSION November 2001 Table of Contents INTRODUCTION...1 LEGAL STRUCTURE...5 PARTICIPANTS...7 SYSTEM...10 ACCOUNTS...12 TRADING...17 Bangladesh Payment and Settlement Systems Regulations, 2009 Bangladesh Payment and Settlement Systems Regulations, 2009 Payment Systems Division Department of Currency Management and Payment Systems Bangladesh Bank Table of Contents 1. Short Title and Commencement... General Services. Account services General Services Account services Category Autopay (HKD, RMB, and USD) 1. Debit/credit a BEA account - Manually HK$1.5 (minimum HK$100 per instruction) - By MAS Nil 2. Debit/credit an account held at another 3. Securities Services Local Securities (The following service fees are charged according to the transaction currency) 3. Securities Services Local Securities (The following fees are charged according to the transaction currency) 2015.08 Services Item Rate Fee Min Charge Max Charge Remarks Scrip hling settlement related INDUSTRY OVERVIEW THE HONG KONG SECURITIES MARKET. History Certain information provided in this section is derived from various public official or government sources. The Company and the Sponsor have exercised reasonable care in reproducing such information from The Role of Exchange Settlement Accounts Reserve Bank of Australia Bulletin March 1999 The Role of Exchange Settlement Accounts Introduction Exchange Settlement (ES) Accounts provided by the Reserve Bank play an important role in the Australian THE RULES OF THE CENTRAL SECURITIES CLEARING SYSTEM THE RULES OF THE CENTRAL SECURITIES CLEARING SYSTEM THE RULES DEFINITIONS Article 1 Unless the context requires otherwise, for all purposes of these Rules: "CSCS" means Central Securities Clearing System The CMU Fund Order Routing and Settlement Service The CMU Fund Routing and Settlement Service by the Financial Infrastructure Department Investment funds have become an increasingly important international financial intermediation channel in addition Securities Services Charges (Applicable to Personal Customers) Securities Services Charges (Applicable to Personal Customers) CHARGES FOR TRADE-RELATED, SCRIP HANDLING & SETTLEMENT-RELATED, NOMINEE SERVICES & CORPORATE ACTIONS AND OTHER SERVICES (A) Local Securities General Banking and Sunflower Service Charges Member CMB Group General Banking and Sunflower Service Charges Effective on 17 August 2015 Enquiry Hotline: 230 95555 - Table of Content - Part 1 - Deposit Service Charges 2 Part 2 Top 50 Banking Interview Questions Top 50 Banking Interview Questions 1) What is bank? What are the types of banks? A bank is a financial institution licensed as a receiver of cash deposits. There are two types of banks, commercial banks REGULATORY OVERVIEW REGULATIONS AND SUPERVISION OF THE SECURITIES BUSINESS IN HONG KONG REGULATORY ENVIRONMENT IN HONG KONG This section sets out summaries of certain aspects of the regulatory environment in Hong Kong, which are relevant to our Group s business and operation. (A) REGULATIONS 1 2. TRADING PASSWORD Table of Content Page 1. LOGIN PASSWORD 1 2. TRADING PASSWORD 1 3. LOGIN 1 4. PLACE ORDER 3 5. TRANSACTION STATUS 4 6. CHANGE ORDER (REDUCE QUANTITY) 5 7. CANCEL ORDER 6 8. FUND DEPOSIT 7 9. FUND WITHDRAWL/TRANSFER Payment, clearing and settlement systems in Korea Payment, clearing and settlement systems in Korea CPSS Red Book 2011 205 Contents List of abbreviations...209 Introduction...211 1. Institutional aspects...211 1.1 The general institutional framework...211 DEFINITIONS. In this document, the following expressions have the following meanings, unless the context requires otherwise: In this document, the following expressions have the following meanings, unless the context requires otherwise: AMS the Stock Exchange s Automatic Order Matching and Execution System AMS/2 an upgraded NBB-SSS Securities settlement system of the National Bank of Belgium. Regulations October 2012 English translation - for information purposes only NBB-SSS Securities settlement system of the National Bank of Belgium Regulations October 2012 English translation - for information purposes only National Bank of Belgium, Brussels All rights reserved. Real Time Gross Settlement Systems: An Overview Real Time Gross Settlement Systems: An Overview RTGS Project Management Office State Bank of Pakistan The development of RTGS systems started as a response to the growing awareness of the need for sound PAYMENT AND SETTLEMENT SYSTEMS IN SPAIN Francisco Linares Payment Settlement Systems Unit Manager Jesús Pérez Bonilla Securities Settlement Systems Unit Manager Payments Week 2005 Madrid 20 June 2005 PAYMENT SYSTEMS DEPARTMENT AGENDA PAYMENT A member of BOCHK Group 2/5 General Banking Services Charges Return cheque Insufficient funds -- HKD/USD cheque -- RMB cheque Customer's technical errors -- HKD/USD cheque -- RMB cheque CURRENT ACCOUNT RMB cheque amount exceeding LEGISLATIVE COUNCIL BRIEF. Clearing and Settlement Systems (Amendment) Bill 2015 File Ref: B&M/2/1/20C LEGISLATIVE COUNCIL BRIEF Clearing and Settlement Systems Ordinance (Chapter 584) Clearing and Settlement Systems (Amendment) Bill 2015 INTRODUCTION At the meeting of the Executive Deposit Protection Scheme Bill Deposit Protection Scheme Bill by the Banking Development Department The Hong Kong Monetary Authority has developed a set of proposals on how the proposed deposit protection scheme in Hong Kong should CHAPTER 12 MONEY SETTLEMENT SERVICES 12/1 CHAPTER 12 MONEY SETTLEMENT SERVICES 1201. Participants to have Designated Bank Accounts Each Participant shall maintain a General Purpose Designated Bank Account in its own name and denominated in First Chapter - What is Shanghai-Hong Kong Stock Connect? Shanghai-Hong Kong Stock Connect is a securities trading and clearing links programme to First Chapter - What is Shanghai-Hong Kong Stock Connect? Shanghai-Hong Kong Stock Connect is a securities trading and clearing links programme to be developed by Hong Kong Exchanges and Clearing Limited In these Rules, the following expressions have the meaning set out below, unless the context requires otherwise: 2. DEFINITIONS In these Rules, the following expressions have the meaning set out below, unless the context requires otherwise: Expression Act announcement applicant approved settlement facility the Corporations - 1 - Hard copies of the return (Chinese and English) are available at the Licensing counter for collection. - 1 - s about 1. Where can the FRR be found? It is available on the SFC s website at under the Update for Intermediaries section. Alternatively, it is included in the 20.4 Gazette and HONG KONG DOLLAR DEBT MARKET DEVELOPMENTS IN 2001 HONG KONG DOLLAR DEBT MARKET DEVELOPMENTS IN 2001 New issuance of Hong Kong dollar debt declined in 2001 amid economic downturn and a poor external environment, despite easing interest rates. All classes March Guidance on Client Asset Regulations For Investment Firms March 2015 2012 Guidance on Client Asset Regulations For Investment Firms 2 Revision History Date Version Description 30 March 2015 1.0 Final Contents 1. Purpose and Effect of the Guidance 1 2. Structure Payment systems in the United States Payment systems in the United States Table of contents List of abbreviations... 431 Introduction... 433 1. General institutional framework... 433 1.1 General legal framework... 433 1.1.1 Cheques... 434 Securities Services Charges (Applicable to Company Account Customers) Securities Services Charges (Applicable to Company Account Customers) CHARGES FOR TRADE-RELATED, SCRIP HANDLING & SETTLEMENT-RELATED, NOMINEE SERVICES & CORPORATE ACTIONS AND OTHER SERVICES (A) Local Securities Payment, clearing and settlement systems in Australia Payment, clearing and settlement systems in Australia CPSS Red Book 2011 1 Contents List of abbreviations...5 Introduction...7 1. Institutional aspects...9 1.1 The general institutional framework...9 Asia Market Intelligence Thailand Asia Market Intelligence Thailand Presence HSBC s history in Thailand dates back to 1865, when the Bank appointed an agent in Bangkok. In 1888, a representative office opened in Bangkok, becoming the first Chapter 37 DEBT SECURITIES DEBT ISSUES TO PROFESSIONAL INVESTORS ONLY. Introduction Chapter 37 DEBT SECURITIES DEBT ISSUES TO PROFESSIONAL INVESTORS ONLY Introduction 37.01 This Chapter deals with debt issues to Professional Investors only. It sets out the qualifications for listing, CHAPTER 1 INTERPRETATION CHAPTER 1 INTERPRETATION 101. In these Rules, unless the context otherwise requires:- Access Card means an electronic security access card issued by the Exchange to an Exchange Participant to gain admission Application of Banking License in Hong Kong Application of Banking License in Hong Kong Highlights of the Regulatory Requirements July 2012 1. Introduction Under the Hong Kong regulatory regime, institutions which intend to carry Public Finance and Expenditure Management Law Public Finance and Expenditure Management Law Chapter one General provisions Article one. The basis This law has been enacted in consideration of Article 75, paragraph 4 of the Constitution of Afghanistan THE PAYMENT AND SETTLEMENT SYSTEMS 2007 THE PAYMENT AND SETTLEMENT SYSTEMS 2007 1. INTRODUCTION The payment and settlement system was upgraded in 2007 with the introduction of a Real Time Gross Settlement (RTGS) system known in Israel as Zahav,
http://docplayer.net/1897125-Payment-systems-in-hong-kong.html
CC-MAIN-2017-51
en
refinedweb
Overview of Design Patterns for Beginners Introduction. What are Design Patterns? A real world software system is supposed to solve a set of business problems. Most of the modern languages and tools use object oriented design to accomplish this task of solving business problems. Designing a software system is challenging because it not only needs to meet the identified requirements but also needs to be ready for future extensions and modifications. A software design problem may have more than one solution. However, the solution you pick should be the best in a given context. That is where design patterns come into the picture. There may be multiple routes that take you to the top but not all will do so in the same amount of time and effort. Some routes might be tough to walk but they might take you to the top in less time. Some other routes might be easy to walk but they may take too much time. Which one is the best route? There cannot be a single answer to this question. Depending on your physical strength, available time and external conditions, such as weather, you need to compromise on a route as the "best". In the preceding example, you were unaware of any of the routes leading you to the top. What if somebody gives you a detailed map along with all possible routes, along with pros and cons of each? Obviously, you will be in a much better position to begin your journey. Let's try to draw a parallel in terms of a software system designing. Suppose you have been given a software design problem to solve. As a developer you may attempt to solve it based on your own skills and experience. However, if you start the design from scratch you may end up spending too much time and effort. Additionally, you may not know whether your design is the best possible design in a given context. That is where design patterns come into picture. Simply put a design pattern is a proven solution to solve a design problem. A design pattern provides a template or blueprint for solving a software design problem at hand. Why are design patterns better than a "from scratch" solution? That's because thousands and thousands of developers all over the world have used them successfully to solve a design problem. Thus they are proven solutions to recurring design problems. Additionally, since design patterns are well documented you have all the information needed to pick the right one. A design pattern is usually expressed by the following pieces of information: - Name : A design pattern usually has a name that expresses its purpose in nutshell. This name is used in the documentation or communication within the development team. - Intent : Intent of a pattern states what that pattern does. Intent is usually a short statement(s) that captures the essence of the pattern being discussed. - Problem : This refers to a software design problem under consideration. A design problem depicts what is to be addressed under a given system environment. - Solution : A solution to the problem mentioned above. This includes the classes, interfaces, behaviors and their relationships involved while solving the problem. - Consequences : Tradeoffs of using a design pattern. As mentioned earlier there can be more than one solution to a given problem. Knowing consequences of each will help you evaluate each solution and pick the right one based on your needs. Note that the above list is just the minimum information needed to describe a pattern. In practice, a few more aspects may also be needed to be expressed. As an attempt to catalog popular design patterns Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides documented around 23 design patterns in their book titled "Design Patterns: Elements of Reusable Object-Oriented Software". These design patterns are the most popular and commonly used patterns today. These patterns are often termed as Gang of Four (GoF) patterns since they are documented by these four authors. What are the Benefits of Using Design Patterns? The discussion about design patterns should have given you some idea about their benefits. Let's quickly identify and summarize these benefits here: - Design patterns help you to solve common design problems through a proven approach. - Design patterns are well documented so that there is no ambiguity in the understanding. - Design pattern may help you reduce the overall development time because rather than finding a solution you are applying a well known solution. - Design patterns promote code reusability and loose coupling within the system. This helps you deal with future extensions and modifications with more ease than otherwise. - Design patterns promote clear communication between technical team members due to their well documented nature. Once the team understands what a particular design pattern means, its meaning remains unambiguous to all the team members. - Design patterns may reduce errors in the system since they are proven solutions to common problems. Classification of Design Patterns Now that you understand what design patterns are and what their benefits are, let's see how they are classified. The GoF design patterns are classified into three categories namely creational, structural and behavioral. - Creational Patterns : Creational design patterns separate the object creation logic from the rest of the system. Instead of you creating objects, creational patterns create them for you. The creational patterns include Abstract Factory, Builder, Factory Method, Prototype and Singleton. - Structural Patterns : Sometimes you need to build larger structures by using an existing set of classes. That's where Structural Patterns come into picture. Structural class patterns use inheritance to build a new structure. Structural object patterns use composition / aggregation to obtain a new functionality. Adapter, Bridge, Composite, Decorator, Facade, Flyweight and Proxy are Structural Patterns. - Behavioral Patterns : Behavioral patterns govern how objects communicate with each other. Chain of responsibility, Command, Interpreter, Iterator, Mediator, Memento, Observer, State, Strategy, Template method and Visitor are Behavioral Patterns. Example of a Design Pattern - Singleton Now that you know the purpose and benefits of design patterns, let's conclude this article by looking into a simple example of design patterns. In this example you will use Singleton design pattern. As mentioned earlier, Singleton is a creational design pattern. The intent of the Singleton design pattern is as follows: To ensure that only one instance of a class is created and to provide a global point of access to the object. The above intent tells us that Singleton is useful when you wish to create one and only one instance of a class in the system. Consider a case where you are building a class that represents some costly system resource. Since the resource is expensive you wish to ensure that the users of the class create only one instance of the class to avoid any wastage of the resources. Singleton pattern can be used to enforce such a functionality. Consider the following class written in C# that uses Singleton pattern. public class CostlyResource { private CostlyResource() { //nothing here } public DateTime LastRequestedOn { get; private set; } private static CostlyResource instance = null; public static CostlyResource GetInstance() { if(instance==null) { instance = new CostlyResource(); } instance.LastRequestedOn = DateTime.Now; return instance; } } The above code creates a class named CostlyResource. The constructor of the class is made private so that you cannot instantiate it from outside of the class. The CostlyResource class has just one public property, LastRequestedOn, that stores the DateTime at which the last request for the object instance was made. Then the code creates a static instance of CostlyResource class and sets it to null. This variable is static because it needs to be shared with multiple calls. It then creates a static method named GetInstance(). The GetInstance() method is responsible for creating an instance of CostlyResource and stores it in the instance variable. It also sets the LastRequestedOn property to the current DateTime value. Finally, instance is returned to the caller. The CostlyResource class can be used in some other part of the system like this: static void Main(string[] args) { CostlyResource obj1 = CostlyResource.GetInstance(); Console.WriteLine(obj1.LastRequestedOn); System.Threading.Thread.Sleep(2000); CostlyResource obj2 = CostlyResource.GetInstance(); Console.WriteLine(obj2.LastRequestedOn); if(Object.ReferenceEquals(obj1, obj2)) { Console.WriteLine("obj1 and obj2 are pointing to the same instance!"); } Console.ReadLine(); } The above code creates a variable of type CostlyInstance (obj1) and points it to the instance returned by GetInstance() static method. It then outputs the LastRequestedOn property on the Console. The code then halts the execution for a couple of seconds by calling the Sleep() method of Thread class. It then creates another variable of type CostlyInstance (obj2) and calls GetInstance() again to obtain the instance. To prove that object pointed to by obj1 and obj2 is one and the same the code uses ReferenceEquals() method of the Object class. The ReferenceEquals() method accepts the two instances to compare and returns true if they are pointing to the same object. If you run the above code you will get an output as shown below: Pointing to the Same Instance Summary Design patterns offer proven solutions to recurring design problems. GoF design patterns are widely used by developers and are classified into three categories - creational, structural and behavioral. This article presented a quick overview of design patterns. It also discussed Singleton design pattern along with its C# implementation.
https://www.developer.com/design/overview-of-design-patterns-for-beginners.html
CC-MAIN-2017-51
en
refinedweb
When you first create an instance of an object there is often the need to initialize it before using it. The obvious thing to do is to write an “init” member function and remember to call this before going on. The trouble with this idea is that you could forget to use init before getting on with the rest of the program. To help avoid this Java and most other object oriented languages use special functions called constructors that can be used to automatically initialize a class. A constructor has to have the same name as the class and it is called when you create a new object of the class. For example, if we add the constructor public Point(int a, int b) { X=a; Y=b;} public Point(int a, int b) { X=a; Y=b; } to the point class you can create and initialize a new object using Point Current=new Point(10,20); Notice the way that the values to be used in the initialization are specified. In fact you can think about the use of Point(10,20) as being a call to the constructor. As well as an initialization routine, you can also specify a clean up routine called finalize() that is called whenever an object is about to be deleted. This isn’t used as much as the constructor but it is still handy. If you don’t bother to write a constructor the Java system provides a default constructor that does nothing and takes no parameters. If you define a constructor of any sort the system doesn’t provide a default. So if you define a constructor that takes parameters, e.g. point(x,y), don’t forget to also define a parameter-less constructor, i.e. point(), if you also want to create objects without any initialization or with a default initialization. It is worth knowing at this early stage that the whole subject of initializing objects is a complicated one that has lots of different clever solutions. For the moment knowing about constructors is probably enough. The previous section introduces a new idea. You can define any number of constructors as long as each one has a different set of parameters. This means you have functions i.e. the constructors all with the same name, the name of the class, differing only by the parameters used to call them. This is an example of overloading. Although it doesn’t actually have anything directly to do with object oriented programming, and so rightly should be left until some other time, overloading is too useful to postpone. The general idea is that you can have any number of functions with the same name as long as they can be distinguished by the parameters you use to call the function. In more formal terms the “signature” of a function is the pattern of data types used to call it. For example, void Swap(int a, int b) has a signature int,int and void Swap(float a,float b) has the signature float,float, and void Swap(int a, float b) has the signature int,float and so on… When you call a function for which there is more than one definition the signature is used to sort out which one you mean. In this sense you can consider that the name of a function is not just the name that you call it by but its name and types of parameters. So Swap is really Swapintint, Swapfloatfloat and Swapintfloat and so non. But notice that this is just a handy way to think about things you can't actually use these names. For example, given the multiple “overloaded” definitions of Swap given earlier, the call Swap(x,y) where x and y are ints would call the first Swap, i.e. the one with signature int,int. As long as the function used can be matched up with definition with the same signature everything will work. What function overloading allows you to do is create what looks like a single function that works appropriately with a wide range of different types of data. You can even define functions with the same name that have different numbers of parameters. For example, Swap(int a, int b, int c) has a signature int,int,int and thus is a valid overloaded form of Swap. As already mentioned of the most common uses of overloading is to provide a number of different class constructors. For example, you could create a point(int a) constructor which only initializes the x data member or the point(float r, float t) constructor which sets x and y from a radius and an angle. Notice a function’s return type is not part of its signature. A class defines a collection of methods and properties that represents a some sort of logically coherent entity. For example a geometric point. However you soon begin to notice that some types of entity form a family tree of things. The key thing that should alerts you to this situation is when you find yourself saying an A IS A B with some extra features. In object oriented terms what this is saying is that the class that represents A is essentially B with some additional methods and properties. Overall an A behaves like a B but it has some extras - it is a bit more than a B. To accommodate this situation we allow a new class to be derived from an existing class by building on its definition. For example we have a class that represents a 2D Point and perhaps we want a 2D Point that also specifies a color. Now you could say that PointColor is just a Point with a color property. In this sense we could say that PointColor extends the idea of a Point and in Java this would be written - class PointColor extends Point{} class PointColor extends Point{ What this means exactly is that the definition of the PointColor class starts off from all of the definitions in the Point class. It is as if you have just cut and pasted all of the text in the Point definition into the PointColor definition. In more technical terms we say that PointColor inherits from Point and this is a specific example of class inheritance. If you did nothing else at all an instance of the line class would be exactly the same as the point class - it would have setX, setY, getX, getY and so on as members. In short all of the members of Point are members of PointColor. What is the good of inheritance? Well the key lies in the use of the term “extends”. You don’t have to stop with what you have inherited, you can add data and function members to the new class and these are added to what has been inherited. That is the class that you are extending acts as a basis for your new class - the class you are extending is often called the base class for this reason but the terminology varies. The key idea to get is that when a class extends another it is everything that class is and more. For example, in the case of the PointColor class we need one more variables to store the color and some get/set methods. public class PointColor extends Point { private int Color; void setColor(int color) { Color = color; } int getColor() { return Color; }} Now the new PointColor class as all of the methods of Point and a new Color property complete with get/set methods. You can use all of the methods of PointColor with no distinction between what is inherited and what newly defined. For example, PointColor PC=new PointColor();PC.setX(10);PC.setColor(3); Notice that you can't directly access the property variables such as X,Y and Color directly only via the set/get methods. This is also true for the code within PointColor. That is PointColor cannot directly access X and Y, the variables that belong to Point. They are just as much private from PointColor as they are to the rest of the world. PointColor can't get at the internals of Point. The relationship between PointColor and its base class Point is simple but it can become more complicated. Before classes and inheritance programmers did reuse code by copy and pasting the text of the code. This worked but any changes made to the original code were not passed on to the copied code. With inheritance if you change the base class every class that extends it are also changed in the same way. The inheritance is dynamic and this is both a good and a bad thing. The problem is that changing a base class might break the classes that extend it. It is for this reason that how a base class does its job should be hidden from the classes that extend it so that they cannot make use of something that might well change. This is the reason that extending classes cannot access the private variables and functions of the base class.
http://www.i-programmer.info/ebooks/modern-java/4570-java-inheritance.html?start=1
CC-MAIN-2017-51
en
refinedweb
A WPanel provides a container with a title bar. More... #include <Wt/WPanel> A WPanel provides a container with a title bar. The panel provides a container with an optional title bar, and an optional collapse icon. Usage example: Returns the central widget. Collapses the panel. When isCollapsible() is true, the panel is collapsed to minimize screen real-estate. Signal emitted when the panel is collapsed. Signal emitted when the panel is collapsed. The signal is only emitted when the panel is collapsed by the user using the collapse icon in the tible bar, not when calling setCollapsed(bool). Collapses the panel. When isCollapsible() is true, the panel is expanded to its original state. Signal emitted when the panel is expanded. Signal emitted when the panel is expanded. The signal is only emitted when the panel is expanded by the user using the expand icon in the title bar, not when calling setCollapsed(bool). Returns if the panel is collapsed. Returns if the panel can be collapsed by the user. Sets an animation. The animation is used when collapsing or expanding the panel. Sets the central widget. Sets the widget that is the contents of the panel. When a widget was previously set, the old widget is deleted first. The default value is 0 (no widget set). Sets the panel expanded or collapsed. When on is true, equivalent to collapse(), otherwise to expand(). The default value is false. Makes the panel collapsible. When on is true, a collapse/expand icon is added to the title bar. This also calls setTitleBar(true) to enable the title bar. The default value is false. Sets a title. The panel title is set in the title bar. This method also makes the title bar visible by calling setTitleBar(true). The default value is "" (no title). Shows or hides the title bar for the panel. The title bar appears at the top of the panel. The default value is false: the title bar is not shown unless a title is set or the panel is made collapsible. Returns the title. Returns if a title bar is set. Returns the title bar widget. The title bar widget contains the collapse/expand icon (if the panel isCollapsible()), and the title text (if a title was set using setTitle()). You can access the title bar widget to customize the contents of the title. The method returns 0 if titleBar() is false. You need to call setTitleBar() first.
http://www.webtoolkit.eu/wt/wt3/doc/reference/html/classWt_1_1WPanel.html
CC-MAIN-2017-51
en
refinedweb
Object Libraries and Namespaces for Visual Basic 6.0 Users Namespaces may at first seem to be a new concept in Visual Basic 2008. However, conceptually, they are similar to object libraries in Visual Basic 6.0. In Visual Basic 6.0, various libraries contain objects that are used to build an application. For example, the Visual Basic library contains the form objects and intrinsic control objects that were the foundation of a Windows-based application. You could see the objects contained in a given library by viewing it in the Object Browser. In Visual Basic 2008, objects are contained in assemblies that are part of the .NET Framework class library. Each assembly represents a namespace. For example, form and control objects are contained in the System.Windows.Forms namespace. As in Visual Basic 6.0, namespaces can be inspected in the Object Browser.
https://msdn.microsoft.com/en-us/library/t1txwd74(v=vs.90).aspx
CC-MAIN-2015-32
en
refinedweb
System freezes when using SDK 3.0 and an extended (open) menu of Commands I am developing a midlet for a touch screen phone and using more Commands than there are softkeys, and so they form a menu. When I use the Java(TM) Platform Micro Edition SDK 3.0 for the emulator (as is recommended), and in particular the Default FX Phone 1 (which is a touch screen phone emulator) the system keeps crashing. If I leave the menu extended, then after a small period of time, say a few minutes,( it varies), then the whole thing freezes and I have to close down and then restart my computer and reload netbeans. I haven't tried every phone in the list but at least four of them give the same results. When I use Sun Java(TM) WirelessToolkit 2.5.2_01 for CLDC for the emulator and in particular the DefaultColorPhone, the problem doesn't occur. This is not an emulator for a touch phone and so I cannot really test without going on to my target device. I have included a bare bones version of an app that will illustrate what happens. import com.sun.lwuit.Display; import com.sun.lwuit.Command; import com.sun.lwuit.Form; import com.sun.lwuit.events.ActionEvent; import com.sun.lwuit.events.ActionListener; import javax.microedition.midlet.MIDlet; public class Cash_Balance extends MIDlet implements ActionListener { public void startApp() { Display.init(this); Form TitleForm = new Form(""); TitleForm.addCommand(new Command("EXIT",0)); TitleForm.addCommand(new Command("ONE",1)); TitleForm.addCommand(new Command("TWO",2)); TitleForm.setCommandListener(this); TitleForm.show(); } public void actionPerformed(ActionEvent ae) { Command cmd = ae.getCommand(); switch (cmd.getId()) { case 0: notifyDestroyed(); // case 1: } } public void pauseApp() { } public void destroyApp(boolean unconditional) { } } Is this a problem with SDK 3.0 or is there something else that I can do to get around it.
https://www.java.net/node/703584
CC-MAIN-2015-32
en
refinedweb
Skip navigation links java.lang.Object com.tangosol.util.Base com.tangosol.util.AbstractKeyBasedMap com.tangosol.util.AbstractKeySetBasedMap com.tangosol.net.cache.SerializationMap com.tangosol.net.cache.AbstractSerializationCache com.tangosol.net.cache.SerializationPagedCache public class SerializationPagedCache A version of SerializationMap that implements an LRU policy using time-based paging of the cache. This implementation uses a BinaryStoreManager to create a current BinaryStore which is used to store all objects being placed into the cache. Once the specified "current" time period has elapsed, a new current BinaryStore is created, and the previously current page is closed, which means that it will no longer be used to store objects that are being placed into the cache. This continues until the total number of pages (the one current plus all of the previously current pages) exceeds the maximum number of pages defined for the cache. When that happens, the oldest page is evicted, triggering the related events, and the BinaryStore for that page is destroyed. Note that cache items can be accessed out of and removed from closed pages, but cache items are never written to closed pages. To avoid a massive number of simultaneous events, the eviction of a closed page can be performed asynchronously on a daemon thread. public SerializationPagedCache(BinaryStoreManager storemgr, int cPages, int cPageSecs) storemgr- the BinaryStoreManager that provides BinaryStore objects that the serialized objects are written to cPages- the maximum number of pages to have active at a time cPageSecs- the length of time, in seconds, that a 'page' is current public SerializationPagedCache(BinaryStoreManager storemgr, int cPages, int cPageSecs, java.lang.ClassLoader loader) storemgr- the BinaryStoreManager that provides BinaryStore objects that the serialized objects are written to cPages- the maximum number of pages to have active at a time cPageSecs- the length of time, in seconds, that a 'page' is current loader- the ClassLoader to use for deserialization public SerializationPagedCache(BinaryStoreManager storemgr, int cPages, int cPageSecs, boolean fBinaryMap, boolean fPassive) storemgr- the BinaryStoreManager that provides BinaryStore objects that the serialized objects are written to cPages- the maximum number of pages to have active at a time cPageSecs- the length of time, in seconds, that a 'page' is current fBinaryMap- true indicates that this map will only manage binary keys and values fPassive- true indicates that this map is a passive cache, which means that it is just a backup of the cache and does not actively expire data public XmlElement getConfig() public void setConfig(XmlElement xml) xml- the XML configuration for the object java.lang.IllegalStateException- if the object is not in a state that allows the configuration to be set; for example, if the object has already been configured and cannot be reconfigured public void clear() clearin interface java.util.Map clearin class AbstractSerializationCache public java.lang.Object get(java.lang.Object oKey) getin interface java.util.Map getin class SerializationMap oKey- the key object Map.containsKey(Object) public java.lang.Object put(java.lang.Object oKey, java.lang.Object oValue) putin interface java.util.Map putin class AbstractSerializationCache oKey- key with which the specified value is to be associated oValue- value to be associated with the specified key public void putAll(java.util.Map map) putAllin interface java.util.Map putAllin class AbstractSerializationCache map- map of entries to be stored in this map public java.lang.Object remove(java.lang.Object oKey) removein interface java.util.Map removein class AbstractSerializationCache oKey- key whose mapping is to be removed from the map protected boolean removeBlind(java.lang.Object oKey) removeBlindin class AbstractSerializationCache oKey- key whose mapping is to be removed from the map protected void eraseStore() eraseStorein class SerializationMap public void evict() AsynchronousPageDeactivationproperty. evictin class AbstractSerializationCache public java.lang.String toString() toStringin class SerializationMap protected java.lang.String getDescription() getDescriptionin class AbstractSerializationCache public boolean isVirtualErase() protected void setVirtualErase(boolean fVirtualErase) fVirtualErase- true if the erase should not go to the underlying store, but simply remove the key from the cached list of keys managed by that store; false to pass all erase requests down to the underlying store public int getLockDelaySeconds() public void setLockDelaySeconds(int cSecondsLockDelay) This will not take effect until the next lock is requested and unless debug mode is turned on. cSecondsLockDelay- the number of seconds to wait for a lock protected ConcurrentMap getLockMap() protected BinaryStoreManager getBinaryStoreManager() This object is intended for use only by the createBinaryStore, and destroyBinaryStore methods. protected void setBinaryStore(BinaryStore store) setBinaryStorein class SerializationMap store- the BinaryStore to use protected SerializationPagedCache.PagedBinaryStore getPagedBinaryStore() Note: This implementation assumes that the BinaryStore is only being modified by this Map instance. If you modify the BinaryStore contents, the behavior of this Map is undefined. protected java.util.List getBinaryStoreList() protected java.util.Iterator iterateBinaryStores() public int getMaximumPages() public long getPageDuration() protected void setPageDuration(long cPageMillis) cPageMillis- the time in milliseconds that a page remains current protected long getCurrentPageTime() protected long getPageAdvanceTime() public boolean isPassivePagedBackup() protected void setPassivePagedBackup(boolean fPassiveBackup) fPassiveBackup- true if this cache is just a backup for a paged cache public boolean isAsynchronousPageDeactivation() public void setAsynchronousPageDeactivation(boolean fAsync) This will not take effect until the next page is deactivated. fAsync- pass true to specify that a daemon should be used for page deactivation, or false to block all other threads while a page is fully deactivated public static boolean isDebug() public static void setDebug(boolean fDebug) fDebug- true to set the cache into debug mode, false to set it into normal runtime mode protected void lockInternal(java.lang.Object oKey) oKey- the key to lock protected boolean lockInternalNoWait(java.lang.Object oKey) oKey- the key to lock protected void unlockInternal(java.lang.Object oKey) oKey- the key to unlock protected BinaryStore createBinaryStore() protected void destroyBinaryStore(BinaryStore store) store- a BinaryStore returned previously from createBinaryStore protected void checkPage() protected void advancePage() protected void runTask(java.lang.Runnable task) task- the Runnable object to run on a separate thread protected void deactivatePage(SerializationPagedCache.WrapperBinaryStore store) store- the "page" to deactivate protected SerializationPagedCache.PagedBinaryStore instantiatePagedStore(int cPages) cPages- the maximum number of pages to have active at a time protected SerializationPagedCache.WrapperBinaryStore instantiateWrapperStore(BinaryStore store) store- the BinaryStore to wrap protected SerializationPagedCache.FakeBinaryStore instantiateFakeBinaryStore() Skip navigation links
http://docs.oracle.com/html/E22843_01/com/tangosol/net/cache/SerializationPagedCache.html
CC-MAIN-2015-32
en
refinedweb
Summary editA Starkit Description editStarkits are interpreted by kitsh, a Tcl package that is a component of a Tclkit - a single file Tcl/Tk interpreter.A starkit can be run either via a Tclkit: mytclkit mystarkit.kitor, as detailed below, by unpacking the starkit and running the included main.tcl filewhatwA Tclkit and a Starkit can be combined into a Starpack.The source code is part of Tclkit Documentation edit - Beyond TclKit - Starkits, Starpacks and other *stuff - presented by Steve Landers at the Tcl/Tk 2002 conference in Vancouver - Anatomia di uno StarKit ,by giorvio_v - a short summary of starkits, in Italian Community edit - official mailing list - to discuss tclkit/starkit/starpack ideas, development, use. Available Starkits editStarkits that have been published by various parties: Building a Starkit editBuilding a Starkit is very easy. Tclkit provides all the infrastructure, and sdx provides the tools. - Build Your First Starkit - how to convert a single-file application into a starkit - A Simple Multi-File Starkit Example - how to create a multi-file starkit - Complex Pure Tcl Starkit Example - bundling a large application into a starkit - Starkits with Binary Extensions - create cross platform, compiled applications - Starkits containing Tcl only extensions - So You Want to Use Starkits, Eh? - contains another good example Run a Starkit with any tclsh editA Tclkit is not strictly necessary to execute a starkit. To execute a starkit with any tclsh: - obtain the tclkit source code - add <path to tclkit sources>/kitsh.vfs/lib/vfs to TCLLIBPATH - unpack the .kit file using sdx - invoke <path to unpacked .kit file>/main.tcl Commands editCould someone list here the various starkit namespace procs that are available for use?For instance, there is: - starkit::pload - use like this: package ifneeded Mk4tcl 2.4.9.2 "package require starkit [list starkit::pload $dir Mk4tcl Mk4tcl]" - ... Adding Encodings editDavid : - Unpack your starkit if is not already done. - Create this directory : .../appname.vfs/lib/tcl8.4/encoding - Copy the needed encoding(s) .enc file(s) in this directory (from an existing tcl installation or from tcl CVS repository here : if you using the last tclkit). - re-wrap your starkit with these new files. - Add the following path to your application before wrapping and put the encodings in: (appname).vfs/lib/tcl8.5/encoding - Wrap with the standard starkit See Also edit - Demonstrating Starkits - ideas on how to present Starkits to the uninitiated - Sexy Starkits - by Ro - TclApp - a part of ActiveState's Tcl Dev Kit. - Adding a splash screen to a Starkit - Adding help to scripted documents - Creation of multi-platform Starkitted binary packages - execx2 - simplifies the invocation of executable files contained in starkits/starpacks, and virtual filesystems in general - Inspecting Your New Starkit - No magic or policy in starkits - Starkit - How To's - Starkit Meet Zip - Writing to your starkit - How to create my first starpack - Starting effective starkit-based pure-Tcl development: the starkit::* namespace - Differences between tclkit and tcl - starsite - Starkit is a mechanism not a file format - Starkit boot sequence - what has happened by the time main.tcl runs - Create starkit.ico for windows starpack History editStarkits were originally called Scripted DocumentsScripted Documents Are Obscure Misc editWhen.kitorWhen I start an ActiveTcl tclsh, and do a package names, I see: ActiveTcl TclInside MDD: I've noticed a strange problem with tclkit 8.4.6 under QEMU.I've got a Kanguru Zipper 4GB USB hard drive
http://wiki.tcl.tk/3661
CC-MAIN-2015-32
en
refinedweb
ASP.NET Tip: Create a BasePage Class for All Pages to Share Something I've been doing with my ASP.NET applications is creating a shared Page class from which all my Web pages inherit. This allows all the pages to share common functions, settings, and so forth without having to do much work. In ASP.NET 1.1, I used the BasePage class to help control the rendering of the page, but the Master Page in ASP.NET 2.0 eliminated this requirement. To create a BasePage, you just add a class to your project. In 2.0, this class will go into the App_Code folder. It will inherit from System.Web.UI.Page, so the beginnings of the class will look like the following (C#): public class BasePage : System.Web.UI.Page { } Once you have the class, you'll need to change your ASP.NET pages to inherit from this class instead of directly from System.Web.UI.Page. Your revised code-behind file for a Web page will resemble the following: public partial class MyPage : BasePage { protected void Page_Load(object sender, EventArgs e) { } } Now that you have the inheritance set up between the two files, you can add common elements to the BasePage class and make them available to the Web page. For instance, a database-driven Web application might want to automatically open a database connection on each page. You can add this code to the OnInit event of the BasePage class, like so: public class BasePage : System.Web.UI.Page { public SqlConnection ActiveConnection; protected override void OnInit(EventArgs e) { ActiveConnection = new SqlConnection(...); ActiveConnection.Open(); } } The variable named ActiveConnection will now be a live database connection available to any Web page. You'll also want to add the corresponding Close code to the OnUnload event if you open the connection in this manner. The other thing to remember is that any variables declared as private in the BasePage won't be visible to the Web page, and any variables declared as protected in the BasePage won't be available to the ASPX portion of the Web page, but they will be available to the code-behind part of the ASPX page. Future tips will discuss properties that instantiate objects when they are first requested rather than every time like this code does. You may have pages where a database connection isn't always required, so this adds time that isn't necessary.. Great idea!Posted by Neo An on 12/05/2012 07:35am Thanks for sharing this! Your instruction is very clear!Reply
http://www.codeguru.com/csharp/.net/net_asp/webforms/article.php/c11939/ASPNET-Tip-Create-a-BasePage-Class-for-All-Pages-to-Share.htm
CC-MAIN-2015-32
en
refinedweb
By now, you probably are aware that you can dynamically load XAP files using the Managed Extensibility Framework (MEF) within your Silverlight applications. Have you been scratching your head, however, and wondering how on earth you would actually test something like that? It is possible, and here’s a quick post to show one way you can. First, we need a decent deployment service. You’re not really going to hard-code the download and management, are you? I didn’t think so. If you need an example, look no further than the sample code I posted to Advanced Silverlight Applications using MEF. Here’s what the interface looks like: public interface IDeploymentService { void RequestXap(string xapName, Action<Exception> xapLoaded); AggregateCatalog Catalog { get; } } This keeps it simple. Request the xap file, then specify a delegate for a callback. You’ll either get a null exception object (it was successful) or a non-null (uh… oh.) Now, let’s focus on testing it using the Silverlight Unit Testing Framework. The first caveat is that you cannot use it on the file system. This means that your project will not work if you run it with a test page rather than hooking it to a web server (local or not). Doing this is simple. In your ASP.NET project, go to the Silverlight tab and add your test project. When you are adding it, there is an option to generate a test page. I typically have one “test” web project with all of my test Silverlight applications, so I will have multiple test pages. To run a particular test, you simply set your ASP.NET web project as the start up project, then the corresponding test page (we’re talking the aspx, not the html) as the start page. I usually delete the automatically generated HTML pages. Now we need to give MEF a test container. The caveat here is that, without a lot of work, it’s not straightforward to reconfigure the host container so you’ll want to make sure you test a given dynamic XAP file only once, because once it’s loaded, it’s loaded. This is what my application object ends up looking like: public partial class App { public AggregateCatalog TestCatalog { get; private set; } public App() { Startup += Application_Startup; InitializeComponent(); } private void Application_Startup(object sender, StartupEventArgs e) { // set up a catalog for tests TestCatalog = new AggregateCatalog(); TestCatalog.Catalogs.Add(new DeploymentCatalog()); var container = new CompositionContainer(TestCatalog); CompositionHost.Initialize(container); // now set up the unit testing framework var settings = UnitTestSystem.CreateDefaultSettings(); RootVisual = UnitTestSystem.CreateTestPage(settings); } } Here, I haven’t composed anything, just set up the container. Now I’m going to add a simple dynamic XAP for testing. I add a new Silverlight application and wire it to the test web site but do NOT generate a test page. I blow away the App.xaml and MainPage.xaml resources, and add a simple class called Exports. Here is my class: public class Exports { private const string TESTTEXT = "TestText"; [Export(TESTTEXT, typeof(string))] public string TestText { get { return TESTTEXT; } } } Yes, you got it – just a simple export of a string value. Now let’s write our test. I create a new test class and decorate it with the TestClass attribute. I am also running asynchronous tests, so it’s best to inherit the test from SilverlightTest which has some base methods for asynchronous testing. Let’s take a look at the set up for my test: [TestClass] public class DeploymentServiceTest : SilverlightTest { private const string DYNAMIC_XAP = "DynamicXap.xap"; private const string TESTTEXT = "TestText"; private DeploymentService _target; [Import(TESTTEXT, AllowDefault = true, AllowRecomposition = true)] public string TestString { get; set; } public DeploymentServiceTest() { CompositionInitializer.SatisfyImports(this); } [TestInitialize] public void TestInit() { if (Application.Current.Host.Source.Scheme.Contains("file")) { _target = null; } else { _target = new DeploymentService(); ((App) Application.Current).TestCatalog.Catalogs.Add(_target.Catalog); } } } So right now I’m simply setting up my targets. The property is key – by composing imports on construction, I register my test class with the MEF system. Right now, however, I haven’t loaded anything, so it won’t be able to satisfy the import. By using AllowDefault true, however, I tell it I’m expecting something later and setting it to null is fine. The recomposition is what will trigger an update once the catalogs change. I also reach out to the test catalog I set up in the main application and add the catalog from my deployment service to it. Note that if I am running on the file system, I don’t bother setting up my service. Next, I can add a stub to determine if I can even test this. If I am running from the file system, the deployment service is never set up. I created a helpful method that asserts an “inconclusive” when this is the case: private bool _CheckWeb() { if (_target == null) { Assert.Inconclusive("Cannot test deployment service from a test page. Must be hosted in web."); return false; } return true; } Now we can write our main test. First, we check to make sure we are in a web context. Then, we load the xap, and once it is loaded, confirm there were no errors and that our property was successfully set: [Asynchronous] [TestMethod] public void TestValidXap() { if (!_CheckWeb()) { return; } Assert.IsTrue(string.IsNullOrEmpty(TestString), "Test string should be null or empty at start of test."); _target.RequestXap(DYNAMIC_XAP, exception => { Assert.IsNull(exception, "Test failed: exception returned."); Assert.IsFalse(string.IsNullOrEmpty(TestString), "Test failed: string was not populated."); Assert.AreEqual(TESTTEXT, TestString, "Test failed: property does not match."); } } And that’s pretty much all there is to it – of course, I am also adding checks for things like contract validation (are you passing me a valid xap name?) and managing duplicates, but you get the picture. MEF, Silverlight, unit testing
http://www.wintellect.com/devcenter/jlikness/unit-testing-dynamic-xap-files
CC-MAIN-2015-32
en
refinedweb