global_01_local_0_shard_00000017_processed.jsonl/22879
Tesco’s Opening Gambit Editor’s Note: Kevin Coupe, founder of MorningNewsBeat.com, went to Hemet, Calif., to check out the inaugural Fresh & Easy Neighborhood Market store. Here are his on-the-scene observations. Finally. They’re open. I thought that if Tesco was going to go to all this trouble to travel 6,000 miles from home to open a new chain of food stores, the least I could do is travel 3,000 miles to the small city of Hemet, Calif., to see what they’re up to. Let me answer the essential question first: If they opened a Fresh & Easy Neighborhood Market near me, would I shop there? The answer is: I don’t know. But I certainly would give Fresh & Easy every opportunity to earn my business, mostly because the offering is just so different from those of most traditional supermarkets. You walk in, and one of the first things to strike you is the general sparseness of the facilities. It isn’t stark or unattractive, but it is utilitarian at almost every turn, much more so than the U.K. Tesco Express or Tesco Metro stores I’ve visited. When Fresh & Easy merchandises fresh produce, it is almost all prewrapped, except for the bananas and a few melons, in a style that is reminiscent of what Tesco has done back home. The prepared meals—whether sandwiches or ready-to-heat burritos and soups—all come in clear plastic packaging with a simple declarative label—this is a Fresh & Easy product, not available anywhere else. The packaged grocery comes in cut cases, so that replenishing stock is simple, and the low-cost image is reinforced in the same way that a membership club does it, though the sizes tend to be medium—not as small as in a c-store, and not as big as in a club. The place is loaded with help, which probably is an opening-week gambit as opposed to how it will be staffed a few months from now, and the folks working there are engaging and helpful. There’s lots of sampling. The front end consists of nine self-checkout lanes. 
The store is roughly 50% private label, with national brands sprinkled in where they will offer credibility, such as in cookies and cereal. But mark my words—if Fresh & Easy pans out and is as successful as Tesco wants it to be, you'll see a diminishing selection of national brands. In fact, that's my sense of the whole enterprise. I suspect that these stores are just Phase 1 in a much longer-range plan. I know people who believe Tesco will have 5,000 of these things in five years; I know others who think they just won't work. I'd guess the reality will be somewhere in the middle—they'll work, but they are part of a broader strategy for how and where Tesco wants to engage with U.S. customers. What else can I tell you? Well, the pre-wrapped grapes that I ate were crisp and neither too tart nor too sweet, so score one for Fresh & Easy. And the sushi was excellent—always a good indicator of whether a store is getting the freshness thing right. Fresh & Easy also is offering its own version of Two Buck Chuck—several wines going for $1.99. One final thought: Tesco leadership should be complimented for doing something different. Will it be the right formula to attract U.S. consumers, especially in the vastly different markets where it plans to put the stores? I have no idea. But Tesco is nothing if not crafty and innovative, and I suspect that as its strategy and tactics unfold, it will prove to be interesting to consumers and challenging to its competitors.
global_01_local_0_shard_00000017_processed.jsonl/22885
Bruce Foxton Bruce Foxton (born 1 September 1955, Woking, Surrey) is an English rock and roll musician, best known as the bass player in the punk rock bands The Jam and Stiff Little Fingers. In The Jam, he and drummer Rick Buckler played behind singer, guitarist, and songwriter Paul Weller. However, Foxton contributed greatly to transforming Weller's compositions from the demo form in which they were presented to the group, with melodic and innovative bass parts that have influenced many players. He initially joined as guitarist (while Weller played bass); the pair switched positions following the departure of guitarist Steve Brookes. Foxton also took lead vocals on a few tracks, most notably the singles "David Watts" (a cover of a Kinks track) and "News of the World", which was his own composition. Foxton penned other tracks as well, perhaps the most notable being "Smithers-Jones", done as a straightforward rock take for the B-side of "When You're Young" and later reworked with strings for the Setting Sons album. Stiff Little Fingers would regularly perform the song live after Foxton joined.
global_01_local_0_shard_00000017_processed.jsonl/22888
From Sun Sep 17 18:19:44 2000 Return-Path: Mailing-List: contact; run by ezmlm Delivered-To: mailing list Received: (qmail 41454 invoked from network); 17 Sep 2000 18:19:44 -0000 Received: from ( by with SMTP; 17 Sep 2000 18:19:44 -0000 Message-ID: <> Received: from [] by; Sun, 17 Sep 2000 11:19:17 PDT Date: Sun, 17 Sep 2000 11:19:17 -0700 (PDT) From: Krassimir Dimov Subject: Setting Tomcat to use Cocoon for *.xml requests To: MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii X-Spam-Rating: 1.6.2 0/1000/N Hello, I am trying to set up Tomcat to use Cocoon for .xml handling. I built cocoon, set up the classpath for Tomcat to reference cocoon.jar, xalan.jar, xerces.jar & fop.jar files. The only problem I'm having is the file - for initialization of cocoon. Tomcat cannot find it. I copied the file into the C:\tomcat\conf directory, and edited the web.xml file, located in the same directory, with the following entry: org.apache.cocoon.Cocoon org.apache.cocoon.Cocoon properties ..\conf\ org.apache.cocoon.Cocoon *.xml This is what I am getting when I try to run Cocoon.xml (http://localhost:8080/Cocoon.xml) I know that my problem is in referencing the file. Unable to open resource: ..\conf\ java.lang.NullPointerException at org.apache.cocoon.Cocoon.init( at org.apache.tomcat.core.ServletWrapper.initServlet( at org.apache.tomcat.core.ServletWrapper.handleRequest( at org.apache.tomcat.core.ContextManager.service( at org.apache.tomcat.service.http.HttpConnectionHandler.processConnection( at at Source) Any suggestions? Your help will be appreciated. Krassimir
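The web.xml entry in the message above lost its XML markup when the mail was archived. The fragments that survive (two occurrences of org.apache.cocoon.Cocoon, the properties parameter, the ..\conf\ path, and the *.xml pattern) suggest a servlet declaration plus a mapping roughly like the sketch below; the cocoon.properties file name is an assumption, not from the mail.

```xml
<!-- Reconstruction sketch of the kind of web.xml entry being described.
     The properties file name (cocoon.properties) is an assumption. -->
<servlet>
  <servlet-name>org.apache.cocoon.Cocoon</servlet-name>
  <servlet-class>org.apache.cocoon.Cocoon</servlet-class>
  <init-param>
    <param-name>properties</param-name>
    <param-value>..\conf\cocoon.properties</param-value>
  </init-param>
</servlet>
<servlet-mapping>
  <servlet-name>org.apache.cocoon.Cocoon</servlet-name>
  <url-pattern>*.xml</url-pattern>
</servlet-mapping>
```

The "Unable to open resource" error in the stack trace is consistent with the relative param-value not resolving from Tomcat's working directory, which is the problem the poster suspects.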
global_01_local_0_shard_00000017_processed.jsonl/22889
Subject: Re: questions... To: None <> From: Chris G Demetriou <[email protected]> List: amiga-dev Date: 12/03/1994 20:26:23 > > _all_ dynamically linked programs complain about it?! could you find > > out what objects in your libc use it, and tell me which they are? > Yes ... all of them complained about it. > crazytrain% grep semsys /local/src/lib/libc/gen/*.c > /local/src/lib/libc/gen/semconfig.c:/* return (semsys(3, cmd, p1, p2, p3)); */ > /local/src/lib/libc/gen/semctl.c:/* return (semsys(0, semid, semnum, cmd, &semun)); */ > /local/src/lib/libc/gen/semget.c:/* return (semsys(1, key, nsems, semflg)); */ > /local/src/lib/libc/gen/semop.c:/* return (semsys(2, semid, sops, nsops, 0)); */ > Obviously the /* */ came from me.. "those files don't exist any more." i.e.: 24 [sun-lamp] gen % pwd 25 [sun-lamp] gen % echo sem* echo: No match. those were the libc stubs that have since been removed and implemented differently (as syscalls, rather than wrappers around a syscall). 8-) sounds like your libc sources are out of date.
global_01_local_0_shard_00000017_processed.jsonl/22890
Subject: Re: Executing Files To: NetBSD Amiga <[email protected]> From: Matthias Scheler <> List: amiga Date: 12/19/1995 00:39:43 Hi Ola, you wrote in <>: > How do I set up a script file to be executable? chmod +x FILENAME > On IRIX I add the appropriate 'header' followed by the command strings, > chmod to executable, then run 'hash' (and of course, stick it in the > appropriate path). Do the same for NetBSD and it will work. > Also, how do I mount an Amiga Dos partition as writable so I can > transfer files from NetBSD? No chance, the AmigaDOS FS doesn't support writing yet. > Lastly, when using 'bash', what is the equivalent to 'setenv' ... > ...and/or how do I set up the appropriate .login/ ... > ... .bashrc to reflect paths, etc. Maybe you should grab the source of the shell you use under IRIX (tcsh?), compile and install it? Matthias Scheler
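The recipe the poster describes for IRIX carries over to NetBSD unchanged; a minimal sketch (the file name and its contents are made up for illustration):

```shell
# Write a script whose first line (the "header") names its interpreter,
# then mark it executable with chmod.
cat > hello.sh <<'EOF'
#!/bin/sh
echo "hello from a script"
EOF
chmod +x hello.sh

# The script now runs directly; the kernel uses the shebang line to
# find the interpreter.
./hello.sh
```

The 'hash' step from IRIX csh only refreshes the shell's command lookup table and is unnecessary when the script is invoked with an explicit path as above.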
global_01_local_0_shard_00000017_processed.jsonl/22892
Subject: Win'95 attributes support To: None <[email protected]> From: Wolfgang Solfrank <> List: netbsd-announce Date: 11/29/1995 16:13:27 NetBSD-current now has support for Win'95 long filenames and Win'95 separate creation/modification/access timestamps. If used, the behaviour is similar to the way Win'95 handles them. I.e., the names are written with mixed case, but case is ignored on lookup. Timestamps are written into previously reserved areas. There are three possible ways now to mount your DOS filesystem: 1. "mount -t msdos -o -l /dev/xxx /dir" will force support of Win'95 long filenames. I.e., files on the filesystem with long names will be displayed with this long name, and new files will be written with long names (as if created under Win'95). 2. "mount -t msdos -o -s /dev/xxx /dir" will not display any long filenames on files from the filesystem, and will not generate entries having long names. Note that, contrary to the old behaviour, this includes a translation from/to the DOS-850 code page to/from ISO-8859-1. 3. "mount -t msdos -o -9 /dev/xxx /dir" works similarly to the second version. In addition, it will ignore any Win'95 long filenames even when deleting or renaming files and also will not write separate timestamps. On filesystems having long filenames, this latter option will result in dangling long filename entries. There is a procedure in place (using a one-byte checksum) to detect matching long/short-name entries (the same as is used in Win'95), but as one might guess from the size of the checksum, this is far from perfect. The Win'95 scandisk routine can repair the damage, if the checksum doesn't match. Otherwise, the long name will be attached to the new entry that happens to be placed into the directory slot of the deleted file. To bring this into perspective relative to different DOS/Windows versions, 1. is equivalent to running ordinary Win'95, 2. is equivalent to running the DOS mode of Win'95 (DOS Version 7.0, not in a DOS window), and 3.
is equivalent to running any older version of DOS/Windows. If you mount an msdos filesystem without specifying -l, -s or -9, msdosfs will try to figure out whether there are any long filename entries in the root directory. If one is found, it will use -l, otherwise -s. This procedure obviously results in empty directories being populated with short names only. To force long filenames, you have to use the -l option.
global_01_local_0_shard_00000017_processed.jsonl/22896
Subject: Re: X To: None <> From: Frederick Bruckman <> List: port-mac68k Date: 10/27/1999 01:02:34 On Tue, 26 Oct 1999, Salvatore Mancini wrote: > I am trying to get X running on a IIci with 20 megs of RAM and a 1.8 gig > drive. This is what I get when I 'startx': > _X11TransSocketUNIXConnect: Can't connect: errno = 61 > .. > xinit says > Connection refused errno 61 unable to connect to X server > No such process errno 3 server error All that means is the server didn't start. If you would try "startx >startx.log 2>&1", you might be able to catch an error message or two from the X server itself. Commonly, you'll see "... resolution not supported", as the stock server requires that you've booted into monochrome mode. If you want color, has an overview of the solutions available, and links to them.
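One detail about capturing the server's output that often trips people up: the order of the two redirections matters. A small sketch (the log file name is arbitrary):

```shell
# Send stdout to the file first, then point stderr at wherever stdout
# now goes; both streams end up in the log. Written the other way
# round ("2>&1 >file"), stderr would still go to the terminal.
( echo "normal output"; echo "error output" 1>&2 ) >startx.log 2>&1
cat startx.log
```

With the server's own messages in the log, errors like "resolution not supported" become visible even though xinit only reports a generic connection failure.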
global_01_local_0_shard_00000017_processed.jsonl/22899
To: Hans Petter Selasky <> From: Hubert Feyrer <> List: tech-kern Date: 03/01/2006 12:09:54 On Tue, 28 Feb 2006, Julian Elischer wrote: > if NetBSD has accepted Hans's rewritten drivers, then that takes away one of > the few remaining obstacles > for having them in FreeBSD, which is the fact that we were trying to maintain > compatibility with > NetBSD. While I can't technically judge the details, what I wonder is what the benefits of this new USB stack are, and what drawbacks the existing one is supposed to have that justify replacing a working piece of software. And some estimate of whether it's really easier to replace than to fix. - Hubert
global_01_local_0_shard_00000017_processed.jsonl/22905
Google's Gmail Motion Prank Turned Into Reality [VIDEO] Remember Gmail Motion, the new feature that lets you use body gestures to compose and send emails in Gmail? It was obviously an April Fools' joke, but now it's also real, courtesy of the folks from the Institute for Creative Technologies. The technology is jokingly dubbed SLOOW - Software Library Optimizing Obligatory Waving - and it uses a Microsoft Kinect camera to control Gmail. The same team used the technology, which is actually called the Flexible Action and Articulated Skeleton Toolkit (FAAST), to play World of Warcraft using only body motions in December 2010. Amazingly enough, it works pretty much as Google had intended: You can type text into Gmail by using body gestures and send an email by "licking" the stamp and slapping it onto an imaginary envelope. Check out the video below for a demonstration.
global_01_local_0_shard_00000017_processed.jsonl/22911
We know that $\phi$, the golden ratio, is algebraic. Is it known whether $\log(\phi)$ is algebraic? Thank you! PS. I am not in number theory, so I apologize in advance if this is obvious. You mean we know that $\phi$ is algebraic? –  Qiaochu Yuan May 27 '11 at 8:20 Certainly. Typo. Thanks! Fixed it. –  William May 27 '11 at 8:28 1 Answer (accepted) $\log (\phi)$ is transcendental. The Lindemann–Weierstrass theorem implies that if $\alpha$ is a nonzero algebraic number, then $e^\alpha$ is transcendental. So since $\phi$ is algebraic, $\log (\phi)$ is transcendental. Whoops. The statement of Lindemann-Weierstrass is slightly stronger than I remember. –  Qiaochu Yuan May 27 '11 at 9:04 The trick is to have forgotten the statement entirely, so you have to look it up. –  Chris Eagle May 27 '11 at 9:09 I see. Thank you. –  William May 27 '11 at 9:22
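For the first half of the argument, sympy (assumed available) can exhibit the minimal polynomial that witnesses $\phi$ being algebraic; this sketch is an illustration, not part of the original thread:

```python
from sympy import Symbol, minimal_polynomial, simplify, sqrt

x = Symbol("x")
phi = (1 + sqrt(5)) / 2  # the golden ratio

# phi is algebraic: it is a root of a degree-2 polynomial with integer
# coefficients (its minimal polynomial over the rationals).
p = minimal_polynomial(phi, x)
print(p)

# Sanity check: substituting phi back in gives zero.
assert simplify(p.subs(x, phi)) == 0
```

Lindemann–Weierstrass then does the rest: since $\phi \neq 1$ is algebraic and nonzero, $\log(\phi)$ cannot be a nonzero algebraic number, so it is transcendental.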
global_01_local_0_shard_00000017_processed.jsonl/22913
Suppose that X1, ..., Xn form a random sample from a uniform distribution on the interval [0, theta], and that the following hypotheses are to be tested: null: theta greater than or equal to 2; alternative: theta less than 2. Let Yn = max(X1, ..., Xn), and consider a test procedure such that the critical region contains all the outcomes for which Yn is greater than or equal to 1.5. Determine the power function of the test. Determine the size of the test.
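The central computation behind both parts is P(Yn >= 1.5 | theta), which for theta >= 1.5 equals 1 - (1.5/theta)^n (and is 0 for theta < 1.5), since Yn <= 1.5 exactly when all n observations fall in [0, 1.5]. A Monte Carlo sketch (numpy assumed; n and theta chosen arbitrarily) is one way to check a candidate formula like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta, trials = 5, 2.0, 200_000

# Simulate Yn = max(X1, ..., Xn) for uniforms on [0, theta], many times.
samples = rng.uniform(0.0, theta, size=(trials, n))
yn = samples.max(axis=1)

# Empirical probability of falling in the critical region {Yn >= 1.5}.
power_mc = (yn >= 1.5).mean()

# Candidate closed form, valid for theta >= 1.5.
power_closed = 1 - (1.5 / theta) ** n

print(power_mc, power_closed)  # the two should agree to about 2 decimals
```

The size of the test is then the supremum of this rejection probability over the null parameter values theta >= 2.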
global_01_local_0_shard_00000017_processed.jsonl/22919
Here's my problem: I know I've posted something on Stack Overflow about a particular topic. I know how to search all my answers with a given keyword, and that feature works well. But, if I don't find anything, I assume that it must have been a comment that I posted on the subject, and I'd like to be able to find it, and the question it belongs to. I have read: this question, this question, and this question on Meta. Basically, all the responses to those are about 3 years old, so it seems reasonable (to me) to ask this again ... I've also tried just searching on Google with the keyword(s) and my SO username (which I guess unfortunately is (a) not that unusual, and (b) a substring of other words, like unfortunately!), but without much success. Is there a Stack Exchange API or query that I could run to search my comments, using a keyword or keywords? I did see one response by Jeff Atwood in one of the questions linked to above indicating that the (2009) inability to do this was annoying him, too, so hopefully I'm not the only one who's tried to do this. +1 @Nate I didn't see yours when I posted mine. But I also asked for some limited retag/folder features in my question. –  bonCodigo Jan 7 '13 at 14:15 Something is very wrong with this system when I posted this question 4 months ago, it got closed almost immediately (despite having upvotes, with no downvotes), and then finally was allowed to be re-opened (as originally posed) and voted on months later. But, hey, at least I should be happy that I finally have enough rep points to actually participate here and downvote a few of the posts I disagree with :( ... eagerly awaiting this comment to be removed, like my last comment here. –  Nate Jan 7 '13 at 22:38 See if the answer here helps. –  yorkw Jul 19 '13 at 1:59 @yorkw, yes, it definitely does! Although I'd love it to be built into the stack exchange web application itself, your query seems to do what I wanted.
If you'd like to post your answer again here, I will accept it as a solution. Thanks. –  Nate Jul 19 '13 at 5:33 2 Answers I am adding this answer based on the discussion I had after posting a very similar question. It's truly a great feature if we can have it. I proposed the feature based on the SO inbox and comments tab. Each time we want to find some interesting questions/answers that we encountered or interacted with, currently there is only one way, which is very tedious: 1. Go one by one through each page of the comments 2. Go one by one through each page of the inbox 3. Go one by one through each vote... It can consume lots of time, obviously. So along with the most basic keyword search on comments/responses/inbox, I suggest having the following bonuses as well: • Let users categorize them with their own tags (limited number of tags allowed per user based on reps) • Add into folders (with limited numbers of folders allowed per user based on reps) like in Google inbox ... I think the only thing right now that we can't search in our inbox is comments, so once we get that it will cover everything. –  Lance Roberts Jan 8 '13 at 14:56 @LanceRoberts you are a generous, supportive meta Stacker. :-) –  bonCodigo Jan 8 '13 at 15:08 @bonCodigo nice thought you have given. +1 from my side! –  Arup Rakshit Jan 8 '13 at 17:58 I have created a data explorer query to search a post by comment texts and userId: SELECT Id AS [Comment Link], Score, Text FROM Comments WHERE UPPER(Text) LIKE UPPER('%##CommentText##%') AND UserId = '##UserId##' I have also created a query to search for multiple keywords: You can edit the query to customize it for your use.
global_01_local_0_shard_00000017_processed.jsonl/22920
According to Konrad's post on meta, I know I've seen some of his posts with high votes deleted (I admit that I was also involved), so I searched some of my saved data back on my PC. Here is what I found so far. When I see it again now, I feel that all of those posts are valid. So can we undelete those, and/or merge them (lock if needed?), if possible? I feel guilty, since I was involved in the deletion of some of those questions. Some of those answers are really, really good, if I look at them again now. Do you think those were appropriate to delete? (Feel free to downvote to show disagreement.) Also see why-are-we-deleting-instead-of-merging? –  Simon P Stevens May 20 '10 at 11:33 Another one that I don't think should have been deleted. –  HAL 9000 May 20 '10 at 12:09 3 Answers (accepted) I think deleting posts should be done as a last resort or for really bad posts. This was right to be deleted; it doesn't add anything and is just spam. There are of course a lot of other examples. Something like this should (IMHO) have been left undeleted or merged with one of the duplicates. It wasn't a great question and the answers weren't anything amazing, but I remember googling for a similar sort of thing 6 years ago. Even if it is a duplicate, the question is asked differently to the duplicates and so will probably come up for different google search terms. If this is to become a better resource for programmers, duplicates should be left open for google to index. One thing I have noticed is there is no FAQ entry that I could find that explains when questions should be deleted. +1 for "needs a FAQ entry explaining delete powers" - Also see this question –  Simon P Stevens May 20 '10 at 11:31 You can flag deleted questions for moderator attention, if you believe they should be merged.
I personally shouldn't decide, so I am posting here :) –  YOU May 20 '10 at 11:41 First of all, thanks for finding these links – awesome. I've had a look at them now and some of these threads are actually incredibly good, even the subjective/argumentative question on the virtues of VB. The bottom line is that this has made me aware once again of the existence of the "tools" tab and I vow to use it more frequently from now on, to take active part in the deletion (and hopefully more common undeletion) of questions. (So something good has come out of my rep loss.) EDIT Three of these questions (plus one mentioned in the comments) have now been merged and one (the last) is probably not a very big loss. I'd count that as a big success. 500 out of 64,000 is massive? –  Gnome May 20 '10 at 11:51 @The Cat: All right – tiny. I meant it in relation to adjustments done in a rep recalc. I don't have statistics but 500 seems quite a lot for a mere recalc. –  Konrad Rudolph May 20 '10 at 12:43 In the recalc a few months ago, some users lost 30-50% of their rep. (Others gained, too.) Less than 1% doesn't seem like much to me. *shrug* –  Gnome May 20 '10 at 13:39 @The Cat: Right, I didn't know that. True, compared to that 500 is tiny. –  Konrad Rudolph May 20 '10 at 13:55
global_01_local_0_shard_00000017_processed.jsonl/22921
In this question, the solution was discovered after some discussion in the comment fields. Should the answer be updated/edited to reflect the correct solution? Or should another answer be posted? 1 Answer (accepted) The answer should be edited to reflect the correct solution. Usually when you mark an answer as accepted, the normal tendency is to only read the answers and, most often, skip the comments (my own experience; it need not always be true). Hence I think the answers should reflect the correct solution. The above assumes the comments are on the answers given to that question. However, if the correct answer is discussed in the comments on the question itself, then a new answer should be posted by the one who gave it. If not, the OP can himself post the correct answer, mentioning the person who gave it in the comments, to give him due credit.
global_01_local_0_shard_00000017_processed.jsonl/22943
New York Times Reporter Explains How to Break Bad Work Habits Charles Duhigg breaks down how habits work and what can be done to change them. Many in the workforce may suffer career setbacks due to bad work habits. But as Charles Duhigg, author of the New York Times best-seller "The Power of Habit: Why We Do What We Do In Life and Business," explains to U.S. News, destructive behaviors don't have to be a permanent fixture in your work life. Habits are driven by rewards, he notes. To change a habit, Duhigg says, "you really have to pay attention to what triggers that behavior and, most importantly, what reward it provides you." During the discussion, Duhigg dished about how he refashioned some of his habits to become a more productive worker. But, as he points out, you shouldn't put a timeline on trouncing your toxic behavior. "There's this sort of old wives' tale – it takes 21 days to change a habit. Unfortunately, there's no scientific evidence that's true," he says.
global_01_local_0_shard_00000017_processed.jsonl/22950
CNN Anchor Goes Personal on Vitter, Who Responds 'The Good News Is...It's Not Up to CNN' to Pick GOP Nominee On Monday's Early Start, CNN co-host Ashleigh Banfield insisted to Sen. David Vitter (R-La.) "I got to" bring up his 2007 prostitution scandal, so she could ask how Newt Gingrich could "manage the ... Pornographer Exhibits Higher Journalistic Standards than CNN Anchor Senator Confesses 'Sin,' Accepts Responsibility
global_01_local_0_shard_00000017_processed.jsonl/22951
Consider the interval D3 G3. This would form a perfect fourth. In a sequential interval, say a quarter note D3 followed by a quarter note G3, this still "sounds" like a perfect fourth, so using that interval label seems fine. However, suppose instead the sequential interval is D3 followed by G2, so the second note is lower in pitch rather than higher. Would it be more appropriate to label this interval as a perfect fifth instead of a perfect fourth? In other words, should an interval be labelled based on the lowest pitch, regardless of whether it comes before or after another note in time? 2 Answers (accepted) The interval label is always going to be based on the number of half-steps between the two pitches. Since there are 5 half-steps between D3 and G3, that is a perfect fourth. Thus, the 7 half-steps between D3 and G2 make it a perfect fifth. @Peter S admittedly, I thought the same thing you mentioned in your question for a long time before being taught this concept =) Good question! –  jadarnel27 Aug 28 '11 at 3:02 Indeed. Intervals are based purely on the absolute difference between notes. "Negative" intervals (high note to low note) are no less natural than "positive" (low to high) intervals. So D3->G2 is the same as G2->D3, which is obviously a fifth and not the same as the D3->G3 fourth. –  Matthew Read Aug 28 '11 at 3:13 All very helpful, thanks! –  Peter Skirko Aug 28 '11 at 5:14 A fifth and a fourth are two sides of the same coin, like an implication and its contrapositive. "If it is a real piano, it has strings inside" means the same thing as its contrapositive "If it does not have strings inside, it is not a real piano." It's the same but different. Likewise D3->G3 and D3->G2 are "the same" because they both have a D and G and can be used against the same chords, but "different" in that one goes up a fourth and the other goes down a fifth.
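The half-step arithmetic in the answers above is easy to mechanize. A small sketch (the helper function and the MIDI-style note numbers, with C4 = 60, are illustrative additions, not from the answers):

```python
# Name a few simple intervals from the absolute distance in half-steps.
NAMES = {5: "perfect fourth", 7: "perfect fifth", 12: "octave"}

def interval_name(note_a, note_b):
    semitones = abs(note_b - note_a)  # direction does not matter
    return NAMES.get(semitones, f"{semitones} semitones")

# D3 = 50, G3 = 55, G2 = 43 in MIDI numbering.
print(interval_name(50, 55))  # perfect fourth (D3 up to G3)
print(interval_name(50, 43))  # perfect fifth (D3 down to G2)
```

Taking the absolute value is exactly the point made in the comments: D3->G2 and G2->D3 are the same interval.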
global_01_local_0_shard_00000017_processed.jsonl/22970
"Maybe you've got to win more than one of them before they give you credit for it. "You only read Faldo is a six-time major winner, which he is, he's been a great champion and I respect him for his career, but you don't see the press writing that he was handed two or three out of his six. "But that's life. It doesn't affect me any more. I get on with it, I do my job and I go home to my kids. And it won't get any different until I win another one. I hope that day's coming soon." Lawrie won the Qatar Masters the same season and triumphed in the 2001 Dunhill Links Championship and the Wales Open in 2002, the last of his five European Tour victories. But his best Open finish has been 42nd in 2001, though he was 15th at the Masters in 2003. "I'll always be extremely proud of what I achieved that week and if I never win another tournament or never play golf again, I'll still be proud of what I achieved, so nobody will ever take that away from me," he said. City plans Lawrie tribute 20 Jul 99 |  UK News Lawrie wins dramatic Open 19 Jul 99 |  Golf The BBC is not responsible for the content of external internet sites
global_01_local_0_shard_00000017_processed.jsonl/22971
Apache's lead over Microsoft's IIS goes poof Technology writer Glyn Moody notes that open-source software developer Apache's lead in Web servers over Microsoft's Internet Information Services (IIS) is at its skinniest ever: 10 percent. Apache continues to gain (1 million sites last month). But Microsoft's IIS is also growing--and at a faster clip (3 million sites last month). As Glyn suggests, it may not matter: Apache's job may well be done in proving the viability of open-source projects and paving the way for many more. I doubt, regardless, whether Microsoft is resting on its laurels. If you click through to the Netcraft page, … Read more Eclipse, a new model for open-source innovation As I told Mike Milinkovich, executive director of the Eclipse Foundation, my understanding of Eclipse is several years old. I called him today to get an update on Eclipse, and learn what all the fuss is about. As it turns out, quite a bit. Eclipse may be the most important open-source "project" that people outside the industry, and even some within it, have never heard of. Here's why.… Read more The unthinkable happens: Apache gives way to Microsoft's IIS Open source @ IBM: Savio Rodrigues speaks Ask a simple question, get a simple, but subtle answer. I asked Savio Rodrigues, who replaced me on the Open Sources blog but originally blogged here, to comment on the state of open source at IBM. He gave me a bit more than that. You know, IBM, the company that essentially carried open source into the enterprise on its back in 2000 when it pledged $1 billion to fund Linux. Lately, though, IBM's has been less flashy with its commitment to open source though, as Savio points out, no less involved. As Savio reports, however, IBM's commitment to open source is broader than source code. Open source without open standards isn't of much interest to Big Blue. 
In this fifth installment of the Open Source @ Series on The Open Road, Savio gives us much to think about in terms of the power of open source...and what it means in the absence of standards. Savio writes...

Matt Asay asked the question: What is the State of Open Source at IBM? Our answer? Excellent!… Read more

In the trenches with...Brad Nicholes of Novell

Back when I was asked by Chris Stone (then Novell's vice chairman) to join the Linux Business Office at Novell, I could honestly count on one hand the number of employees who had any understanding of and experience with open source. Brad Nicholes was one of them. Brad is an understated guy - he's not the sort of person to volunteer to write for this In the Trenches series. No, I had to go to him and solicit his involvement. I suspect even then he preferred to write code, but he agreed to do it, anyway. I have a tremendous amount of respect for Brad. He was the voice of experience on Novell's Open Source Review Board, having earned the distinction of "member" with the Apache Software Foundation. He provided the best insight as to how Novell's forays into open source would be interpreted. Now, of course, the company has become very active in the open source world, but Brad continues to provide expert guidance with the OSRB and elsewhere within Novell. If you get the chance to meet Brad, you'll like him as I do. He's a great person, and a great asset to Novell. Some of the insight below is among the best we've had on The Open Road.

Name, company, title, and what you actually do

Brad Nicholes, Senior Software Engineer, Novell. I'm currently working on the Data Center Automation product. In reality, I do a lot of different things. I have spent a lot of time over the last 6 years porting and maintaining the Apache HTTP server on the NetWare platform and I am a member of the Apache Software Foundation. I have started and managed a few smaller Open Source projects and contributed to others.
I have given presentations at various conferences about Apache and Open Source in general. I am also a member of Novell's Open Source Review Board, which is primarily responsible for reviewing Open Source usage and licensing issues within Novell. I have found that by participating in all of these activities, my job ranges from "in the trenches" software design and coding to project administration to having to understand and consult with management about corporate policy and procedures as well as how legal matters can affect software development (especially in the Open Source world). … Read more

The Open Source CEO: Gianugo Rabellino, Sourcesense (Part 16)

Nearly every CEO profiled in this series has several years of experience, and comes from a prominent open source company. I wanted to change lanes a little with this next one, so as to get the perspective of a new CEO with a freshly-born startup. Bonus points were given for finding someone outside the United States. Therefore, for this sixteenth installment of the Open Source CEO Series, I reached out to Gianugo Rabellino, CEO and Co-founder of Sourcesense. Gianugo had been an early critic of my company, Alfresco, challenging our bona fides as an open source company. I credit Gianugo, in part, with helping us make the shift to a 100% GPL model (though he probably would have preferred we move to an Apache license, given his affiliation with the Apache Software Foundation :-).

Name, position, and company of executive

Gianugo Rabellino, CEO and Co-founder of Sourcesense.… Read more

The Open Source CEO: Mark Brewer, Covalent (Part 15)

Covalent was one of the pioneers in commercial open source. Unfortunately, Covalent suffered through the dot-com bubble, along with the rest of the industry. Today, Covalent lives on under the guidance of Mark Brewer (as well as in Hyperic, which spun out of Covalent several years ago).
I caught up with Mark for our fifteenth installment of the Open Source CEO Series, hoping to glean some lessons from an open source company that rose, then fell, and is rising again. I met him in 2003/04 to discuss a possible investment, but Mark and team opted to bootstrap their way back to profitability, and have done exceptionally well for themselves.

Name, position, and company of executive

Mark Brewer, CEO, Covalent Technologies.… Read more

WSO2 releases Synapse-based open-source ESB

Open-source start-up WSO2 on Monday released an open-source enterprise service bus based on the Apache Synapse project. Called WSO2 ESB, the server software is designed to integrate different applications by translating between different protocols and converting different XML formats. The product is based on Synapse, an open-source ESB done at the Apache Foundation with the participation of WSO2 employees. The company adds additional features on top of Synapse including a Web-based administration console and a registry and repository, said Paul Fremantle, WSO2 co-founder and its vice president of technical sales. There are several open source ESB products in the marketplace, … Read more
First Image Results tagged "secret messages to strangers"

November 27, 2012

On Thanksgiving, I spent the day drawing messages on bits of paper. I saw a quote David Mack shared on Twitter. He was paraphrasing Die Antwoord, something that Yo-Landi said to Ninja: "Imagine your most awesome future version of yourself. Now be that person."

The image stuck with me. I grabbed a sharpie and drew the message over and over, sitting on my floor. I made 20? 30? of them? And I decided I was going to fold them into paper airplanes, go to the movies, and throw them at people.

My best friend and I went off to the theater, and it was desolate. The trick then was to throw the paper airplanes places people would find them. We hit elevator buttons and tossed the paper airplanes into empty elevators, as the doors closed. We sailed them down hallways. We stalked the places we left them, and saw how people subconsciously stepped over them on the floor, never looking down. Knowing enough something was on the ground, enough to avoid it, but not interested in whatever it was.

The first person I saw pick one up was a small girl, maybe seven years old. We said afterward that this was probably the best possible person to get lost in imagining their best future self.

The next morning, we went to eat, and I had an airplane on the table next to me. I accidentally nudged it, and both myself and the waitress looked down just as it landed on the floor between us. "Where did THAT come from?" she asked, delighted. I shrugged, "I have no idea," smiling. The waitress picked it up, laughing, and asked the wait staff if they'd been throwing paper airplanes. She drew her hand back in the air and let it sail into the air above us, into the kitchen.
Changing only a few words in the first paragraph of Jonathan Medved's "An SEC Rule Change Opens a New Era for Crowdfunding" (op-ed, Oct. 10) will make clear the major problems with the SEC rule change:

Potential suckers will soon begin seeing scam solicitations pop up in their Facebook news feeds and in their email inboxes thanks to a major rule change from the Securities and Exchange Commission. In September, the agency removed the decades-old ban on public solicitation for dubious investments. This means bad investments can now be marketed to the naive public, which will allow fraudsters to reach a much broader audience than securities law used to allow.

Thomas Cusick
University at Buffalo
Buffalo, N.Y.
So, I've recently been reading up on Schwarzschild wormholes and I've learned that they cannot exist because they violate the 2nd Law of Thermodynamics. What I'm asking is: Why do they violate the Law? I probably sound like an idiot, but I just can't understand why they violate Thermodynamics.

Perhaps you could include a reference or synopsis for the fact that these wormholes violate the 2nd law? –  BebopButUnsteady Jul 20 '11 at 3:33

1 Answer (accepted)

I don't know why they should violate thermodynamics either, but they don't exist because they're static. They cannot be created at any finite time - they must have existed since the beginning of time and will exist forever. The physically realistic Schwarzschild solution is created from collapse and does not have the second asymptotic region.

are all dynamic solutions ruled out? –  lurscher Jul 20 '11 at 16:15
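As an added illustration (not part of the original exchange), the "static" point in the answer can be read off directly from the Schwarzschild line element:

```latex
ds^2 = -\left(1 - \frac{2GM}{c^2 r}\right) c^2\, dt^2
     + \left(1 - \frac{2GM}{c^2 r}\right)^{-1} dr^2
     + r^2\, d\Omega^2
```

No metric component depends on the time coordinate $t$, so the maximally extended solution (including the wormhole bridge between its two asymptotic regions) is the same at every moment: it cannot be assembled at any finite time, which is exactly the objection the answer raises.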
Can the findings of the Physics Nobel Laureates of 2011, namely the overpowering existence of dark energy (vacuum energy), have any implications in the quest to combine Quantum Mechanics and General Relativity? Maybe toward a theory of Quantum Gravitation?

If anybody had done this successfully, chances are we would have heard about it. We haven't, but I don't know of any reason why it's impossible. So I would have to say that nobody knows. –  Peter Shor Dec 10 '11 at 20:03

2 Answers

YES. The cosmological constant is extremely fine tuned. In a nonsupersymmetric world, bosons contribute enormous zero-point energies to the cosmological constant while fermions contribute an enormous negative amount. For both contributions to cancel to one part in 10 to the 123 is nothing short of miraculous. No other mechanism than unbroken symmetry appears to explain such fine-tuning needed for the evolution of life. Increase the cosmological constant by a few orders of magnitude and sufficient structure formation of galaxies and stars won't happen. This points to the anthropic principle giving a special role to consciousness and needs a multiverse of pocket universes with different laws of physics. This fits in very nicely with the landscape of compactifications in string theory, and the theory of eternal inflation. In string theory, any vacuum with a positive cosmological constant has to be metastable, and if so, our phase will have to decay to a more stable vacuum in the future. As long as we remain in our current phase, the maximum entropy of our causal patch of the universe is bounded by the holographic bound of 10 to the 123.

Dark energy certainly can have implications for unifying QM and GR, in the sense that if we do develop a proper theory of quantum gravity, it should (probably) explain dark energy along with everything else.
So any candidate theory that does give the correct density for dark energy becomes much more appealing than one that does not. So in this sense we can use dark energy as a "filter" for candidate QG theories. However, I doubt that the knowledge of dark energy can lead directly to a quantum theory of gravity. If that were possible without being too difficult, someone would probably have done it already, and as Peter Shor posted in a comment, we probably would have heard about it.

I didn't say that if it were possible, it's likely that someone would have done it. I said that if somebody had done it, we would likely know. Thus, since nobody has done it, we can't tell whether it's possible (although I agree it is unlikely). –  Peter Shor Dec 11 '11 at 4:04

@Peter my mistake, sorry about that. I've edited to fix it. –  David Z Dec 11 '11 at 4:06

What about us living in de Sitter space rather than AdS? I wouldn't be surprised if string theorists argued that ST predicts AdS if we didn't have the observed dark energy. –  JollyJoker Jan 10 '12 at 5:41
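As a rough, back-of-the-envelope illustration of the fine-tuning mentioned in the thread (added here for context; the Planck-scale cutoff and the exact exponent are conventional assumptions, not part of the original answers): summing zero-point energies up to the Planck mass gives a naive vacuum energy density vastly larger than the observed one,

```latex
\rho_{\text{vac}}^{\text{QFT}} \sim M_{\text{Pl}}^4 \sim (10^{28}\,\text{eV})^4,
\qquad
\rho_{\text{vac}}^{\text{obs}} \sim (10^{-3}\,\text{eV})^4,
\qquad
\frac{\rho_{\text{vac}}^{\text{QFT}}}{\rho_{\text{vac}}^{\text{obs}}} \sim 10^{124}
```

which is the same order of magnitude as the "1 part in 10 to the 123" cancellation quoted above; the precise exponent (anywhere from about 120 to 124) depends on the cutoff and unit conventions.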
Hopefully this is a simple question, I just can't seem to get my mind around it. I'm to take the limit of the Fermi-Dirac distribution for $T \rightarrow 0$. In this limit the chemical potential is equal to the Fermi energy $\mu = \epsilon_F$, and all states of energy below the Fermi energy are occupied, while all states above are empty. Following this argument I would say that the Fermi-Dirac distribution tends to a step function with argument $\epsilon_F - \epsilon$, such that
$$ f \rightarrow \Theta(\epsilon_F - \epsilon) \quad \text{for} \quad T \rightarrow 0, $$
which is one for $ \epsilon < \epsilon_F $ and zero for $ \epsilon > \epsilon_F $.

My problem is that I have found the result stated in a textbook and a couple of other cases, where it's stated as
$$ f \rightarrow \Theta(\epsilon - \epsilon_F) \quad \text{for} \quad T \rightarrow 0. $$

Can someone tell me which result is correct and maybe explain why the second result is correct if it is so.

Which textbook did you find it in? Your expression is correct by the way. –  Olaf Dec 13 '12 at 17:39

Thank you.. I believe I found it in Quantum Theory of the Electron Liquid by Giuliani & Vignale and afterwards stated the same way at least 1 or 2 places on the internet (physics forums), but I'm not 100% sure. I will check up on it when I get back after Christmas. Either way, if both of you agree with my intuition, I'll stick with that. –  Rasmus Søgaard Christensen Dec 22 '12 at 22:23

1 Answer (accepted)

If we neglect the possibility of negative temperature, then OP is right: The Fermi-Dirac distribution
$$f_{FD}(\epsilon) ~\longrightarrow ~\Theta(\epsilon_F - \epsilon) \qquad \text{for}\qquad T ~\longrightarrow ~0^{+}, $$
where $\Theta$ is the Heaviside step function.

Why the wikipedia link? –  Magpie Apr 28 '13 at 23:10
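The limit can also be checked directly from the distribution itself (this short derivation is added for illustration and was not part of the original exchange):

```latex
f(\epsilon) \;=\; \frac{1}{e^{(\epsilon-\mu)/k_B T} + 1},
\qquad
\lim_{T \to 0^{+}} f(\epsilon) \;=\; \Theta(\epsilon_F - \epsilon)
```

For $\epsilon < \epsilon_F$ the exponent $(\epsilon-\mu)/k_B T \to -\infty$, so the exponential vanishes and $f \to 1$; for $\epsilon > \epsilon_F$ the exponent diverges to $+\infty$ and $f \to 0$. This confirms the asker's expression $\Theta(\epsilon_F - \epsilon)$. (Exactly at $\epsilon = \epsilon_F$ the limit is $1/2$, which does not affect the step-function form.)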
Pokémon Wiki

Ruin Maniac

Revision as of 16:33, March 24, 2012 by Crimsonnavy (Talk | contribs)

A Ruin Maniac is a Trainer Class that was introduced in Generation III. They appear as a stereotypical explorer, shown in Generation III as old archaeologists whose outfit resembles that of a Hiker's, and shown in Generation IV as an old safari tourist and as a young archaeologist with a large green backpack. Ruin Maniacs use several types of Pokémon, including Ground, Rock, and Steel-type Pokémon. They can usually be found near caves and tunnels in-game.

Battle sprites: Ruin ManiacRSEsprite (RSE), Ruin ManiacFRLGsprite (FRLG), Ruin ManiacDPPtsprite (DPPt/HGSS)
The company I work at recently hosted a web service in Windows Azure and announced that. Now trade online magazines say lots of meaningless stuff like "company X moves to the cloud", "company X drops desktops for the cloud", etc.

It looks like there are lots of materials out there (starting with Wikipedia) that are very lengthy and talk a lot about "services" and "low entry price" and other stuff, but I've read all that and don't see how they could be helpful for a layman in drawing a line between a service in a cloud and Stack Exchange, which is also a service but is run on brick-and-mortar servers in a colocation.

Now from my experience with Windows Azure the real difference is the following. With a cloud, the service owner rents hardware, network bandwidth and the right to use the middleware (Windows 2008 that is used in Azure roles, for example) on demand, and there's also some maintenance assistance (for example, if the computer where a role is running crashes, another computer is automatically found and the role is redeployed). Without a cloud, the service owner will have to deal with all that on his own.

Will that be the right distinction?

5 Answers

(accepted) Yes, pretty much. With the "cloud" (as in "cloud providers"), you are renting the diskspace, bandwidth, CPU and memory owned by the provider and the means to use them from your software. They give you the infrastructure and you don't own the hardware.

There are other forms of cloud computing that don't involve these providers, where you (the organisation) owns the hardware as well. In either regard, this mostly means that your software is running on a distributed network of computers, available on the Internet.

+1 for being clear, concise, and cutting through all the hyped industry BS. –  maple_shaft Nov 10 '11 at 13:39

They've also provided the means to pull it all together and make it work.
–  JeffO Nov 10 '11 at 13:44

@ThomasOwens - come on. The context of the question is clear that the OP is asking about cloud providers, not the Internet as the "cloud". –  Oded Nov 10 '11 at 13:54

@ThomasOwens you're still renting the resources, only now you're renting them from your internal IT services group rather than a 3rd party (small companies won't have the resources to host private clouds that offer any of the supposed cloud advantages, which all demand physically separated hosting centers). –  jwenting Nov 10 '11 at 14:22

While other answers make some useful and meaningful points regarding cloud computing, this answer cuts straight to the practical, pragmatic heart of what people are generally talking about when they use that desperately over-loaded term, cloud computing. +1 –  Adam Crossland Nov 10 '11 at 14:38

Cloud computing says absolutely nothing about who owns the resources. Cloud computing is an architecture for developing distributed, network-based applications. There are a number of cloud computing service providers out there, such as Azure Services Platform, Amazon Web Services, Google App Engine, and a number of others. However, using someone else's service is not a prerequisite for developing a cloud computing infrastructure.

The idea behind cloud computing is that you put services and applications on networked devices. You could utilize a hosting service, which would shift maintenance and support to other entities. You could also create your own infrastructure for cloud computing.

In addition, there is nothing that says that cloud computing must be public. Yes, you can put your applications and services on the public Internet (with the appropriate security for your applications), but you can also create private clouds within your organization.

In the end, with cloud computing, you don't know where or what you are accessing.
You see a service or application without any knowledge of what is behind that service or application. The entire cloud is of no consequence to clients - you know that things that you can use exist, are accessible, and use them. They could be in a "server room", or you could be accessing a distributed grid of sensors and workstations. It really doesn't matter.

what is the difference between a private cloud and "the server room"? –  Bob Nov 10 '11 at 14:20

@Bob Typically, location, but that isn't a requirement for cloud computing. You might have several distributed server farms. Or you might have individual devices located around a building, city, country, globally, or in some cases extraterrestrially. However, you can still create a cloud platform with a single "server room" by producing systems (applications and services) that are consumed by distributed clients via some network connection. The driving factor is that services and applications (and associated data) are available for consumption/use over a network. –  Thomas Owens Nov 10 '11 at 14:44

You have just stated that they are the same (save location) and have not provided any differences. "Cloud computing" does not mean the same thing as "have access to a server". It's more about having access to a server which you have reduced responsibility in. –  Bob Nov 10 '11 at 14:53

@Bob Only the rented cloud computing services lead to reduced responsibility. I was actually part of a team that was working on developing and maintaining a private and secure cloud computing platform for the US Department of Defense and services/applications that run on this cloud. The goal was not to reduce responsibility, but to improve access to data, services, and applications. What was accomplished was breaking down information silos and producing a number of services and applications accessible to any clients with access to the cloud.
–  Thomas Owens Nov 10 '11 at 15:03

@Bob: A private cloud can span multiple server rooms, and can use all or only some of the machines in any given room. All the resources are aggregated and exposed as "services", so you don't know whether your app is running in your building or across campus or in another state. But you can't just go home and connect to it, you need a VPN or some other way to join the network the private cloud is on. –  TMN Nov 10 '11 at 15:41

No. Cloud computing is not merely a way to rent resources. Cloud is all about services that:

• are delivered over the network (possibly the Internet)
• are fully controlled by API
• are fully automatable and automated
• require no human interaction for control
• are delivered as a commodity
• are billed like a utility: for measured usage
• require no capital expenditure or up-front payment
• have seemingly infinite capacity
• permit at-will immediate allocation of arbitrarily many units of the service
• permit at-will immediate disposal of arbitrarily many units of the service

NIST has a full definition of what a cloud service is.

"Billed like a utility" and "require no capital expenditure or up-front payment" only apply to services that you are purchasing from a provider, not when you are establishing a private cloud or creating a self-managed cloud infrastructure. However, I generally agree with how NIST defines cloud computing. –  Thomas Owens Nov 10 '11 at 14:13

@ThomasOwens, even if the organization that owns the service(s) also maintains the hardware, there's usually some form of accounting for usage. Real money doesn't have to change hands, but you do have to keep track of who is using what resources so that you know when to buy more machines, what services are most popular, and so on.
–  Caleb Nov 10 '11 at 15:37

@Caleb That would account for billing like a utility, but not "no capital expenditure or up-front payment" since the company is incurring the cost, up-front, of establishing the infrastructure. –  Thomas Owens Nov 10 '11 at 16:12

@ThomasOwens, That's a fair point, but OTOH if you already have cloud infrastructure there's no additional expenditure to add a new service. If you work for Amazon, say, and deploy a new service you don't have to worry about procuring servers and all that. I'm sure we agree here -- I'm just pointing out that even when an org is its own cloud provider, service owners will tend to see the cloud as a utility, something that's always there. Building a cloud is a whole other thing. –  Caleb Nov 10 '11 at 16:27

When an org is its own provider, the provider wing of the org incurs capital expenditures to build the cloud service. However, usage of the cloud service requires no capital expenditure. Orgs often do internal billing, where if division A wants services from division B, division A pays division B for it internally. The phrase "billed like a utility" applies to these orgs. Orgs which provide free cloud services to user divisions do not bill, so that item applies but is overridden by "services which are free". –  yfeldblum Nov 10 '11 at 17:04

While it's hyped as something new, cloud computing is really a new marketing twist on the time-sharing distributed computing model that emerged in the mid-to-late 1960s. Of course, there are huge technical improvements but, when you look at it closely, it's not too much different from hooking up to a mainframe via an acoustic coupler and a teletype terminal to access applications and data. These systems were huge moneymakers back in their day but the Apple II and IBM PC put an end to it. Now, through cloud computing, this business model is seeing a renaissance.

Cloud computing begins with renting hard disks to servers.
However, it goes beyond that much more. This is not to say that there isn't any hype about it; I am trying to define the key difference between being in the cloud and not!

In my office we have a set of servers which I can access from anywhere. Does this qualify as a cloud? No! And the same is true for many data centers as they are.

The core element that forms cloud computing is, of course, the hardware infrastructure (servers and disk space) used exclusively through the public internet. However, what is important is how this is being managed. A critical infrastructure element (though I doubt people would disagree if you called it a must) is virtualization. In what I think of as a real cloud, all these servers are combined into a pool of resources tied together over a framework where virtual machines are created. One can create, archive and delete machines, and transfer hard disk space from one machine to another like how you mount them in real machines. These technologies allow one to shift the data and OS of these machines from one physical server to another seamlessly, and it comes with various redundancy options and management consoles for services.

Understand, in the good old days (as well as today), one used to get personal homepages and company websites on hosting space. This isn't quite a cloud. Though I agree that nowadays anyone who has a static IP thinks he has created a cloud - and indeed the word cloud has been misused to the extent that there is no real definition of it now!
First, some background: we are in the process of moving all of our project teams over to using git and are in the process of laying down the guidelines for how the repositories should be organized so that certain branches can also be monitored for continuous integration and automatic deployment to the testing servers. Currently there are two models that are developing:

1. Heavily influenced by the nvie.com article on successful branching, with the master branch representing the most stable code, a development branch for the bleeding-edge code, and an integration branch for code that is ready for QA testing.

2. An alternate model in which the master branch represents the bleeding-edge development code, an integration branch for code that is ready for QA testing, and a production branch for the stable code that is ready for deployment.

At this point, it is partly a matter of semantics in regards to what the master branch represents, but is doing active development on the master branch actually a good practice, or is it not really that relevant?

I like the workflow that Scott Chacon uses to develop GitHub: scottchacon.com/2011/08/31/github-flow.html –  user16764 Feb 3 '12 at 14:51

As described it seems to me to be more of a semantic problem than not - any organisation is going to evolve their own processes and in some respects the names need to reflect your workflow. Generically the key seems to be that somewhere you define something such that "the source code of HEAD always reflects a production-ready state".
What you choose to call that is less important but both git-flow and the GitHub workflow focus on that separation and on controlling when you push to the production-ready "thingy" –  Murph Feb 3 '12 at 18:16

@Murph - True, but since we are doing some of this from scratch I thought it would be best to more or less follow common conventions so that new developers that are hired don't have a steep learning curve due to unusual internal practices. –  rob Feb 3 '12 at 18:39

Then you've answered your own question (-: To be honest even by asking the question you're way ahead of the curve... –  Murph Feb 3 '12 at 22:56

3 Answers

(accepted) No, it's not advisable, even in the beginning before you've gone to QA. As a best practice, the pattern for development should be consistent from start to finish. Your master branch should start out empty, you should branch your development branch off and begin adding files, merge into your integration branch, then subsequently to your master. While no one may care during development that the master branch doesn't build, it lends itself to bad habits early on. The master should always build, and for major feature releases it also wouldn't be a bad idea to have archived branches of major builds so that stable release points can be returned to if necessary.

Isn't it better to do version tracking via tags? –  Adonis K. Oct 20 '13 at 21:56

@AdonisK.: I fail to see the relevance of your question. –  Joel Etherton Oct 20 '13 at 22:19

The only real defining feature of the master branch is that it's the default for some operations. Also, branch names only have meaning within a specific repository. My master might point to your development, for example. Also, a master branch is not even required, so if there's any confusion about which branch it should be, my advice is usually to leave it out altogether.
However, in my opinion, the best way to think of it is as the default for pushing to. Most any online tutorials your developers read are going to assume that. So, it makes a lot of sense to have master be whatever branch is most often pushed to.

Some people think of it as the pristine copy that is untouchable to developers except after the strictest of scrutiny, but using it that way removes a lot of the helpful defaults git provides. If you want that kind of pristine branch, I would put it in a completely separate repository that only some people can write to.

+1. And because the "production ready" code is the important code, it should also live in a branch with a name highlighting this importance. "master" as the default branch name surely doesn't fulfill that request, as it is also used in every other repository for whatever intentions. –  Bananeweizen Jun 18 '13 at 9:29

I haven't been using Git for very long, but everything I've read so far says "no, not really". This article talks about a recommendation for setting up Git branching; in his model, the master branch is only used for release versions, while a separate development branch is used for everyday commits.

That's actually the same article I linked to in the question. :) So far everyone I've talked to has said the same thing with the only exception being when you are doing early development and haven't started QA testing yet. –  rob Feb 3 '12 at 14:27

Ah poo. That's what I get for not reading carefully :( –  Andrew Arnold Feb 3 '12 at 14:31

That's the branching model that we have adopted in the currently small, but expanding, development team here and it's working very well for us. –  Julian Feb 3 '12 at 15:37
Today I was asked this question during an interview: What's going to happen if we do not override the hashcode method for our class, then add it to a HashTable and then try to get objects? Could you help me answer that?

Read this book if you want to get a job programming java. –  Martin Schröder Apr 17 '12 at 6:48

@MartinSchröder I have a job already) –  VextoR Apr 25 '12 at 7:22

8 Answers

(accepted) The idea with a HashTable when you try retrieving an object is that the data structure computes the hash code of the object using the GetHashCode() method and then goes through a list using the Equals() method. With the default GetHashCode() implementation, two perfectly similar objects might end up yielding different hash codes, which means that if you do not use the exact same instance, you will never find your object in the HashTable.

In general, you want to make sure of two things when implementing hash codes:

• If A.Equals(B) then A.GetHashCode()==B.GetHashCode()
• Try to get a distribution of hash codes properly spread to get the maximum efficiency from the hash table (if too few hash codes are possible, you'll end up searching a list).

The method is hashCode(), not GetHashCode(). –  Spencer Kormos Apr 16 '12 at 15:59

I believe @SRKX was using C# in his example, where hashCode is the Java equivalent. The answer is valid in both languages. –  DevSolo Apr 16 '12 at 16:02

@SpencerK: yeah as DevSolo said, I was trying to explain the concept not to get into language-specific details which is, I hope, what the interviewer is looking for. –  SRKX Apr 16 '12 at 17:06

I would only add that all of these concepts have to do with identity and comparison. There is a contract with hash code: objects that are equal according to equals() must have equal hash codes. The upshot is that if the hash codes are the same and the keys compare equal, the entries in the table overwrite each other, and that could be surprising to some...

Hope this helps.
It depends on what "adding to HashTable" means. Java's Hashtable doesn't have any add method. The interviewer probably meant the put method, which takes in a key and a value. The value can be anything (it could even be null in HashMap, which is the present-day version of Hashtable). Nothing special happens regardless of whether or not you override the value object's hashCode, or any other method.

The interviewer probably meant that the key object's hashCode wouldn't be overridden. Only then do the object identity issues, as pointed out in other answers, come into play. Even then, you don't necessarily have to override the key's hashCode. For example, if you use Strings as keys, they already have an appropriate hashCode implementation in them. Besides, they can't be subclassed. Furthermore, if you override hashCode but don't override equals, you may get some amazing behavior...

If the question really was exactly what you wrote, I would have teased the interviewer with these questions. A good programmer doesn't assume that the interviewer probably meant this or that. He asks instead.

Hashtable does not support null. – Spencer Kormos Apr 16 '12 at 15:57
@Spencer K: Thanks, fixed. – Joonas Pulakka Apr 16 '12 at 18:13

If the hashCode method is not overridden, the answer to this question really depends on whether the same key object that was used to put is going to be used to get as well:

a) If the same key object is used, get will find the value, because it will locate the bucket using the same key and hence will find the value object.

b) If another "equivalent" key object is used, the hash code is probably going to be different, due to the default implementation of the hashCode method in Object, so the lookup might land in a different bucket and fail to find the value object.

The answer is "nothing bad unless you have overridden equals()".
The general point is that if two objects compare equal, i.e. if a.equals(b), then they must have the same hash code, i.e. a.hashCode() == b.hashCode(). Also, if two objects have different hash codes, they must not compare equal.

This is especially relevant when you put objects in a hash table. A hash table is an array of lists (usually called buckets). The hash bucket is indexed using the hash code; typically you use hashCode % arraySize. So when you put an object in the hash table, you take the hash code of the key and use it to determine the bucket, and then you put a key-value pair in the bucket. When you want to retrieve an object from the hash table, you take the hash code of the key to find the bucket and test the keys of all the key-value pairs in the bucket with .equals() to determine which object is the one you want.

So if you have two key objects which compare equal but have different hash codes and you use one as a key in a hash table, you won't be able to search for it using the other key object, because you'll be looking in the wrong bucket.

The implementation of equals() in Object only returns true if the two objects are actually the same object, and hashCode() returns a value derived from the object's identity. However, if you override equals() (e.g. String does, so that different strings containing the same character sequence compare equal), then you must override hashCode().

Good answer, but I think the first sentence is misleading. If you are using a HashMap or Hashtable, you should override hashCode() and equals(). The first sentence is only true if you really intend to use object identity as equality, e.g. you really want to compare objects with == instead of equals(). – scarfridge Apr 16 '12 at 19:14
@scarfridge: I kind of agree that just using object identity to identify the keys is pretty much useless. I was thinking "nothing bad" in terms of (for example) having the value of hashCode() change while the object is being used as a key in a map.
– JeremyP Apr 17 '12 at 8:59

You won't find your object if you get with a different object that is equal to the object you put. To give an example:

    MyClass obj1 = new MyClass(1);
    MyClass obj2 = new MyClass(1);
    assert obj1.equals(obj2);
    assert obj1.hashCode() != obj2.hashCode(); // this is what happens if you don't include hashCode
    table.get(obj2); // will likely return null, but that is a gamble
    table.get(obj1); // but this will return the object passed in

The reason for this is that HashTable (and HashMap) will use the hash code to limit the space it has to search through to find the object, and that relies on the assumption that if obj1.equals(obj2), then obj1.hashCode() == obj2.hashCode().

Assuming your class extends only Object, the hashCode() implementation of your class will depend on object identity; that is, the hash codes of two different instances will (almost certainly) be different, even if they hold the exact same value. This means that you will most likely not find the object again in the Map (you might find it by chance, however).

The answer is: similar objects (all fields having equal values) would not create the same hash code, so you would need exactly the same (identical) object that was used for put to retrieve it from the hashtable, which is not feasible in most cases.
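The failure mode these answers describe is easy to reproduce. The thread discusses Java, but Python's dict has the same equals/hashCode contract, so here is a hedged sketch (class names are invented for the example; note that Python normally disables hashing when __eq__ is overridden, so the broken identity hash is restored deliberately to mimic Java's default behaviour):

```python
class BrokenKey:
    """Value-based equality but an inherited identity hash,
    mimicking a Java class that overrides equals() but not hashCode()."""
    def __init__(self, value):
        self.value = value

    def __eq__(self, other):
        return isinstance(other, BrokenKey) and self.value == other.value

    # Python would normally set __hash__ to None here; restoring the
    # identity hash reproduces Java's default behaviour.
    __hash__ = object.__hash__


class FixedKey(BrokenKey):
    """Honours the contract: equal objects hash equally."""
    def __hash__(self):
        return hash(self.value)


table = {BrokenKey(1): "found"}
lookup = table.get(BrokenKey(1))   # an equal key, but a different hash
print(lookup)                      # -> None: the entry is unreachable

table = {FixedKey(1): "found"}
print(table.get(FixedKey(1)))      # -> found
```

The first lookup fails exactly as the answers predict: the two keys compare equal, but their identity hashes differ, so the dict probes the wrong slot and never calls __eq__.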
The "Verbally Readable !== Quicker Comprehension" argument on http://ryanflorence.com/2011/case-against-coffeescript/ is really potent and interesting. I, and I'm sure others, would be very interested in evidence arguing against this.

There's clear evidence for this and I believe it. People naturally think in images, not words, so we should be designing languages that aren't similar to human languages like English or French. Being "readable" means quicker comprehension. Most articles on Wikipedia are not readable, as they are long, boring, dry, sluggish and very wordy. Because Wikipedia documents a ton of info, it is not especially helpful when compared to sites with more practical, useful and relevant info.

Languages like Python and CoffeeScript are "verbally readable" in that they are closer to English syntax. Having programmed firstly and mainly in Python, I'm not so sure this is really a good thing. The second interesting argument is that CoffeeScript is an intermediator, a step between two ends, which may increase the chance of bugs.

While CoffeeScript has other practical benefits, this question specifically requests evidence showing support for the counter-case of language "readability".

As a Python developer I feel insulted by you using CoffeeScript and Python in the same sentence :p – ThiefMaster Oct 6 '12 at 9:57
Dropbox's response to that article (TL;DR: they looked at it, thought about it and went with CoffeeScript anyway): tech.dropbox.com/?p=361 – user16764 Oct 6 '12 at 16:32
On the subject of comprehension: what exactly is your question? – Peter Taylor Oct 6 '12 at 19:53
As a JavaScript developer I'm just offended by the existence of CoffeeScript. Also, I don't see how Python more closely resembles English than any other language, aside from the logical operators, which seem kind of sensible to me. I thought the like-English thing was supposed to be VB's schtick.
– Erik Reppen Oct 12 '12 at 1:10

5 Answers

I think the article has a point as far as JavaScript vs. CoffeeScript is concerned. I personally find JavaScript quite readable, and I just do not see the point of sticking another layer of syntax on top.

I have had similar experiences with Java/Groovy. Groovy is just great: highly expressive, it cuts out a lot of useless, tedious typing compared with Java, and the "extras" like native SQL support are really worth having. BUT the last time I used it, debugging was painful: you end up stepping through endless obscure internal Groovy classes before you get back to your own code.

Python, on the other hand, is a complete self-supporting environment. It is its own language and is not tacked on top of another language (although Python itself is written in C and has excellent integration with anything written in C or C++). It has its own debugger, so for the most part you are debugging the Python code you wrote. The designers of Python obsess over the expressiveness of the language and the consistency of its syntax. Once you get the hang of it, it is very readable. You genuinely write much less code in Python compared with using other languages to solve the same problem, and well-written Python code is clear and unambiguous.

The only downsides are that, in common with most dynamic languages, it does not play well with IDEs, and all that lovely high-level expressiveness is not interpreted into a lean, mean execution.

"Python itself is written in C" — technically CPython is only one implementation, and there are more (like IronPython for .NET). There's also PyPy. – Kos Oct 6 '12 at 9:40
The point of CoffeeScript is not primarily to make JavaScript more readable, but to remove the boilerplate (which has increased readability as a side effect). – tdammers Oct 6 '12 at 10:07
"Once you get the hang of it it is very readable." — But only once you get the hang of it.
The same applies to Perl, but getting the hang of it is a steep initial investment. No wonder, therefore, that newbies gravitate toward something like PHP, which seems on the surface easier to "get the hang of". – Timwi Oct 7 '12 at 6:29
If you know JavaScript, JavaScript can be used to eliminate boilerplate in JavaScript. – Erik Reppen Oct 12 '12 at 1:18
"Debugging Groovy is painful; you end up stepping through endless obscure internal Groovy classes before you get back to your own code." That's the breaks when using dynamically-typed languages on the JVM. – Vorg van Geir Apr 3 '13 at 14:43

I find the argument about symbols and structure communicating more (and more quickly) than natural words quite compelling, but then my all-time favourite language is Scheme, so I would.

I would answer "somewhat". To be fair, I think we first have to factor out our personal bias (i.e. familiarity with keywords, syntax, and constructs) and only judge languages that we have never seen before. (To judge the more popular languages, we can only accept answers from new programmers as they are introduced to a language for the first time.)

Then, given an identical implementation in several languages, the question posed should be "which of these is easier to understand?" I would guess that the bigger the syntax hurdle, the less likely a language is to score well in this scenario, and that favors languages with a cleaner and more obvious syntax. Much more likely to communicate the programmer's intent are the choice of variable names and of course the comments that were included, and that transcends nearly all languages.

"People naturally think in images"

The author of that article shouldn't speak for everyone. Some people naturally think in images; others are more verbal/symbolic. Besides, his examples are of verbal versus symbolic.
Neither && nor and is a picture; both are symbolic rather than pictorial, and both are processed by the same side of the brain. Symbolic reasoning is much closer to verbal reasoning than it is to spatial reasoning.

I, for one, am not a big fan of pictorial programming. It doesn't communicate to me. I never did like flow charts, and I absolutely loathe UML diagrams, as they inevitably bring out the worst of both worlds. Pictures make nice cartoons, great for presenting ideas to management. They're not so good for doing meaningful work in programming, mathematics, or physics (all of which are largely symbolic). Sometimes pictures just don't cut it. Symbols are more powerful than pictures. Try drawing a picture of an infinite-dimensional Hilbert space.

In my work on a .NET obfuscator, I found it quite useful to visualise IL code as a flowchart. There is something to be said for seeing loops and conditional branches in a spatial way. I don't know what it would be like to actually code in this, though, since I've never really come across such a graphical programming environment. – Timwi Oct 7 '12 at 6:34
(Wait, that's not completely true :) I have written code in an arguably flow-chart-like language :) ) – Timwi Oct 7 '12 at 6:40
I think APL established that there are at least limits to the degree that symbol usage is helpful. – Erik Reppen Oct 12 '12 at 1:20

I don't read code; I see patterns. Eventually this applies to all the languages I learn. The window where "readable" matters is very short: a few days at most, even for idiomatic languages.

Exception: Perl still looks like gibberish even after several years of using it. ;) – Steven A. Lowe Oct 7 '12 at 1:32
What is the point of using DTOs, and is it an outdated concept? I use POJOs in the view layer to transfer and persist data. Can these POJOs be considered an alternative to DTOs?

But a POJO can be a DTO, and a DTO can be implemented with a POJO. You are comparing apples and oranges. – Euphoric Oct 26 '12 at 5:48
Why should good ideas become outdated? Look at Lisp. Jokes apart, I agree with Euphoric: I normally implement DTOs using POJOs. I still find DTOs a very simple (KISS) and useful concept. – Giorgio Oct 26 '12 at 5:59

3 Answers

DTO is a pattern, and it is implementation (POJO/POCO) independent. DTO says: since each call to any remote interface is expensive, the response to each call should bring as much data as possible. So if multiple requests would be required to bring the data for a particular task, the data to be brought can be combined in a DTO so that a single request can carry all the required data. The Catalog of Patterns of Enterprise Application Architecture has more details.

Are Data Transfer Objects an outdated concept? No, since DTO is a fundamental concept.

You may find them under different names, though, since everyone seems to be reinventing the wheel these days. – linkerro Oct 26 '12 at 10:25
Like "Value Object". – Shuvo Oct 26 '12 at 10:53
@linkerro: True. I think lots of people should spend more time reading about stuff that has already been invented instead of re-inventing it themselves. Re-invented stuff will always be less mature. – Giorgio Nov 1 '12 at 10:36
@Giorgio There are a lot of devs out there still running with ideas that should never have made it off the ground. I wish more devs questioned every idea they read about. – Erik Reppen Jun 4 '13 at 22:41

DTOs as a concept (objects whose purpose is to collect data to be returned to the client by the server) are certainly not outdated.
What is somewhat outdated is the notion of having DTOs that contain no logic at all, are used only for transmitting data, and are "mapped" from domain objects before transmission to the client, then mapped to view models before being passed to the display layer. In simple applications, the domain objects can often be directly reused as DTOs and passed straight through to the display layer, so that there is only one unified data model. For more complex applications you don't want to expose the entire domain model to the client, so a mapping from domain models to DTOs is necessary. Having a separate view model that duplicates the data from the DTOs almost never makes sense.

However, the reason why this notion is outdated rather than just plain wrong is that some (mainly older) frameworks/technologies require it, as their domain and view models are not POJOs and are instead tied directly to the framework. Most notably, Entity Beans in J2EE prior to the EJB 3 standard were not POJOs; they were proxy objects constructed by the app server. It was simply not possible to send them to the client, so you had no choice about having a separate DTO layer; it was mandatory.

As a UI dev forced into a more generalist role, I've definitely found the Mapper.Map phenomenon in our codebase stupefying. Why can't the DTO just map itself? – Erik Reppen Jun 4 '13 at 22:39

Absolutely not! Just recently I learned a lesson about it being better to use DTOs rather than your business objects (which may be bound to your ORM mapper). However, use them only where they are appropriate, not just for the sake of using them because they're mentioned in some good pattern book.

A typical example that comes to mind is when you expose some kind of interface to third parties. In such a scenario you'd like to keep the exchanged objects quite stable, which you can usually achieve nicely with DTOs.
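To make the pattern concrete, here is a minimal sketch of a DTO assembling one coarse-grained response in place of several fine-grained remote calls, as the first answer describes. All names and the in-memory "database" are invented for illustration, not taken from the answers:

```python
from dataclasses import dataclass

# Invented stand-in for server-side storage.
_DB = {42: {"name": "Ada", "open_orders": 3, "loyalty_points": 1200}}

@dataclass(frozen=True)
class CustomerSummaryDTO:
    # A plain carrier of state across the wire: no business logic.
    name: str
    open_orders: int
    loyalty_points: int

def load_customer_summary(customer_id: int) -> CustomerSummaryDTO:
    # The server bundles everything the client screen needs, so one
    # remote round trip replaces three separate fine-grained calls.
    row = _DB[customer_id]
    return CustomerSummaryDTO(row["name"], row["open_orders"], row["loyalty_points"])

dto = load_customer_summary(42)
print(dto.name, dto.open_orders, dto.loyalty_points)  # -> Ada 3 1200
```

Whether such a carrier is a "DTO" or just a POJO/plain object is, as the comments note, largely a naming question; the pattern is the bundling, not the class shape.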
I am designing a file format and I want to do it right. Since it is a binary format, the very first byte (or bytes) of the file should not form valid textual characters (just like in the PNG file header¹). This allows tools that do not recognize the format to still see that it is not a text file by looking at the first few bytes.

Any code point above 0x7F is invalid US-ASCII, so that's easy. But for Unicode it's a whole different story. Apart from valid Unicode characters there are private-use characters, noncharacters and sentinels, as I found in the Unicode Private-Use Characters, Noncharacters & Sentinels FAQ.

What would be a sentinel sequence of bytes that I can use at the start of the file that would be invalid US-ASCII, UTF-8, UTF-16LE and UTF-16BE?

• Obviously the first byte cannot have a value below 0x80, as that would be a valid US-ASCII (control) character, so 0x00 cannot be used.
• Also, since private-use characters are valid Unicode characters, I can't use those code points either.
• Since it must work with both little-endian and big-endian UTF-16, a noncharacter such as 0xFFFE is also not possible, as its byte-swapped form, 0xFEFF, is a valid Unicode character.
• The above-mentioned FAQ suggests not using any of the noncharacters, as that would still result in a valid Unicode sequence, so something like 0xFFFF is also out of the picture.

What would be the future-proof sentinel values that are left for me to use?

¹ The PNG format has as its very first byte the non-ASCII value 0x89, followed by the string PNG. A tool that reads the first few bytes of a PNG may determine that it is a binary file, since it cannot interpret 0x89. A GIF file, on the other hand, starts directly with the valid and readable ASCII string GIF, followed by three more valid ASCII characters. For GIF, a tool might determine it is a readable text file.
This is wrong, and the idea of starting the file with a non-textual byte sequence came from Designing File Formats by Andy McFadden.

"Since it is a binary format, the first bytes of the file should not form valid textual characters" — you should look at the magic file (/usr/share/magic, or /etc/magic on many Unix systems), which shows how applications identify file types. A PNG file starts out with \x89PNG\x0d\x0a\x1a\x0a — note the "PNG" in there; that's a raw string. The sequences like \x89 are non-printable bytes. – MichaelT Mar 13 '13 at 15:29
@MichaelT Yes, since PNG is a binary format, the first byte does not form a valid textual character. That's what I meant. I fail to see your point? – Virtlink Mar 13 '13 at 15:36
That was an example. A .gif starts out with GIF8. An SGI movie file starts out with MOVI. One style of zip archive file starts out with ZZ; the more popular pkzip format starts out with PK. The constraint that the first byte be an invalid text character does not seem to match what is found in the wild. I am curious why this is a requirement. – MichaelT Mar 13 '13 at 15:51
Do you really care how other programs behave when they see an unknown file? To me, a signature sequence (like PNG files use) is much more useful than a sentinel sequence: when the content is sent through a simple stream protocol, the receiver can immediately decide how to handle the following bytes. An omni-sentinel sequence is next to no sequence once everyone starts using it to identify their own format. – Codism Mar 13 '13 at 16:51
@Virtlink, I don't particularly care what bytes you use in your file format. But you made an assertion that it's "wrong" to use ASCII characters...
yet I've not seen anything here that supports that claim, and there's plenty of empirical experience that shows it really doesn't matter (i.e., the countless file formats that have been using ASCII characters without a problem for decades). – GrandmasterB Mar 13 '13 at 19:49

3 Answers

0xDC 0xDC

• Obviously invalid UTF-8 and ASCII
• An unpaired trail surrogate in lead position regardless of endianness in UTF-16. It doesn't get more invalid than that as UTF-16.

But perfectly reasonable ISO-8859-1, and probably reasonable in any other character set that uses an 8-bit encoding. – parsifal Mar 13 '13 at 23:06
+1 The OP didn't ask for ISO 8859-1, just US-ASCII and UTF-*. – Ross Patterson Mar 14 '13 at 0:00
@RossPatterson: true, but I suspect that's mostly because the OP hasn't really thought the problem through. Without any statistics to back me up, I'm willing to bet that a random "is this text" algorithm is more likely to give preference to ISO-8859-1 than UTF-16, simply because there's an enormous amount of 8-bit text in the world. – parsifal Mar 14 '13 at 12:57
@parsifal Any binary is valid ISO-8859-1, so it doesn't need to be considered, simply because it's impossible to make invalid ISO-8859-1. – Esailija Mar 14 '13 at 13:05
@Esailija: valid, yes, but "text" files don't usually contain control characters (outside of the limited set of whitespace characters). – parsifal Mar 14 '13 at 14:40

• In UTF-8, the bytes C0, C1, and F5–FF are illegal. The first byte must be either ASCII or a byte in the range C2–F4; any other starting byte is not valid UTF-8.
• In UTF-16, the file normally starts with the Byte Order Mark (U+FEFF); otherwise applications have to guess at the byte order. Code units in the range D800–DBFF are the lead halves of a surrogate pair, and DC00–DFFF are the trailing halves of a surrogate pair.

Thus, I'd use the byte combo F5 DC.
These two values are:

• Not ASCII
• Not valid UTF-8
• Either interpreted as a UTF-16 trailing byte in a surrogate pair (not legal), or as the code point U+F5DC, which is a private-use character, but only by applications that stubbornly try to interpret this as UTF-16 even without a BOM.

If you need more options, F5DD through F5DF all have the same three properties, as do F6DC–F6DF, F7DC–F7DF and F8DC–F8DF, for a total of 16 different byte combos to pick from.

So, by Esailija's suggestion to use U+DCDC, 0xDC would be valid UTF-8? – Virtlink Mar 13 '13 at 16:38
@Virtlink 0xDC is a UTF-8 lead byte for a 2-byte sequence. It must be followed by a 10xxxxxx continuation byte for it to be valid. 0xDC is not a valid continuation byte, so 0xDC 0xDC is not valid UTF-8. – Esailija Mar 13 '13 at 16:40
@Virtlink: No, because the second byte is not valid; it would have to be in the range 80–BF. – Martijn Pieters Mar 13 '13 at 16:45

If you're trying to use a non-printable character to indicate "not text," then you'll find it hard to beat 0x89:

• It's outside the US-ASCII range.
• In ISO-8859-1 it's a non-printable character ("CHARACTER TABULATION WITH JUSTIFICATION"). Likewise with Shift-JIS, which I believe is still in common use. Other 8-bit encodings may, however, treat this as a valid character.
• In UTF-8 it's an invalid first byte for a multi-byte sequence (the top bits are 10, which are reserved for bytes 2..N of a multi-byte sequence).

Generally, when you form magic numbers, "non-text" is a minor point. I'll have to look up the reference, but one of the standard graphics formats (TIFF, I think) gets something like six different pieces of useful information from its magic number.
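The claims in these answers are mechanical enough to check. Here is a small verification sketch (mine, not part of the answers); it relies only on Python's strict built-in decoders:

```python
def rejected_by(data: bytes, encoding: str) -> bool:
    """True if a strict decoder refuses the byte sequence."""
    try:
        data.decode(encoding)
        return False
    except UnicodeDecodeError:
        return True

ENCODINGS = ["ascii", "utf-8", "utf-16-le", "utf-16-be"]

# 0xDC 0xDC reads as a lone trailing surrogate in either UTF-16 byte
# order, and 0xDC is not a valid continuation byte in UTF-8, so every
# decoder refuses it.
assert all(rejected_by(b"\xDC\xDC", enc) for enc in ENCODINGS)

# 0xF5 0xDC is rejected everywhere except big-endian UTF-16, where it
# decodes to U+F5DC: a private-use character, matching the caveat above.
accepted = [enc for enc in ENCODINGS if not rejected_by(b"\xF5\xDC", enc)]
print(accepted)  # -> ['utf-16-be']
```

This matches the two answers: 0xDC 0xDC is invalid under all four encodings, while 0xF5 0xDC survives only as a private-use character in BOM-less big-endian UTF-16.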
Is PHP-GTK widely used for desktop applications? Is it recommended to use? Is there a big company that uses PHP-GTK?

The language is plain old PHP. It just has an extra library (and is presumably not hooked up to a web server). – Donal Fellows Jun 23 '11 at 8:22
Define "widely". – S.Lott Jun 23 '11 at 9:54
@S.Lott: doesn't matter, it's not widely used with any definition of "widely". – vartec Jun 23 '11 at 10:28
@vartec: While that may be true, it doesn't improve the question, does it? – S.Lott Jun 23 '11 at 11:10

closed as not constructive by Yannis Rizos Apr 20 '12 at 14:59

2 Answers

No, it is not. Quite the opposite: it's practically a dead project. There has been almost no activity for 3½ years now. It never really got any traction, as PHP is very much web-oriented, and there was not much interest in a GUI library for it.

Thanks man! Now it is clear to me that PHP-GTK is no longer a recommended choice. – adietan63 Jun 23 '11 at 8:28

I've only seen it used in academic settings as an exercise in "what not to do". With regard to scripting-language GUIs, the industry appears to be heading pretty heavily towards Python and its libraries like PyQt and PyGTK. They're much easier to use, much better supported and much more widespread than PHP.

Plus they are probably much more readable. PHP-GTK made my eyes bleed. – TheLQ Jun 23 '11 at 14:47
441 reputation
Location: Philadelphia, PA
Age: 39
Member for 3 years, 5 months; last seen Dec 21 '13

I'm a software engineer that's been in the field for a little over 10 years now. I started out as a C programmer on "soft" embedded systems (cable set-top boxes, FWIW). Since then I have moved to the other side of the client-server system and have been specializing in distributed systems development in C++. Over the past few years, I've moved from writing code every day into doing more systems design and architecture work.

This user has not participated in any bounties.
Asked by nitin bahl, 13 Jul '07 02:10 am

Answers (3)

There is great competition between India and China in all fields. The sleeping giant has woken up, and it races ahead of even developed nations. But it will find it hard to beat India in IT technology, as we are very strong in Maths and English. But we must remain very alert; no complacency should creep in. Our manufactured goods are more reliable than those manufactured in China.
Answered by radhakrishnan, 13 Jul '07 02:16 am

Yes, it is true that in manufacturing China is far ahead of India. In many hypermarkets abroad we get even international brands assembled in China only. In IT also they are catching up with India. We should take it as an opportunity to improve ourselves.
Answered by Ramachandran Nair, 13 Jul '07 02:28 am

65% of foreign programmers in the USA are Indians. 60% of medical personnel in the USA are Indians. 12% of scientists in the USA are Indians. 36% of NASA scientists are Indians. 34% of Microsoft employees are Indians. 28% of IBM employees are Indians. 17% of INTEL scientists are Indians. 13% of XEROX employees are Indians.

Notable Indians abroad: Indians living abroad have excelled spectacularly in their chosen professions and fields by dint of their single-minded dedication and hard work. They have excelled in fields like IT, medicine, venture capital, engineering and construction, to name a few.

Who is the co-founder of Sun Microsystems? Vinod Khosla
Who is the creator of the Pentium CPU? Vinod Dahm
Who is the founder and creator of Hotmail?
Sabeer Bhatia
Who is the president of AT&T Bell Labs? (AT&T Bell Labs is the creator of the programming languages C and C++, and the OS Unix.) Arun Netravalli
Who is the GM of Hewlett-Packard? Rajiv Gupta
Who is the President of the Bell Labs? Dr. Arun Ne ...more
Answered by vsiva prasad, 13 Jul '07 08:53 am
In this frigid fragile capsule That allows you to fly south before the winter winds trap you from Cannibal Ox – Pigeon Lyrics on Rap Genius Vast is talking about the body (the “capsule” for the soul), building on a common theme from the album. Pigeons have bodies that let them fly to warm weather, but only for so long: like us, they’re “fragile,” mortal, destined to be trapped by the “winter winds” and become empty, “frigid” corpses.
Lil B – My Day Off Lyrics

Yo check this out, bitch ya hear me?
I just copped that new ho ya bitch
Ya know I just got that new Bentley, ya heard me nigga
We do everything in the Gold House bitch
Tiny shirts, pink shirts
Tiny pants everything
I don't wear nothing baggy no more
Bitch I do everything now I ice my wrist
This bitch ain't nothing, on this bitch
Smoking a 150 inch TV
My remote look like a DVD player
In the library, I was reading books when you was fucking hoes in they ass
I'm about that cash nigga green on greeter
Fuck that oil, that ain't nothing
Can't nobody fuck the BasedGod's bitch
I... Bitch Mob, you feel me?
I rep Bitch Mob cause that ho ain't shit
And I ain't gone die for it cause that doe ain't shit
Nigga play your cards right cause the doe ain't shit
Nigga riding 4 door had to hold they clip
Nigga open 2 door and that, ain't flip
Hide back in drive through that bitch
Got the mask up, I'm with them
No help, I'm a one man army
But I ride on 'em man I'm a veteran
Put a 9 on your back like a letterman
Now-a-days, I don't know the fake or real
The fake ones, be the geeked up squad, ya feel me?
You wanna speak you can speak a deal
Yea, but what's the fucking issue
Need some tissue, nigga save yourself
Don't play yourself, Gold House mixtape
Lil B back Bitch Mob
So you know what's happening
Did it for the world, mother fuck the rapping
Now I know that you like me bitch
It's Friday, my day off
In the hood with a beautiful bitch
Im that pretty bitch, that gangster shit
The Simpsons: Hit & Run review (GameCube)
CNET Editors' Rating: 4.0 stars (Excellent)

The Simpsons: Hit & Run borrows heavily from the Grand Theft Auto series and, in so doing, it brings the world of the Simpsons to life with proper justice.

It's been over 14 years now since the weekly television debut of The Simpsons, and over that period of time, more than a dozen video games have carried the license of the seemingly inviolable Simpsons franchise. However, with perhaps the sole exception of Konami's 1991 arcade action game, The Simpsons, none of these games have ever managed to really capture the sharp humor and unique personalities of the Springfield universe, and none have really even proved to be much in the way of fun either. Thankfully, this trend has finally come to a close. The latest title to use the Simpsons name is The Simpsons: Hit & Run, an action adventure game that borrows heavily from the gameplay style and design of the recent entries in the Grand Theft Auto series and, in so doing, finally manages to bring the world of the Simpsons to life with proper justice.

[Screenshot: The Simpsons: Hit & Run lets you play as Homer, Bart, Lisa, Marge, and even Apu.]

The story of The Simpsons: Hit & Run is perhaps a bit convoluted in its design, but seemingly in an intentional way. At the beginning of the game, Springfield is being overrun by mechanized bees, mysterious black vans and cars, and an insidious cola that is controlling the minds of the city's residents. In the game, you'll take control of the four core Simpsons family members (Homer, Bart, Lisa, and Marge) as well as resident Kwik-E-Mart proprietor Apu to investigate the origins of and motives for all these strange happenings. As the game follows the same structure as the GTA entries, much of Hit & Run involves a linear series of missions, with a number of exploration elements to boot. The game uses a basic level structure, with seven total levels, each with seven primary missions.
The game uses a basic level structure, with seven total levels, each with seven primary missions. Each level in the game is assigned to one specific character. There's one for Lisa, Marge, and Apu, while Bart and Homer get two apiece. Missions are assigned by interactions with the city's various characters. They generally involve collecting and delivering items to other people or locations, racing other characters, and even getting into full-on car combat situations. In actuality, practically every mission in the game is a direct clone of one of the GTA driving missions. However, the lack of originality in the game's mission structuring is more than made up for by the decisively original style of Simpsons humor. The end result is actually very fun. Each level also has one bonus mission and three non-story-related race missions. Completing both the bonus missions and race missions help you to unlock new and unique cars in the game. This is key, as vehicles are the most important aspect of Hit & Run's gameplay. Hit & Run has a bevy of different cars to choose from, ranging from the more generic and standard looking cars, to a host of different, episode-specific cars that fans of the show are bound to recognize. Each character in the game starts out with his or her own car. Marge has her road-rage-inspiring Canyonero, Homer drives the family sedan, and Bart drives Martin Prince's entry into the Springfield soap box derby, the Honor Roller. These are not the only cars that each character can drive, however, as certain missions will require them to purchase new cars from vehicle merchants located around Springfield. For example, one early mission requires Homer to destroy Mr. Smithers' car before he can get to the power plant to conduct Homer's employee review. So, to do this, Homer purchases the massive Plow King from his good friend Barney. In a pinch, characters can also borrow cars from passing motorists, though these cars are all pretty generic. 
However, if you can find one of any of the numerous phone booths strewn about the town, you can access any of the different cars in the game. There is a seemingly endless number of different cars to unlock and purchase in the game, from Homer's self-designed car (aptly titled "The Homer") to Comic Book Guy's beaten-down jalopy to Professor Frink's futuristic hovercar. All in all, the selection of cars should keep you plenty entertained as you navigate the streets of Springfield. The Simpsons: Hit & Runscreenshot The game's storyline has you uncovering a plot that involves mysterious vans, mind-controlling cola, and robotic bees with cameras in their heads. The game's mapping system is denoted by a GTA-esque circular street map that appears in the corner of the screen. It shows where you are in relation to where you need to be. Additionally, there's also a system that works by having arrows appear on the street, pointing out the path you need to follow. In many ways, it's actually a better arrow system than the one used in the GTA games, and it keeps you from ever really getting confused while driving. Driving isn't the only way to get around Springfield, as you can opt to just run and jump around town, sans vehicle, any time you please. Each character has basic jump, attack, and jump-attack functions. Each can be used to destroy enemies and objects or to explore hidden areas. You can even kick a passerby. Every character can also double-jump, and each also has a unique jumping slam attack. Homer, for instance, can land a pretty hefty butt-stomp when the occasion calls for it. The game also employs an action button, which can be used to activate different objects, like moving platforms and the like. Outside of the game's storyline and missions, there's plenty of stuff to discover and explore in Hit & Run. Items, money, and hidden gags can be found throughout the city of Springfield. 
Money comes in the form of coins, which can be obtained by destroying various pieces of the scenery, like soda machines, street signs, and lampposts, as well as the evil mechanical bees that pop up all over the place. Wanton destruction, however, is simply not permitted. Not for long. Cause too much chaos, and eventually you'll end up with the cops on your tail. Your level of rampage is monitored by a meter that appears in the lower corner of the screen. Once it fills up, the police will chase you down. If you're caught, it's a 50-coin fine, which isn't too bad. But if you're careful enough, you should be able to collect coins easily enough without having to constantly deal with the police. The Simpsons: Hit & Runscreenshot You can drive a bevy of different cars in the game, ranging from the basic Simpsons family sedan to Professor Frink's futuristic hovercar. Once you've got the cash, you can obviously buy cars from different characters; but that's not all. Each character also has a different set of outfits that can be purchased at different locations around town. Some of these outfits come directly into play with the game's storyline and are necessary to complete missions. Others are simply there for the fun of it. All the different outfits are throwbacks to episodes of the show, including Homer's muumuu from his brief flirtation with morbid obesity, Marge's cop uniform from her days as a member of Springfield's finest, and Apu's "American" outfit, complete with baseball jersey and oversized cowboy hat. Member Comments Add Your Comment Conversation powered by Livefyre Quick Specifications • Release date09/16/03 • ESRB Teen • Developer Radical Entertainment • Genre Driving • Elements Mission-based Driving • Number of players 1-4 Players
On Byways and Backlanes: The Philosophy of Free Culture "In this short paper I attempt to follow Heidegger (2000) in suggesting that the work of a philosophy of free culture is to awaken us and undo what we take to be the ordinary; looking beyond what I shall call the ontic to uncover the ontological (Heidegger 2000c: 28-35). In this respect we should look to free culture to allow us to think and act in an untimely manner, that is, to suggest alternative political imaginaries and ideas. For this then, I outline what I think are the ontological possibilities of free culture and defend them against being subsumed under more explicitly ontic struggles, such as copyright reform. That is not to say that the ontic can have no value whatsoever, indeed through its position within an easily graspable dimension of the political/technical the direct struggles over IPR, for example, could mitigate some of the worst effects of an expansion of capital or of an instrumental reason immanent to the ontology of a technological culture. However, to look to a more primordial level, the ontological, we might find in free culture alternative possibilities available where we might develop free relations with our technologies and hence new ways of being-in-the-world." Read On Byways and Backlanes: The Philosophy of Free Culture by David M. Berry, NOEMA. Originally posted on networked_performance by jo
José Diaz Seng > DBIx-Table-TestDataGenerator-0.001 > DBIx::Table::TestDataGenerator (Module Version: 0.001; Latest Release: DBIx-Table-TestDataGenerator-0.005)

NAME

DBIx::Table::TestDataGenerator - Automatic test data creation, cross DBMS

VERSION

Version 0.0.1

SYNOPSIS

    use DBIx::Table::TestDataGenerator;

    my $generator = DBIx::Table::TestDataGenerator->new(
        dbh    => $dbi_database_handle,
        schema => $schema_name,
        table  => $target_table_name,
    );

    # simple usage:
        target_size => $target_size,
        num_random  => $num_random,
        seed        => $seed,

    # extended usage handling a self-reference of the target table:
        target_size    => $target_size,
        num_random     => $num_random,
        seed           => $seed,
        max_tree_depth => $max_tree_depth,
        min_children   => $min_children,
        min_roots      => $min_roots,

    # instantiation using a custom DBMS handling class
    my $generator = DBIx::Table::TestDataGenerator->new(
        dbh                => $dbi_database_handle,
        schema             => $schema_name,
        table              => $target_table_name,
        custom_probe_class => $custom_probe_class_name,
    );

DESCRIPTION

There is often the need to create test data in database tables, e.g. to test database client performance. The existence of constraints on a table makes it non-trivial to come up with a way to add records to it. The current module inspects the table's constraints and adds a desired number of records. The values of the fields either come from the table itself (possibly incremented to satisfy uniqueness constraints) or from tables referenced by foreign key constraints. The choice of the copied values is random for a number of runs the user can choose; afterwards the values are chosen randomly from a cache, reducing database traffic for performance reasons. The user can define seeds for the randomization to be able to reproduce a test run. One nice thing about this way to construct new records is that, at least at first sight, the added data looks like real data, at least as real as the data initially present in the table was.

A main goal of the module is to reduce configuration to the absolute minimum by automatically determining information about the target table, in particular its constraints. Another goal is to support as many DBMSs as possible. Currently Oracle, PostgreSQL and SQLite are supported; further DBMSs are in the works, and one can add further databases or change the default behaviour by writing a class satisfying the TableProbe role.

In the synopsis, an extended usage has been mentioned. This refers to the common case of having a self-reference on a table, i.e. a one-column wide foreign key of a table to itself where the referenced column constitutes the primary key. Such a parent-child relationship defines a rootless tree, and when generating test data it may be useful to have some control over the growth of this tree. One such case is when the parent-child relation represents a navigation tree and a client application processes this structure. In this case, one would like to have a meaningful, balanced tree structure since this corresponds to real-world examples. To control tree creation the parameters max_tree_depth, min_children and min_roots are provided. Note that the nodes are being added in a depth-first manner.

new

Return value: a new TestDataGenerator object.

Creates a new TestDataGenerator object. If the DBMS in question does not support the concept of a schema, the corresponding argument may be omitted. If a DBMS currently not supported by DBIx::Table::TestDataGenerator is to be supported, or the behaviour of the current TableProbe class responsible for handling the DBMS must be changed, one may provide the optional custom_probe_class parameter, custom_probe_class being the name of a custom class impersonating the TableProbe role.

dbh

Accessor for the DBI database handle.

schema

Accessor for the database schema name.

table

Accessor for the name of the target table.

custom_probe_class

Accessor for the name of a custom class impersonating the TableProbe role.

This is the main method; it creates and adds new records to the target table. In case one of the arguments max_tree_depth, min_children or min_roots has been provided, the other two must be provided as well.

Return value: Nothing, only called for the side-effect of adding new records to the target table. (This may change, see the section FURTHER DEVELOPMENT.)

INSTALLATION

To install this module, run the following commands:

    perl Build.PL
    ./Build test
    ./Build install

When installing from CPAN, the install tests look for the environment variables TDG_DSN (connection string), TDG_USER (user), TDG_PWD (password) and TDG_SCHEMA (schema), which may be used to test the installation against an existing database. If TDG_DSN is found, the install will try to use this connection string and the tests will fail if no valid database connection can be established. If TDG_DSN is not found, the installation creates an in-memory SQLite database provided for free by the DBD::SQLite module and tests against this database. The module has been tested on a Windows 7 32-bit machine, both on Windows using Strawberry Perl and on a VirtualBox image of Fedora-17-x86 running on the same Windows machine.

ACKNOWLEDGEMENTS

A big thank you to all Perl coders on the dbi-dev, DBIx-Class and perl-modules mailing lists and on PerlMonks who have patiently answered my questions and offered solutions, advice and encouragement; the Perl community is really outstanding. Special thanks go to Tim Bunce (module name / advice on keeping the module extensible), Jonathan Leffler (module naming discussion / relation to existing modules / multiple suggestions for features), brian d foy (module naming discussion / mailing lists / encouragement) and the following Perl monks (see the threads for user jds17 for details): chromatic, erix, technojosh, kejohm, Khen1950fx, salva, tobyink (3 of 4 discussion threads!), Your Mother.

AUTHOR

José Diaz Seng, <josediazseng at>

BUGS

Please report any bugs or feature requests to bug-dbix-table-testdatagenerator at, or through the web interface at; I will be notified, and then you'll automatically be notified of progress on your bug as I make changes.

SUPPORT

You can find documentation for this module with the perldoc command:

    perldoc DBIx::Table::TestDataGenerator

You can also look for information at:

COPYRIGHT AND LICENSE

Copyright 2012 José Diaz Seng. See for more information.
Philip Crow > Bigtop > docs/keyword_cookbook/controller/gen_uses/discussion

If your generated controller module needs to load a module, include a gen_uses statement in your controller's block. gen_uses takes a comma-separated list of modules to use. If you want to control their import lists, use pairs. For example, this will use the modules with default importing:

    gen_uses Your::Module, Some::Other::Module;

Add pairs to get the imports of your dreams:

    gen_uses Your::Module => `qw( :everything :and then some )`;

Note that the value will be used literally to produce this:

    use Your::Module qw( :everything :and then some );

So, qw is a good choice (as it usually is).

See also stub_uses, uses, and plugins. The latter is likely the only good choice if the module you want to use is a Gantry plugin.

Build the example with:

    bigtop -c example.bigtop all

Look for Exotic in lib/Kids/GEN/. Notice how Your::Module lists imports explicitly. If you don't provide a list, all of the @EXPORT items will be explicitly listed.
Q: When remotely connecting to my server using Windows' Remote Desktop Connection application, I can save connection settings in an RDP file and then easily edit it by right-clicking and selecting "Edit". I can also create an RDP file for a RemoteApp program in my server's RemoteApp Manager, but it is impossible to edit the settings of that RDP file in the same way as for the RDP file created from the Remote Desktop Connection application. Why is that? What is the difference between these two types of RDP files, and what is the difference between these two types of remote desktop sessions? Is there any way to change the IP address in the Computer parameter of an RDP file created from the RemoteApp Manager?

A (accepted, +1): RDP files are (or at least used to be) plain text files. Try opening one in Notepad or your favourite text editor.

Comment: Thanks! Your info is really helpful. I easily opened both files with Notepad and so was able to edit any parameter. Additionally, I can see all the parameters the RDP file consists of, and google them. +1 – rem Jul 9 '10 at 5:03
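Since the accepted answer notes that .rdp files are plain text, here is a minimal sketch of the kind of name:type:value lines you would typically find inside one (the address and username below are made up for illustration; only the field names reflect common .rdp settings):

    full address:s:192.0.2.10
    username:s:CONTOSO\jdoe
    screen mode id:i:2

Each line is a setting name, a one-letter type code (s for string, i for integer), and the value, so changing the Computer parameter amounts to editing the "full address" line in any text editor.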
Q: I have users that need to access a website with a different client certificate depending on what function they are trying to perform. The problem is that once a user logs in with one of the client certificates, the only way they can log in with a different certificate is to first close every instance of Internet Explorer that is open. Is there any way to force Internet Explorer to try to log in with a different certificate? I have tried having the users click the "Clear SSL state" button under Internet Options > Content, but that seems to work only about 25% of the time.

A: Why don't you get a SAN certificate? That would work across multiple domains using a single certificate.

Comment: We have no control over the certificates or the website. The website generates the certificates and we can only use the certificates that it generates. They give us one certificate per role, but we have users that fill multiple roles, which means they have to use more than one certificate. – Adam Hughes Sep 1 '11 at 13:54
Take the 2-minute tour × I have written a script that I am using to push and deploy a new service to several machines under my control, and in order to execute the process I am using ssh to remotely start the process. Unfortunately, whenever I use SSH to start the process, the SSH command never seems to return, causing the script to stall. The command is specified as: ssh $user@$host "/root/command &". Whenever I run simple commands, such as ps or who, the SSH command returns immediately, however when I try and start my process it does not return. I have tried tricks like wrapping my process in a simple bash script that starts the process and then exits, however this also hangs the SSH command (even if the bash script echos a success message, and exits normally). Does anyone have any insight into what is causing this behaviour, and how I can get the SSH command to return as soon as the process has been started? Thanks for your insights! share|improve this question Post the exact command line you are using ... omit passwords/usernames/IPs –  Joseph Kern Jul 6 '09 at 15:26 ssh $SSH_USER@$HOST_ADDR "/root/AppName &" –  rmrobins Jul 6 '09 at 15:30 I should mention that I have set up SSH keys, so it is not necessary to use passwords to run commands on the remote systems. –  rmrobins Jul 6 '09 at 15:32 @rmrobins, a very good step -- setting public key auth. –  nik Jul 6 '09 at 15:34 add comment 5 Answers up vote 15 down vote accepted SSH connects stdin, stdout and stderr of the remote shell to your local terminal, so you can interact with the command that's running on the remote side. As a side effect, it will keep running until these connections have been closed, which happens only when the remote command and all its children (!) have terminated (because the children, which is what "&" starts, inherit std* from their parent process and keep it open). 
So you need to use something like ssh user@host "/script/to/run < /dev/null > /tmp/mylogfile 2>&1 &" The <, > and 2>&1 redirect stdin/stdout/stderr away from your terminal. The "&" then makes your script go to the background. In production you would of course redirect stdin/err to a suitable logfile. Just found out that the < /dev/null above is not necessary (but redirecting stdout/err is). No idea why... share|improve this answer Thanks, this is exactly what I was looking for - it didn't occur to me that the process would inherit the std* even when using & to launch it in the background. –  rmrobins Jul 6 '09 at 15:52 add comment Another alternative would be to fire up a detached screen(1), something like: ssh -l user host "screen -d -m mycommand" This will start a detached screen (which captures all the interaction inside of it) and then immediately return, terminating the ssh session. With a bit more ingenuity, you can solve quite complex remote command calls this way. share|improve this answer add comment This is useful if ssh is going to ask for passwords or passphrases, but the user wants it in the background. This remote site is with something like ssh -f host xterm. If the ExitOnForwardFailure configuration option is set to “yes”, then a client started with -f will wait for all remote port for‐ wards to be successfully established before placing itself in the share|improve this answer The issue with this method is that the SSH command is not actually terminated, it is simply hidden in the background, so the host running the script will have a large number of SSH commands in the background that never return because the SSH commands never return (i.e. the root problem I wish to solve) –  rmrobins Jul 6 '09 at 15:38 add comment You could try nohup. Man nohup for more details. ssh host "nohup script &" If you want to keep the output on the remote machine, here's a variant. 
ssh user@host 'export REMOTE=myname; nice nohup ./my-restart > logfile.log 2>&1 &' share|improve this answer add comment I think the correct way would be ssh user@host exec script.sh & share|improve this answer No, that does not work for me. Why should it? –  sleske Jul 6 '09 at 15:45 I have not used the background push over ssh, was trying without a term at hand. my bad. –  nik Jul 6 '09 at 16:01 add comment Your Answer
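The accepted answer's point about inherited file descriptors can be demonstrated without any remote host at all. The sketch below (plain bash, no ssh involved; the scratch file comes from mktemp) launches a background child whose stdin, stdout and stderr are all redirected, so the launching shell returns immediately while the child keeps running, which is the same reason the fully redirected ssh command returns at once:

```shell
# Create a scratch log file to stand in for /tmp/mylogfile.
LOG=$(mktemp)

# Launch a detached worker the same way the accepted answer does over ssh:
# all three std streams are redirected, and '&' puts it in the background.
# The inner bash exits immediately even though its child is still running.
bash -c "{ sleep 1; echo done; } < /dev/null > '$LOG' 2>&1 &"
echo "launcher returned"

# Give the detached child time to finish, then inspect its output.
sleep 2
cat "$LOG"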
American Apparel finally admits: Yes, they're all about sex

Tales of public masturbation and other sexual harassment scandals surround Dov Charney, the mad genius behind American Apparel (and trust me, he's laughing all the way to the bank). And who can ignore all the hyper-sexual, borderline-pedophilia ads featuring young hipsters wearing cotton undies and photographed in (usually) lewd positions? Recently, American Apparel decided to embrace its sexual side further by peddling the Hitachi Magic Wand, a sex toy that some call the "Cadillac of Vibrators." So now, you can buy all your summer T's and, um, tease in one place. How convenient!

PS. Actually, according to Salon, there's one other store where you can pick up both a Magic Wand and a T-shirt, and that place is Wal-Mart.
Smithsonian Digital Repository > Browsing by Author Ramalisonina

Showing results 1 to 1 of 1:

Issue Date: 1997
Title: Environmental change, extinction and human activity: evidence from caves in NW Madagascar
Author(s): Burney, David A.; James, Helen F.; Grady, Frederick V.; Rafamantanantsoa, Jean Gervais; Ramalisonina; Wright, Henry T.; Cowart, James B.
Comment: Re:New Technology? (Score 1) 504
by Bu11etmagnet (#34371886) Attached to: How Apple Had a Spectacular Year

No, no, no. Apple is a hardware company. That's where their money comes from. They have the highest profit return in the entire computer industry. They make software to lure people into buying Apple hardware. Think about Boot Camp: Apple has no problem whatsoever if people run Microsoft software on Apple hardware. What they fought tooth and nail to prevent is people running Apple software (OS X) on non-Apple hardware.

Comment: Re:Don't forget Red State Stupidity. (Score 1) 1088
by Bu11etmagnet (#33225394) Attached to: Obama Wants Allies To Go After WikiLeaks

Wilson - Democrat - WW1
FDR - Democrat - WW2 (FDR went even further than most: he had the US Navy attacking German warships months before war was declared by Germany or authorized by the Congress)
Truman - Democrat - Korea
JFK/LBJ - Democrats - Vietnam

And those are just the major wars. Jimmy Carter was the only Democrat president who didn't start a war. He wasn't re-elected.
-- Joseph Heller, Good as Gold

Comment: Re:Truth as a defense? (Score -1, Troll) 146
by Bu11etmagnet (#30533282) Attached to: A New Libel Defense In Canada; For Blogs Too

I am, however, a bit disillusioned about free speech now. As far as I can tell, there isn't any. It's a lie.

Wrong. Your wife wasn't shot, fired from her work, sent to prison, put under house arrest, or forced into exile. She just had to support the consequences of her actions. You ought to be familiar with that concept.
Forgot your password? Comment: Re:Cold (Score 1) 94 You'd need battery warmers to weather the overnight cold, but assuming they'd be plugged in to charge during the worst overnight cold, the battery heater would be running during the lowest temperatures with the standard use procedure. The real question is what happens if you get a cold snap in the middle of the day. Since most suburban school districts stagger the school day for elementary, middle, and junior/senior high schools to minimize the size of their bus fleets, it's quite conceivable that these things would be on the road for close to 8 hours straight without a chance to recharge in between the morning and afternoon commutes. Comment: Re:Comfort with not knowing (Score 1) 113 by RightwingNutjob (#46414387) Attached to: Mathematicians Are Chronically Lost and Confused See, there's a difference between knowing what you don't know and living in a sea of ambiguity the way the OP seems to imply. In mathematics especially, there is a very tall and elaborate edifice of deductions and axioms from which all exploration takes place. For example, one of the more mind-bending exercises in undergrad abstract algebra is proving Peano's axioms for integers. On the one hand you could say "well, I thought I knew basic arithmetic, but now I have to question even that: I'm lost!" But on the other hand, when you go through that exercise, you have very powerful tools in your toolbox: deduction, group theory, ring theory, etc, which you spend time building up and exercising exhaustively before you attack the natural numbers. So you're not really "lost" as in at sea without a clue, but you're just approaching something from a new direction with very well-defined assumptions and rigid reasoning. 
And if I can hope to contribute to the religious debate without sparking too big of a flame war: maybe this same conflation between being completely lost and working in an unfamility coordinate system may be at play when Skeptics and scientists describe why they're athiests. Empirical evidence and deductive reasoning can peel away some scripture as obviously false, but when you're denying a higher power by an appeal to logic/reason/etc, you're still assuming the presence of this abstract thing called mathematical/empirical truth, and perhaps even Order with a capital 'O'. I'm sure I'm not at all speaking for any sort of majority view of believers or skeptics or deists, but why is it not valid to call that God and be comforted by its existence, as opposed to say chaos? Comment: Re:Tell me again... (Score 0) 538 by RightwingNutjob (#46377991) Attached to: U.S. Students/Grads Carrying Over $1 Trillion In Debt Let me propose an amendment: "higher education in ornamental subjects should be for people who can pay their own way" I'll take a wild stab and guess that people who take on monstrous loans to study useful things like math, physical science, or engineering put themselves in a place where they can actually pay off the loans in a reasonable amount of time without trouble. It's the people who spend four years boozing, partying, and "learning" about their humanities professors' favorite interpretation of reality that find themselves unable to meet their financial obligations. Comment: Re:Yes, but appropriate for age level grade 6-12 (Score 1) 313 by RightwingNutjob (#46365287) Attached to: Should programming be a required curriculum in public schools? Programming should be part of the math curriculum and grow in complexity with it. After teaching math to kids grades 0-12 for the past 150+ years, we've got a rough idea of where the cutoffs in complexity are, and at what level it doesn't stick to such-and-such a percentage of kids of a certain age. 
But if you can expect a 10 year old to compute the volume of a box, you should also expect him to compute the total volume of a list of boxes supplied in a flat comma-delimited text file with one L,W,H per line. And extra credit for proper handling of mixed length units indicated after the numbers. Now go for the gold and bring out those idiot counting cubes you used in 2nd grade, glue 'em together into boxes, make a list, and see who's program can compute the right answer measured against the weight of the boxes! Comment: Re:Not to state the obvious... (Score 1) 237 by RightwingNutjob (#46281735) Attached to: A New Car UI At work we recently got ourselves a smart tablet ink thing for one of the conference rooms so people can give 'chalk talks' electronically. It's a touch screen with a special pen like the one in the checkout counter that lets you doodle on your powerpoints. So 1k for the computer, probably 2-3k for the tablet, 1-2k for the projector, and God knows how much for the software licenses. The thing sort of works, but occasionally crashes, and takes a while to set up. Back in the cave days of the mid 1990's you'd use an overhead projector, and pay less than 2k for it. Comment: Re:Lego Mindstorms (Score 1) 876 by RightwingNutjob (#46204641) Attached to: Ask Slashdot: Why Are We Still Writing Text-Based Code? Most of the things you need to do for engineering applications *do* lend themselves to a data flow paradigm, but a lot of the things under the hood do not. Data acquisition, process control, and the like lend themselves very much to such thinking. Iterative solvers and fitters, less so, but it can be done because the data flows in strictly parallel or strictly serial paths. Applications where the data flow isn't 'laminar' and jumps around an awful lot, like learning algorithms, image segmentation, and pattern matching doesn't lend itself to data flow programming at all. 
That's not to say it can't be done, but the resulting diagrams will have crazy jumps and the LabView equivalent of global variables all over the place, because the data processing is not local, and the result at location x at time t doesn't only depend on the input in the neighborhood of x and time t. Comment: Re:Lego Mindstorms (Score 1) 876 by RightwingNutjob (#46200597) Attached to: Ask Slashdot: Why Are We Still Writing Text-Based Code? LabView is part of the problem, and the problem is a misapprehension of what problems programming solves. NI software sucks (and the constant excuses on the NI forums from the support people reek of a lack of technical know-how in the company's software people to fix them, which isn't surprising for a large codebase developed half a generation ago). The only reason LabView is used is that the graphical paradigm makes *some* limited applications so much better that it's worth it to deal with the NI clowns to use it. Trouble is that people who don't know better do those limited things with a graphical language and think they can do everything with it. by RightwingNutjob (#46200543) Attached to: Ask Slashdot: Why Are We Still Writing Text-Based Code? Motion control is a fairly restricted area of application where most of the thinking goes into the application of very mature mathematics derived from the solution of a small set of continuous differential equations. Add a small number of ifs and whiles to interlock the thing to behave safely and you're done. The information content of the resulting code is low compared to, say, a word processor, or an operating system, or a low-level device driver. Likewise, PLCs tend to control fairly simple individual things for which ladders are as good a paradigm for as independent lines or small numbers of lines of code in a procedural language, with very little program-spanning complexity. 
If you want your motion controller or automation system to do something special based complex transformations to other external inputs, that kind of logic is usually done outside the graphical programming language and feeds a single input block or pad or pin in the graphical paradigm, but has tens to hundreds of thousands of lines of C code behind it. The example I've got in my head is something like an aircraft's autopilot: Moving the control surfaces in response to the pilot's control input and the attitude gyro is something that can (and probably should) be coded up in a graphical language, both to provide checkability against the control theory math used in the design and to eliminate a possible point of coding error in the most critical inner loop of the code. Feeding the autopilot with a trajectory based on GPS, airspeed, navigation waypoints, weather radar, etc, should be done in a procedural language because the complexity (both lenght and interconnectedness) of the algorithms makes it very painful to implement them graphically rather than procedurally. Comment: Re:WTF? (Score 1) 138 by RightwingNutjob (#46096993) Attached to: U.S. Border Patrol Drone Goes Down, Rest of Fleet Grounded Hokay. 1 Predator = 12mil/27 flight hours. Subtract 3 hrs for takeoff/landing and getting on station for 24 hrs, so you get 2 hrs aloft/ 1 million = 500k/hr. 1 Cessna = 200k (or so). No brainer, right? Wrong: A Cessna has a range of (guessing) 1000km for about 5 hours aloft/fuel tank. Count the takeoff, etc, and now you're down to 2-3 hrs aloft. So that's 50k /hr. So if you want 24 hours of coverage, you need at least three Cessnas to overlap, so now you're up to 150k /hr. If you want to have the same service ceiling as the Predator, each plane probably will cost 500k for something beefier, so you've more than doubled the cost, and your 150k/hr for three planes turns into ~400k/hr. This is already close to a Predator B. 
Now let's add the fact that the Predator has a 3000lb optical surveillance package already built in. Your Cessna carries 4-6 passengers, depending on whether you've bought the 200k one or the 500k one, which is only (let's be generous) 1000lb of payload, not counting the pilot. And then you actually have to buy flight-qualified surveillance equipment that you can bolt to the bottom/side of your plane without hosing its flight performance. Big optics are expensive. Infrared and night vision cameras are more expensive. Going from my own experience, a package like the one on a Predator B, even if you bought all the parts and built it yourself, can easily run upwards of 150k per plane, not including integration costs. And you need to pay for three of them (one per plane). So if you've paid 200k for the plane, you're up to 350k, and if you've paid 400k, you're up to 550k for two flight hours. That's more expensive than a small manned airplane.
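The per-plane arithmetic in this comment is easy to lose track of in prose. Here it is laid out as a sketch; every figure is the commenter's own back-of-envelope guess, not real procurement data:

```python
# Rough numbers from the comment above -- all of them the commenter's
# guesses, not real data.
cheap_cessna = 200_000    # basic airframe
beefy_cessna = 400_000    # airframe closer to the Predator's service ceiling
sensor_package = 150_000  # flight-qualified optics/IR package, per plane
planes_needed = 3         # rotating aircraft for 24-hour coverage

cheap_equipped = cheap_cessna + sensor_package
beefy_equipped = beefy_cessna + sensor_package

print(f"per plane, cheap airframe: ${cheap_equipped:,}")   # $350,000
print(f"per plane, beefy airframe: ${beefy_equipped:,}")   # $550,000
print(f"24h fleet of beefy planes: ${planes_needed * beefy_equipped:,}")
```

This reproduces the comment's 350k and 550k per-plane totals and makes explicit that the fleet cost is three times the equipped per-plane cost.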
Comment: Re:I know i'm going to get zapped by this but... (Score 1) 156 I think so, but at the same time it reminds me of "old school" cable where people were buying the Mike Tyson fight and charging for it. I personally saw the Tyson and Holyfield fight (Tyson bit off his ear) and had to pay $20.00 for it, and it lasted what? Fifteen seconds? Thank god the $20.00 paid for beer afterward. That seems to be the problem in my mind: the companies concerned are not getting a cut of that revenue. Comment: I know i'm going to get zapped by this but... (Score 1, Interesting) 156 I found out about Popcorn Time from Huffington Post last week and used it 3 times. It was amazing. If you did not get the chance to see it then, too bad. Netflix sucks by comparison for something that lasted 4 days. Now as for legality, I feel something might have been illegal about it (hehe) but I wish it were not. I am totally unashamed about what I did. It truly was something to see. NASA Offers Bounty for Improved Asteroid Detection Algorithms Submitted by Hugh Pickens DOT Com Hugh Pickens DOT Com writes "Dara Kerr reports at CNET that NASA is launching an "Asteroid Data Hunter" contest to reach out to people to help create algorithms that identify asteroids in images captured by ground-based telescopes and will give away $35,000 in awards to competition winners. The winning solution must increase the detection sensitivity, minimize the number of false positives, ignore imperfections in the data, and run effectively on all computer systems. "Current asteroid detection initiatives are only tracking one percent of the estimated objects that orbit the Sun," says Chris Lewicki. "We are excited to partner with NASA in this contest to help increase the quantity and knowledge about asteroids that are potential threats, human destinations, or resource rich." NASA's goal is to discover those unknown asteroids and then track and characterize them.
For the contest, citizen scientists will be allowed to study images taken from ground-based telescopes to see if they can develop improved algorithms for identifying asteroids. If dangerous asteroids are found, NASA could determine if they'd be viable for a re-direction into a lunar orbit. “For the past three years, NASA has been learning and advancing the ability to leverage distributed algorithm and coding skills through the NASA Tournament Lab to solve tough problems," said Jason Crusan, NASA Tournament Lab director. "We are now applying our experience with algorithm contests to helping protect the planet from asteroid threats through image analysis.”" Both Genders Think Women Are Bad at Basic Math Submitted by sciencehabit sciencehabit writes "Think women can’t do math? You’re wrong—but new research shows you might not change your mind, even if you get evidence to the contrary. A study of how both men and women perceive each other's mathematical ability finds that an unconscious bias against women--by both genders--could be skewing hiring decisions, widening the gender gap in mathematical professions like engineering." Scientists Build Thinnest Possible LEDs Submitted by minty3 minty3 writes "LEDs are commonly found in TV screens, computer monitors and light bulbs. While the light sources are known to be small, scientists have recently built the thinnest possible LEDs using tungsten diselenide. The nano-sized LEDs are arguably stronger and more energy efficient than their thicker counterparts." New Diet, Sexual Attraction May Have Spurred Europeans' Lighter Skin Submitted by sciencehabit sciencehabit writes "Why do some humans have lighter skin than others? Researchers have long chalked up the difference to tens of thousands of years of evolution, with darker skin protecting those who live nearer to the equator from the sun’s intense radiation.
But a new study of ancient DNA concludes that European skin color has continued to change over the past 5000 years, suggesting that additional factors, including diet and sexual attraction, may also be at play. In particular, when blue eyes and blonde hair first arose, they may have been considered so unique--and desirable--that anyone who had them had a sexual advantage." Ukraine May Have To Rearm With Nuclear Weapons Says Ukrainian MP Submitted by Anonymous Coward NYT Op-Ed: Stop Glorifying Hackers Submitted by Geste High yield urban vegetable gardening system with LED lighting Submitted by Hallie Siegel The Man Making Bank Off Tesla and SpaceX Submitted by pacopico One Bitcoin Miner is Mining $8 Million Each Month Submitted by DavidGilbert99 DavidGilbert99 writes "While most people are toiling in their bedrooms to try and optimise their graphics cards to most efficiently crack the complex mathematical equations needed to mine a bitcoin, one Seattle-based bitcoin enthusiast has taken things to a whole new level. Dave Carlson has two warehouses full of purpose-built mining rigs running 24/7 and which are mining an estimated $8 million every month — though his electricity bill is a bit high...." LABONFOIL: Portable Bond-Style Lab Promises Low-Cost Detection and Diagnosis Submitted by Zothecula Zothecula writes "A European project coordinated by Ikerlan and CIC microGUNE is developing a James Bond-style automated laboratory called "LABoratory skin patches and smart cards based ON FOILs and compatible with a smartphone" (LABONFOIL). Using lab-on-a-chip technology and smart patches to detect a wide variety of substances and diagnose diseases, the goal of the project is to create a cheap, portable laboratory that can interact with smart devices."
Comment: Re:They likely give the Chinese govt full access (Score 1) 46 by npridgeon (#46310845) Attached to: Why Is Dropbox Back On the Chinese Market? That's right, and I would imagine the people of China know this, and act accordingly. China's got the right idea when it comes to censorship and monitoring of the internet. They are a ?communist? country so they have the right, or even the responsibility, to monitor how people use public utilities. As for America and the NSA, they claim to be an open society with a government of the people, for the people, yet they increasingly treat those same people as criminals and terrorists for doing simple things like making backup copies of movies or music that they may have purchased completely legally. Comment: Re:Use end to end encryption? (Score 1) 234 by npridgeon (#45247357) Attached to: Ten Steps You Can Take Against Internet Surveillance I can see this being the thing that pushes the next generation of processor development and operating system development. If companies can make encryption automatic, easy and invisible to the end users - and trustworthy - it will catch on. At first it will slow our computers, which will drive demand for bigger/faster computers. Then, someday, it'll be ubiquitous and common practice. Comment: It happens (Score 1) 228 by npridgeon (#45228217) Attached to: The Boss Is Remotely Monitoring Blue-Collar Workers I work in a sawmill in Canada. All our cameras are routed to a central location and recorded. It's a great troubleshooting tool, but also a great way to make sure people are doing their jobs. If our internet connection were faster, this information would be available over the internet now. It is happening now. Comment: why open in a new page?
(Score 1) 1191 by npridgeon (#45009407) Attached to: Come Try Out Slashdot's New Design (In Beta) Why not make it with a "show more" link that expands, even showing comments, and "show less" to go back to the topics? When I hit back, it shows the beta hints again. Interesting design. Probably looks better on a tablet in portrait. Unfortunately, my monitor is in landscape mode. If the 50% wasted space is for ads, adblocker and noscript is gonna block them. Comment: Re:Short answer: (Score 1) 686 by npridgeon (#42080537) Attached to: Ad Blocking – a Coming Legal Battleground? People actively seek out and view the webpage Notice how you didn't say "I spam my oversized/blinking/popup banner all over other sites to get people to view my webpage"? People find your page because you provide information they need, not because they see your ads. Personally, I can honestly say that not once in my life have I read/viewed/purchased anything from clicking a banner. And yeah, I know the whole subconscious brand recognition spiel... Still - I never buy anything on the brand name alone. Except for Sony, their products I don't buy specifically because of their brand name. But I digress. I don't have any mod points to mod you up, and you're already at 5 anyway, but... I agree completely with you about Sony. They could sell pure gold for a cent an ounce, and I wouldn't buy it. Worst company on the planet. A true evil corporation with no care for their customers, their customers' privacy or security.
Comment: Re:In other news (Score 1) 255 This American can't even find a map anymore. I used to have 2 or 3 maps in the car, along with my Thomas Guide. Now I have Google Maps on the phone, GPS in the car, and a compass in the dashboard; I had trouble even remembering the name of the Guide the other day. Of course my wife still tells me where to go... Comment: Re:Spreading the wealth... (Score 1) 109 by sysrammer (#46343313) Attached to: New Review Slams Fusion Project's Management Spreading the wealth pleases the folks on the left, and enhancing the military pleases the folks on the right. A win-win situation, as far as politics in the US is concerned. I suppose that another way of dealing with the politics of a complex project is the way Putin financed the Olympics. He spread the wealth too. Comment: Re:The summary is missing some new link (Score 1) 134 by sysrammer (#46249331) Attached to: Game Developers' Quest To Cross the Uncanny Valley To save Slashdotters' time, here's the "blogged about" part. The words "Uncanny Valley" are linked to the Wikipedia page. The blog is all about Skyrim, so all you 'Rimmers out there might like to check it out.
Gentoo Linux Bluetooth Guide

Ioannis Aslanidis, Douglas Russell, Marcel Holtmann, Shyam Mani, Łukasz Damentko

This guide will explain how to successfully install a host Bluetooth device, configure the kernel properly, explain all the possibilities that the Bluetooth interconnection offers and how to have some fun with Bluetooth.

Version 2, 2009-07-16

Introduction

What is Bluetooth?

Bluetooth is an industrial specification that provides users a way to connect and exchange information between devices like personal computers, PDAs or mobile phones. Using the Bluetooth technology, users can achieve wireless voice and data transmission between devices at a low cost. Bluetooth also offers the possibility to create small wireless LANs and to synchronize devices.

About the content of this guide

The first part of this guide explains how to configure the system kernel, identify the Bluetooth devices installed on the system and detected by the kernel and install the necessary basic Bluetooth tools. The second part covers how to detect remote devices and how to establish a connection from or to them by setting up radio frequency communication (RFCOMM). The last part of the guide lists in detail applications that can take advantage of all the possibilities offered by the Bluetooth technology.

Configuring the system

Kernel Configuration

As the latest Linux stable kernel is 2.6, the configuration will be done for these series of the kernel. Most Bluetooth devices are connected to a USB port, so USB will be enabled too. Please refer to the Gentoo Linux USB Guide.
Networking ---> <*> Bluetooth subsystem support ---> --- Bluetooth subsystem support <M> L2CAP protocol support <M> SCO links support <M> RFCOMM protocol support [*] RFCOMM TTY support <M> BNEP protocol support [*] Multicast filter support [*] Protocol filter support <M> HIDP protocol support Bluetooth device drivers ---> <M> HCI USB driver [*] SCO (voice) support <M> HCI UART driver [*] UART (H4) protocol support [*] BCSP protocol support [*] Transmit CRC with every BCSP packet <M> HCI BCM203x USB driver <M> HCI BPA10x USB driver <M> HCI BlueFRITZ! USB driver (The four drivers below are for PCMCIA Bluetooth devices and will only show up if you have also selected PCMCIA support in your kernel.) <M> HCI DTL1 (PC Card) driver <M> HCI BT3C (PC Card) driver <M> HCI BlueCard (PC Card) driver <M> HCI UART (PC Card) device driver (The driver below is intended for HCI Emulation software.) <M> HCI VHCI (Virtual HCI device) driver (Move back three levels to Device Drives and then check if USB is enabled. This is required if you use a Bluetooth dongle, which are mostly USB USB support ---> <*> Support for Host-side USB --- USB Host Controller Drivers <M> EHCI HCD (USB 2.0) support [ ] Full speed ISO transactions (EXPERIMENTAL) [ ] Root Hub Transaction Translators (EXPERIMENTAL) <*> OHCI HCD support <*> UHCI HCD (most Intel and VIA) support < > SL811HS HCD support Now we'll reboot with our new kernel. If everything went fine, we will have a system that is Bluetooth ready. Your USB device may have two modes the default of which may not be HCI, but HID. If this is your case, use hid2hci to switch to HCI mode. Your system will not remember this change when you next reboot. (One way to check for the device) # cat /proc/bus/usb/devices | grep -e^[TPD] | grep -e Cls=e0 -B1 -A1 (The Cls=e0(unk. ) identifies the Bluetooth adapter.) 
T: Bus=02 Lev=02 Prnt=03 Port=00 Cnt=01 Dev#= 4 Spd=12 MxCh= 0 P: Vendor=0a12 ProdID=0001 Rev= 5.25 (Some might show up on lsusb from sys-apps/usbutils) # lsusb Bus 003 Device 002: ID 046d:c00e Logitech, Inc. Optical Mouse Bus 003 Device 001: ID 0000:0000 Bus 002 Device 002: ID 0db0:1967 Micro Star International Bluetooth Dongle BlueZ - The Bluetooth Stack Installing BlueZ Now that the device is detected by the kernel, we need a layer that lets applications communicate with the Bluetooth device. BlueZ provides the official Linux Bluetooth stack. The ebuilds that provide what we need are bluez-libs and bluez-utils. Devices that need Broadcom firmware files or the like may need bluez-firmware. # emerge net-wireless/bluez-libs net-wireless/bluez-utils BlueZ configuration and PIN pairing Now it's time to see if the Bluetooth device is being picked up correctly by the system. We start up the required Bluetooth services first. (Start up Bluetooth) # /etc/init.d/bluetooth start * Starting Bluetooth ... * Starting hcid ... [ ok ] * Starting sdpd ... [ ok ] * Starting rfcomm ... [ ok ] # hciconfig hci0: Type: USB BD Address: 00:01:02:03:04:05 ACL MTU: 192:8 SCO MTU: 64:8 RX bytes:131 acl:0 sco:0 events:18 errors:0 TX bytes:565 acl:0 sco:0 commands:17 errors:0 This shows that the Bluetooth device has been recognised. As you might have noticed the device is DOWN. Let's configure it so that we can bring it up. The configuration file is at /etc/bluetooth/hcid.conf. The required changes to the config file are shown below. For additional details please refer to man hcid.conf. 
(Recommended changes to be made to the file are shown) # HCId options options { # Automatically initialize new devices autoinit yes; (Change security to "auto") # Security Manager mode # none - Security manager disabled # auto - Use local PIN for incoming connections # user - Always ask user for a PIN security auto; # Pairing mode pairing multi; (You only need a pin helper if you are using <=bluez-libs-2.x and <=bluez-utils-2.x) (Change pin_helper to use /etc/bluetooth/pin-helper) # PIN helper pin_helper /etc/bluetooth/pin-helper; # Default settings for HCI devices device { (Set your device name here, you can call it anything you want) # Local device name # %d - device id # %h - host name name "BlueZ at %h (%d)"; # Local device class class 0x3e0100; # Inquiry and Page scan iscan enable; pscan enable; # Default link mode lm accept; # Default link policy lp rswitch,hold,sniff,park; (Leave as is, if you don't know what exactly these do) # Authentication and Encryption (Security Mode 3) #auth enable; #encrypt enable; After that, we have to configure the Bluetooth device PIN. That will help in pairing this device with another one. (Replace 123456 with your desired pin number.) This number (of your choice) must be the same in all your hosts with Bluetooth devices so they can be paired. This number must also be kept secret since anyone with knowledge of this number can essentially establish connections with your devices. Beginning with >=bluez-libs-3.x and >=bluez-utils-3.x, pin helpers have been replaced by passkey agents. There are a few different graphical passkey agents available to help manage your PIN, such as bluez-gnome and kdebluetooth. You can also use passkey-agent (found in bluez-utils) from the command line. Services configuration Now that we have concluded with the configuration of BlueZ, it's time to restart the necessary services. # /etc/init.d/bluetooth restart (We can also add it to the default runlevel.) 
# rc-update add bluetooth default * bluetooth added to runlevel default * rc-update complete. Let's be sure that the Bluetooth daemons started correctly. If we can see that both hcid and sdpd are running, then we configured Bluetooth the right way. After that, we can see if the devices are now up and running with the configured options. (Check to see if the services are running) # ps -ae | grep hcid 26050 ? 00:00:00 hcid # ps -ae | grep sdpd 26054 ? 00:00:00 sdpd # hciconfig -a hci0: Type: USB BD Address: 00:0A:0B:0C:0D:0E ACL MTU: 192:8 SCO MTU: 64:8 RX bytes:125 acl:0 sco:0 events:17 errors:0 Features: 0xff 0xff 0x0f 0x00 0x00 0x00 0x00 0x00 Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3 Link policy: RSWITCH HOLD SNIFF PARK Link mode: SLAVE ACCEPT Name: 'BlueZ at bluehat (0)' Class: 0x3e0100 Service Classes: Networking, Rendering, Capturing, Object Transfer, Device Class: Computer, Uncategorized HCI Ver: 1.1 (0x1) HCI Rev: 0x1e7 LMP Ver: 1.1 (0x1) LMP Subver: 0x1e7 Manufacturer: Cambridge Silicon Radio (10) Detecting and Connecting to Remote Devices Detecting Bluetooth devices in other hosts At this point we are now ready to detect Bluetooth devices installed in other machines. This is independent of the host Operating System. We will make use of the hcitool command for the same. # hcitool dev hci0 00:01:02:03:04:05 # hcitool scan Scanning ... 00:0A:0B:0C:0D:0E Grayhat # hcitool inq Inquiring ... 00:0A:0B:0C:0D:0E clock offset: 0x5579 class: 0x72010c Now that we know the MAC address of the remote Bluetooth devices, we can check if we paired them correctly. # l2ping 00:0A:0B:0C:0D:0E Ping: 00:0A:0B:0C:0D:0E from 00:01:02:03:04:05 (data size 20) ... 
20 bytes from 00:0A:0B:0C:0D:0E id 200 time 69.85ms 20 bytes from 00:0A:0B:0C:0D:0E id 201 time 9.97ms 20 bytes from 00:0A:0B:0C:0D:0E id 202 time 56.86ms 20 bytes from 00:0A:0B:0C:0D:0E id 203 time 39.92ms 4 sent, 4 received, 0% loss Setting up Radio Frequency Communication (RFCOMM) Please note that setting up radio frequency communication is optional. We can establish a radio frequency connection to another Bluetooth device using the rfcomm command. To make things a little easier especially for users with multiple devices that support Bluetooth, it is advisable to make a few changes to the default rfcomm config at /etc/bluetooth/rfcomm.conf. The whole segment of the config starting from rfcomm0 { and ending with } is the config for the device that will establish a connection at /dev/rfcomm0. In this case, we will only show one example, rfcomm0. You can add more devices as you see fit. (Only changes that might be needed are shown) rfcomm0 { # Automatically bind the device at startup (Creates the device node, /dev/rfcomm0 at start up) bind yes; # Bluetooth address of the device (Enter the address of the device you want to connect to) device 00:0A:0B:0C:0D:0E; After configuring RFCOMM, we can connect to any device. Since we've made the required settings to the /etc/bluetooth/rfcomm.conf file, we just issue the command shown below. In case you've not made changes to the config file, an alternative method is also shown in the code listing that follows (The 0 refers to the rfcomm0 in the config file) # rfcomm connect 0 Connected /dev/rfcomm0 to 00:0A:0B:0C:0D:0E on channel 1 Press CTRL-C for hangup (If you did not edit /etc/bluetooth/rfcomm.conf) # rfcomm connect 0 00:0A:0B:0C:0D:0E 1 Connected /dev/rfcomm0 to 00:0F:DE:69:50:24 on channel 1 Press CTRL-C for hangup The first parameter after the connect command is the RFCOMM TTY device node that will be used (usually 0). The second parameter is the MAC address of the remote device. 
The third parameter is optional and specifies the channel to be used. Please note that in order to connect to a device, that device must be listening for incoming connections. To do that, we have to explicitly tell it to listen. We can cancel the communication at any moment by just hitting CTRL+C. # rfcomm listen 0 1 Waiting for connection on channel 1 In a similar way to the connect command, the listen command can receive two parameters. The first one specifies the RFCOMM TTY device node (usually 0) that will be used to accept a connection, while the second is the channel that will be used. Each time you call the rfcomm command, you can also specify the physical device you want to use. Below you can see a small example specifying the physical device on the above two commands. # rfcomm -i hci0 listen 0 1 Waiting for connection on channel 1 (To listen on a particular device) # rfcomm -i hci0 connect 0 00:0A:0B:0C:0D:0E 1 (To use a particular device when connecting to another one) Desktop Applications for Bluetooth We have quite a few Bluetooth applications that run on the desktop and this chapter has been divided into 3 parts, one each for Gnome, KDE and Miscellaneous applications. For Gnome If you are a gnome user, you will most probably go with gnome-bluetooth. It provides the most basic yet most used functionalities, as you can see below. • gnome-bluetooth-manager: To manage Bluetooth remote devices. • gnome-obex-send: To send files to other devices. • gnome-obex-server: To receive files. # emerge gnome-bluetooth This adds menu entries under Applications > System Tools from where you can easily start up the manager or File sharing to transfer files between devices. To transfer files (the easy way): • From the Phone to the Computer - Send the file from the phone via Bluetooth and it will be picked up and saved to your /home always. gnome-phone-manager is a nifty app that you can use to send and receive messages to and from your phone, using only your system.
You do not have to touch your phone to read or send messages since all that happens through the application. You are also notified of a new message on your screen if the option is enabled under Preferences. Installation is a breeze as always. # emerge gnome-phone-manager KDE makes use of kdebluetooth and provides more utilities than its Gnome counterpart as seen below. • kbluetoothd: Bluetooth Meta Server. • kbtsearch: Bluetooth device/service search utility. • khciconfig: KDE Bluetooth Monitor. • kioclient: KIO command line client. • qobexclient: Swiss army knife for obex testing/development. • kbtobexclient: A KDE Bluetooth Framework Application. • kioobex_start • kbtserialchat • kbemusedsrv: KDE Bemused Server. • kbtobexsrv: KDE OBEX Push Server for Bluetooth. • kbluepin: A KDE KPart Application. • auth-helper: A helper program for kbtobexsrv that sends an authentication request for a given ACL link. # emerge kdebluetooth Other Interesting Applications • app-mobilephone/obexftp: File transfer over OBEX for mobile phones • app-mobilephone/bemused: Bemused is a system which allows you to control your music collection from your phone, using Bluetooth. • app-pda/multisync: Multisync allows you to sync contacts, calendar entries and notes from your mobile phone with your computer, over a Bluetooth connection (amongst other things). It includes such features as backing up this information and restoring it later, and syncing with the Evolution e-mail client. You will need the irmc USE flag set to ensure that multisync has Bluetooth support. • net-wireless/opd and net-wireless/ussp-push are command line tools (server and client) that can be used to send files to your mobile phone. Special thanks to Marcel Holtmann for his time and dedication to the Bluetooth development and for reviewing this guide. And big thanks to Douglas Russell for performing additional hardware tests and improving this guide.
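As a closing note on scripting the tools covered in this guide: hcitool, l2ping and rfcomm all take BD addresses of the form 00:0A:0B:0C:0D:0E. The sketch below (in Python; the helper names and the choice to wrap l2ping are ours, not part of the guide) validates an address before shelling out, which catches typos early:

```python
import re
import subprocess

# BD addresses as used throughout the guide: six colon-separated hex octets.
BD_ADDR = re.compile(r"^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")

def is_bd_addr(addr: str) -> bool:
    """Return True if `addr` looks like a Bluetooth device address."""
    return bool(BD_ADDR.match(addr))

def l2ping(addr: str, count: int = 4) -> None:
    """Ping a remote Bluetooth device, refusing malformed addresses early.

    Wraps the BlueZ l2ping tool used in the guide (-c sets the number of
    pings). Needs root and a working adapter, so it is only a sketch here.
    """
    if not is_bd_addr(addr):
        raise ValueError(f"not a BD address: {addr!r}")
    subprocess.run(["l2ping", "-c", str(count), addr], check=True)

print(is_bd_addr("00:0A:0B:0C:0D:0E"))  # True
print(is_bd_addr("hello"))              # False
```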
Let's say I have a View that is bound to ViewModel A which has an observable collection Customers. An advantage of this MVVM pattern is that I can also bind the View to ViewModel B which fills it with different data. But what if in my View I use Converters to display my customers? E.g. I have a "ContractToCustomerConverter" that accepts a Contract and returns the appropriate Customer to be displayed. The problem with this is that the converter exists outside the MVVM pattern and thus doesn't know that my ViewModel has another source for customers. • is there a way for the View to pass the ViewModel into the Converter so that it participates in the decoupling that the MVVM pattern provides? • is there a way for me to somehow include the Converter in my ViewModel so that the converter uses the current dependencies which the ViewModel has available? • or are converters just glorified code-behind and thus not used in the MVVM pattern, so if you are using MVVM then you just create your own "converters" (methods on your ViewModel class) which return things like Image objects, Visibility objects, FlowDocuments, etc. to be used on the view, instead of using converters at all? (I came upon these questions after seeing the use of Converters in the WPF demo application that comes with the MVVM Template Toolkit download, see the "Messenger Sample" after unpacking it.) 5 Answers I usually don't use converters at all in MVVM, except for pure UI tasks (like BooleanToVisibilityConverter for instance). IMHO you should rather declare a Customer property of type CustomerViewModel in your ContractViewModel, rather than use a ContractToCustomerConverter. In this conversation there is a comment that agrees with Kent's position, not to use Converters at all, interesting: A ViewModel is basically a value converter on steroids.
It takes "raw" data and converts it into something presentation-friendly, and vice-versa. If you ever find yourself binding an element's property to a ViewModel's property, and you're using a value converter, stop! Why not just create a property on the ViewModel that exposes the "formatted" data, and then drop the value converter altogether? And in this conversation: The only place I can see a use for value converters in an MVVM architecture is cross-element bindings. If I'm binding the Visibility of a panel to the IsChecked of a CheckBox, then I will need to use the BooleanToVisibilityConverter. Converters should rarely be used with MVVM. In fact, I strive not to use them at all. The VM should be doing everything the view needs to get its job done. If the view needs a Customer based on a Contract, there should be a Customer property on the VM that is updated by VM logic whenever the Contract changes. I dispute that claim. In my experience, views are not shared across different VM types, and nor is that a goal of MVVM. OK I see your point that views should not be shared across different VMs, but a ViewModel should be able to be shared by different Views, hence the advantage of testability of MVVM, right? You should be able to hook up a mock view and mock model to the ViewModel to make sure that all data combinations that it receives from the mock model produce the correct property values that get exposed to the view. Would you agree? –  Edward Tanguay Jun 17 '09 at 15:07 For those effectively saying no "non-trivial converters" in the view, how do you handle the following? Let's say that I have a Model of climate sensors that represents time series of readings from various instruments (barometer, hygrometer, thermometer, etc.) at a given location. Let's say that my View Model exposes an observable collection of the sensors from my Model.
I have a View containing a WPF Toolkit DataGrid that binds to the View Model with the ItemsSource property set to the observable collection of sensors. How do I represent the view of each instrument for a given sensor? By displaying a small graph (think Edward Tufte sparkline here) that is generated by converting the time series to an image source using a converter (TimeSeriesToSparklineConverter). Here is how I think of MVVM: The Model exposes data to View Models. The View Model exposes behavior, Model data and state to the View. Views do the job of representing Model data visually and providing an interface to behaviors consistent with the View Model state. Thusly, I don't believe that the sparkline images go in the Model (the Model is data, not a particular visual representation of it). Nor do I believe that the sparkline images go in the View Model (what if my View wants to represent the data differently, say as a grid row just showing min, max, average, standard deviation etc. of the series?). Thus, it seems to me that the View should handle the job of transforming the data into the desired representation. So if I want to expose the behaviors, Model data and given state for a certain View Model in a command-line interface instead of a WPF GUI, I don't want my Model nor my View Model containing images. Is this wrong? Are we to have a SensorCollectionGUIViewModel and a SensorCollectionCommandLineViewModel? That seems wrong to me: I think of the View Model as an abstract representation of the view, not concrete and tied to a particular technology as these names suggest they are. That's where I am in my continually evolving understanding of MVVM. So for those saying not to use converters, what are you doing here? I see the problem you describe like this: With a value converter, you would make a ClimateSensorToSparklineGraphConverter which takes a collection of climate sensors and outputs an image.
For something like creating a bitmap image, you aren't going to be doing this with a DataTemplate and a collection of ViewModels containing ViewModels, at some point you need C# code to create the image. The problem comes when in the converter you also access e.g. a Users collection to determine what the current user is allowed to see. This would break MVVM since the ViewModel should have users injected. –  Edward Tanguay Jun 18 '09 at 7:51 add comment I'll add my 2 cents to this discussion. I do use converters, where it makes sense. Explanation: There are cases where you need to represent 1 value in Model in more ways in the UI. I expose this value through 1 type. The other is type is handled through converter. If you were to expose 1 value through 2 properties in VM, you would need to manually handle update notifications. For example I have a model with 2 ints: TotalCount, DoneCount. Now I want both this values to be displayed in TextBlocks and additionally I want to display done percentage. I solve this using DivisionConverter multi converter which takes 2 previously mentioned ints. If I were to have special PercentDone in VM, I would need to update this property whenever DoneCount is updated. share|improve this answer You basically update the PercentDone property by declaring the converter and having it fire when your binding throws a propertychanged, only it is represented by a class with a function, rather than a property... So I don't think this is an actual good use case. I think pure UI-to-UI stuff warrants converters. –  Joris Jul 5 '11 at 0:32 add comment Your Answer
Q: While looking for something else, quite by coincidence I stumbled upon a few comments about how diabolical case class inheritance is. There was this thing called ProductN, wretches and kings, elves and wizards, and how some kind of a very desirable property is lost with case class inheritance. So what is so wrong with case class inheritance?

A (accepted): One word: equality.

case classes come with a supplied implementation of equals and hashCode. The equivalence relation known as equals works like this (i.e. must have the following properties):

1. For all x: x equals x is true (reflexive)
2. For x, y, z: if x equals y and y equals z, then x equals z (transitive)
3. For x, y: if x equals y, then y equals x (symmetric)

As soon as you allow for equality within an inheritance hierarchy you can break 2 and 3. This is trivially demonstrated by the following example:

```scala
case class Point(x: Int, y: Int)
case class ColoredPoint(x: Int, y: Int, c: Color) extends Point(x, y)
```

Then we have:

```scala
Point(0, 0) equals ColoredPoint(0, 0, RED)
```

But not:

```scala
ColoredPoint(0, 0, RED) equals Point(0, 0)
```

You might argue that all class hierarchies may have this problem, and this is true. But case classes exist specifically to simplify equality from a developer's perspective (among other reasons), so having them behave non-intuitively would be the definition of an own goal!

There were other reasons as well; notably the fact that copy did not work as expected, and interaction with the pattern matcher.

Comment (ashy_32bit, Jun 22 '12): And what about a little elaboration? :)
Comment (Luigi Plinge, Jun 22 '12): It seems like such an asymmetric equivalence would be a useful thing in the OO paradigm, in the same way that at the type level a ColoredPoint is-a Point but not vice versa. Might have to call it something other than equals though... maybe subEquals?
Comment (Dan Burton, Jun 22 '12): @LuigiPlinge perhaps canReplace, supersedes, specifies, or overrides for the reverse relationship? Anything to indicate the >=-ness (or >: if you like) of it. It seems much easier for me to name it in terms of >= rather than <=.
Comment (Luigi Plinge, Jun 22 '12): On second thoughts, such a thing would be tricky (impossible?) to implement due to the possibility of upcasting, so maybe it's not such a great idea.
Comment (ruslan, Aug 3 '13): This example is not compilable in 2.10.2.
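The symmetry break described in the accepted answer is not Scala-specific, and since the Scala snippet above no longer compiles, here is a runnable Java sketch of the same hazard. The classes are hypothetical; the instanceof-based equals is the naive pattern that generated per-class equality corresponds to:

```java
// Point ignores color; ColoredPoint requires it. This is what makes
// equals asymmetric across the hierarchy.
class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
    @Override public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return p.x == x && p.y == y;
    }
    @Override public int hashCode() { return 31 * x + y; }
}

class ColoredPoint extends Point {
    final String color;
    ColoredPoint(int x, int y, String color) { super(x, y); this.color = color; }
    @Override public boolean equals(Object o) {
        if (!(o instanceof ColoredPoint)) return false;
        ColoredPoint p = (ColoredPoint) o;
        return super.equals(o) && p.color.equals(color);
    }
}

public class EqualsSymmetry {
    public static void main(String[] args) {
        Point p = new Point(0, 0);
        ColoredPoint cp = new ColoredPoint(0, 0, "RED");
        System.out.println(p.equals(cp));  // true  (Point ignores color)
        System.out.println(cp.equals(p));  // false (p is not a ColoredPoint)
    }
}
```

Scala's generated case class equality sidesteps this only when the hierarchy is flat, which is one reason case-to-case inheritance was eventually disallowed.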
Q: The syscall.Mmap() call takes a length argument of type int, which is only good for 2GB. How do I mmap a bigger file then?

Note: 64-bit system, so address space is not a problem.

A (accepted): Look in http://golang.org/src/pkg/syscall/syscall_unix.go at the Mmap method on mmapper. You should be able to copy that code and adapt it as required. Of course you won't be able to mmap to a []byte, since slice lengths are defined to be "int" (which is 32-bit everywhere at the moment). You could mmap to a larger element type (e.g. []int32), or just muck with the pointer to the memory, but it won't be a drop-in replacement to syscall.Mmap.

Comment (Rio, Jul 24 '12): Even if I map to []int64, it will only handle max 16GB files, right? Still quite limited on modern 64-bit platforms.
Comment (dsymonds, Jul 26 '12): That's 16GB at a time. You can rebuild the slice header as often as you want without having to re-mmap.
Comment (Matt, Jul 27 '12): That's disappointing. Slices are supposed to be a better replacement for pointer arithmetic, so they eliminated pointer arithmetic (except with the "unsafe" package), but slices are hamstrung! I would think that the slice index type should be uintptr with automatic conversion to/from other integer types.
Comment (dsymonds, Sep 28 '12): go1.1 will have 64-bit ints on 64-bit platforms, which will largely solve this.
Comment (kristianp, Jan 12 '13): Note that this is unix only. Go in Windows has a different API for memory mapped files.
Q: This is code for a linked list in the C programming language.

```c
#include <stdio.h>  /* For printf */
#include <stdlib.h> /* For malloc */

typedef struct node {
    int data;
    struct node *next; /* Pointer to next element in list */
} LLIST;

LLIST *list_add(LLIST **p, int i);
void list_remove(LLIST **p);
LLIST **list_search(LLIST **n, int i);
void list_print(LLIST *n);
```

The code is not completed, but I think it's enough for my question. Here at the end of struct node "LLIST" is used, and it's also used as a return type in the prototyping of the function list_add. What is going on?

A (accepted): typedef creates a new "type" in your program, so the return value and types of parameters of those functions are just your struct. It is just shorthand for using struct node for the type. If you were to create a new node, you could do it like this (using the type):

```c
LLIST *node = malloc(sizeof(LLIST));
node->data = 4;
node->next = someOtherItem;
list_add(node, 1);
```

Also, with the function prototypes in your question, you don't really need the double pointers; since the data in your struct is just an int, you could do something like

```c
LLIST *list_add(int data, int position);
```

then the list_add function would handle the allocation, copy the int into the struct and add it to the linked list. Putting it in at a certain position is as simple as changing the next pointer in the node before it to the address of the newly allocated node, and the next pointer in the new node to point at the next one (the one the node before that one was originally pointing at).

Keep in mind that (given the rest of your function prototypes) you will have to keep track of pointers to every node you create in order to delete them all. I'm not sure I understand how the search function will work. This whole thing could be implemented a lot better. You shouldn't have to provide the location of a node when you create it (what if you specify a higher number than there are nodes?), etc.

A: That's a typedef. It's actually doing two things at once. First, it defines a structure:

```c
struct node {
    int data;
    struct node *next;
};
```

And then does a typedef:

```c
typedef struct node LLIST;
```

That means LLIST is a type, just like int or FILE or char, that is a shorthand for struct node, your linked-list node structure. It's not necessary (you could replace LLIST with struct node in all of those spots), but it makes it a bit easier to read, and helps hide the implementation from pesky end-users.

A: LLIST is just another type name for the struct that has been created. In general, the following format will create a type "NAME" that is a "struct x":

```c
typedef struct x { ... } NAME;
```

A: C requires that you reference structs with a "struct" prefix, so it's common to introduce a typedef for a less verbose mention. That is, the declaration of your struct has two parts, and can be rewritten as such:

```c
struct node {
    int data;
    struct node *next; /* pointer to next element in list */
};

typedef struct node LLIST;
```

So, LLIST is just another name for struct node (thanks Chris Lutz).

A: LLIST* is a pointer to a structure defined by the LLIST struct. You should do

```c
LLIST *myList = malloc(sizeof(LLIST) * number_of_elements);
```

to have some memory allocated for this list. Adding and removing items requires you to reallocate the memory using realloc. I've already written some piece of code for lists (made with arrays). I might post the code as soon as I'm home, which is currently not the case.
Q: I have a windows service that can create an executable in the user's windows session, via calling the "CreateProcessAsUser" function. This works fine as long as there is a windows session already there. In the case that there isn't one already, I'd like to be able to create one programmatically. Is this possible? Can't seem to find a function to do it.

A: You cannot create a new session from a service. Sessions are managed by the OS. New ones get created when users log on interactively.

Comment (Remy Lebeau, Dec 21 '12): You can use the WTS API, such as WTSGetActiveSessionId() and WTSEnumerateSessions(), to determine if a user session exists before calling CreateProcessAsUser().
Comment (Harry Johnston, Dec 23 '12): For the record: Windows Server 2012 supports the Remote Desktop Protocol Provider API, which you could use to create a session programmatically. Also, at least in theory, you could write your own Remote Desktop client (or modify one of the open source clients) to create a new session on any supported version of Windows - provided that Remote Desktop is enabled, of course.
Comment (Robert, Dec 27 '12): @RemyLebeau this is what I do already. I'm trying to avoid problems where the session has been unexpectedly closed.
Comment (Robert, Dec 27 '12): @HarryJohnston thanks, this sounds like a solution, though I have to admit I was hoping for something simpler.
Comment (Harry Johnston, Dec 27 '12): You and me both. If you do manage to get a working solution, and are able to share it, could you let me know? (My profile includes my email address.)

A (self-answer): This isn't quite the solution for the question I asked, but it was the solution that helped achieve what I was trying to achieve by asking this question, if you see what I mean. Rather than having a windows service that creates a server session, you can configure windows to automatically log on at boot time. This still means someone could accidentally log off, but it cures the main reason for sessions disappearing: the server being rebooted.

Use the following steps to activate auto-logon:

2. Type regedit and hit enter to open the Registry Editor
3. Then browse to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\
4. Set AutoAdminLogon = 1 (create it if it doesn't exist; it's a string value)
5. Set DefaultUserName = your username (create it if it doesn't exist; it's a string value)
6. Set DefaultPassword = your password (create it if it doesn't exist; it's a string value)

Instructions were taken from this post: http://channel9.msdn.com/Blogs/coolstuff/Tip-Auto-Login-Your-Windows-7-User-Account

A: What about the LogonUser function?

Comment (Harry Johnston, Dec 23 '12): Unfortunately "session" is an overloaded term in Windows. In this context, I think the OP is talking about a Remote Desktop (aka Windows Terminal Services) session rather than a logon session. Robert, could you please clarify?
Comment (Robert, Dec 27 '12): I'm not sure I understand the difference here. It doesn't need to be a remote desktop session per se, but it does need to have a sessionid > 0 so that it can execute programs with a GUI.
Comment (Harry Johnston, Dec 27 '12): Remote Desktop Services is the part of Windows that allows user switching and session zero isolation, as well as actual Remote Desktop connections. You're definitely talking about Remote Desktop session IDs here, so LogonUser won't solve your problem.
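The registry steps in the self-answer can also be captured as a .reg file. This is a sketch of the same values; YOUR_USERNAME and YOUR_PASSWORD are placeholders, and note that DefaultPassword stores the password in plaintext in the registry, with the obvious security implications:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
"AutoAdminLogon"="1"
"DefaultUserName"="YOUR_USERNAME"
"DefaultPassword"="YOUR_PASSWORD"
```

Importing the file (double-click, or `reg import autologon.reg` from an elevated prompt) sets all three values in one step.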
Q: I have a database context with multiple DbSets. I have an Edit (HttpGet) and an Edit (HttpPost) method. I want to save specific data in specific sets. How can I specify which DbSet I want to use? BTW, the view for the Edit method uses several models. My database context looks like this:

```csharp
public class mycontext : DbContext
{
    public DbSet<Table1> table1 { get; set; }
    public DbSet<Table2> table2 { get; set; }
    public DbSet<Table3> table3 { get; set; }
}
```

When I define

```csharp
private mycontext data = new mycontext();
```

the only option is data.SaveChanges(). I want to do something like data.table1.SaveChanges() and pass in the data I want to save.

Comment (marc_s, Dec 21 '12): The whole point of the DbContext in Entity Framework is that it keeps track of the changes itself, and saves out whatever is needed, all in one transaction, when you call .SaveChanges() on it. You cannot just save half the changes, on one table: all the changes you've made to the context will be saved, all at once.
Comment (jpo, Dec 21 '12): Thanks. But say I have a field whose value depends on the button clicked to submit the form and I need to update that field in the table. How will I do that please?

A: If you only want to update data in Table1, modify only data in Table1, and then call .SaveChanges().
Q: I've been trying to convert a byte array to its original SecretKey, but I've no more ideas left. The most promising attempt was this one:

```java
byte[] encodedKey = Base64.decode(stringKey);
SecretKey originalKey = SecretKeySpec(encodedKey, 0, encodedKey.length, "AES")
```

found here: Converting Secret Key into a String and Vice Versa.

I'm using the import javax.crypto.spec.SecretKeySpec, so the constructor for SecretKeySpec should be used correctly, at least referring to http://docs.oracle.com/javase/1.5.0/docs/api/javax/crypto/spec/SecretKeySpec.html. Nonetheless I always get "The Method SecretKeySpec is undefined for ... [Class Name]", which I just don't get. I'm guessing it's just some minor mistake, but I just can't figure it out. Can someone please help me out here?

Comment (Jeff Gohlke, Jan 7 '13): Do you have a semi-colon at the end of the constructor? :)

A (accepted): You need to use the new keyword to call the constructor and create the object.

```java
SecretKey originalKey = new SecretKeySpec(encodedKey, 0, encodedKey.length, "AES");
```

When you try to call it without new, the compiler thinks it might be a method you've defined inside that class, hence your error message.

Comment (Horstus Horax, Jan 7 '13): OMG, thanks, definitely been sitting in front of the screen too long ...
Comment (Jeff Gohlke, Jan 7 '13): :) Happens to all of us.
Comment (Horstus Horax, Jan 7 '13): I know, still - it's good to know not to be alone ;-) thanks again!
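For reference, here is a self-contained round trip built on the accepted answer. This sketch uses java.util.Base64 from Java 8+ rather than whatever Base64 helper the question had in scope:

```java
import java.util.Base64;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

public class KeyRoundTrip {
    public static void main(String[] args) throws Exception {
        // Generate an AES key and serialize it to a Base64 string.
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        String stringKey = Base64.getEncoder().encodeToString(key.getEncoded());

        // Rebuild the key from the string; note the `new` keyword.
        byte[] encodedKey = Base64.getDecoder().decode(stringKey);
        SecretKey originalKey = new SecretKeySpec(encodedKey, 0, encodedKey.length, "AES");

        // SecretKeySpec.equals compares algorithm and key material.
        System.out.println(originalKey.equals(key)); // true
    }
}
```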
Q: Can anyone tell me why this code is not working?

```bash
echo "Welcome to automatic code check"
echo "Hello $USER, Welcome to CODE CHECK"
echo "The Date & Time is:"
read -p "Enter foldername in downloads : " Foldername
echo -e "Please enter a directory name to be created"
read DIRK
if [ ! -d $DIRK ]
then
    mkdir ~/Downloads/$DIRK
    echo -e "Directory $DIRK created"
    #read -p "enter file name:" filename
    find ~/Downloads/$Foldername -type f -iname '*.sce' -print0 | while IFS= read -r -d '' f; do
        cp -- "$f" ~/Downloads/$DIRK ;
    done
    FILES= "~/Downloads/$DIRK/*.sce"
    #read -p "enter filename where result should be saved : " Filename
    #touch ~/Downloads/$DIRK/$Filename
    for f in $FILES; do
        scilab -nb -nwni -ns -f "$f" &
        let count+=1
        #[[ $((count%NR_CPUS)) -eq 0 ]] && wait
    done >> a.txt
else
    echo -e "$DIRK directory already exists!"
    echo -e "1.Faild to create $DIRK directory, Directory already exists"
fi
```

Comment (Vicky, Feb 13 '13): Hi @Lavitha, welcome to SO. You are likely to get a better response to your question if you can put it more clearly - exactly what are you expecting to happen, and what is actually happening? Can you pin it down any more closely within the script you posted?
Comment (mzet, Feb 13 '13): Agree, please be more specific. As a hint, add a `set -x` line somewhere at the beginning of your script to see how each line of your script is executed - maybe you will find the problem by yourself! Good luck.
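Two of the script's building blocks are worth isolating: the NUL-delimited find/read copy loop (which is correct) and the `FILES= "~/dir/*.sce"` assignment (which is broken twice: the space after `=` runs the string as a command, and a quoted `~` and `*` never expand). Here is a runnable sketch with hypothetical file names, and with scilab replaced by a harmless echo so the loop itself can be exercised:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Build a throwaway source tree: two .sce files and one decoy.
src=$(mktemp -d); dst=$(mktemp -d)
printf 'disp(1)\n' > "$src/a.sce"
printf 'disp(2)\n' > "$src/b.SCE"      # -iname matches this too
printf 'x\n'       > "$src/notes.txt"

# NUL-delimited find/read loop: safe for names with spaces/newlines.
find "$src" -type f -iname '*.sce' -print0 |
while IFS= read -r -d '' f; do
    cp -- "$f" "$dst"
done

# Unlike FILES= "~/dir/*.sce", an unquoted glob actually expands.
count=0
for f in "$dst"/*.[sS][cC][eE]; do
    echo "would run: scilab -nb -nwni -ns -f $f"
    count=$((count+1))
done
echo "copied $count files"
```

As the comments suggest, running the original script under `bash -x` (or with `set -x` at the top) shows exactly where the `FILES=` line goes wrong.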
Q: I have looked through a few posts on this but none really solve the issues I'm having. Quite a few suggest:

```objc
self.tableView.scrollEnabled = YES;
```

However, this doesn't enable me to scroll. The cells are updated dynamically, so reloadData is called a few times throughout the "test" I am doing. Once finished, the cells are created based on the length of an array, and at the moment they go off the bottom of the screen. I was hoping scrolling would be automatic, but no such luck. If some code would be useful just let me know, and any advice would be great.

Comment (Bryan Luby, Feb 16 '13): Please post any relevant sample code.
Comment (Grymjack, Feb 16 '13): Are you able to use the table otherwise? aka: select items from within the table.
Q: I'm currently working on a UDP socket application and I need to build in support so that IPv4 and IPv6 connections can send packets to a server. I was hoping that someone could help me out and point me in the right direction; the majority of the documentation that I found was not complete. It'd also be helpful if you could point out any differences between Winsock and BSD sockets. Thanks in advance!

A (accepted): The best approach is to create an IPv6 server socket that can also accept IPv4 connections. To do so, create a regular IPv6 socket, turn off the socket option IPV6_V6ONLY, bind it to the "any" address, and start receiving. IPv4 addresses will be presented as IPv6 addresses, in the IPv4-mapped format.

The major difference across systems is whether IPV6_V6ONLY is a) available, and b) turned on or off by default. It is turned off by default on Linux (i.e. allowing dual-stack sockets without setsockopt), and is turned on on most other systems. In addition, the IPv6 stack on Windows XP doesn't support that option. In these cases, you will need to create two separate server sockets, and place them into select or into multiple threads.

Comment (Charles, Oct 24 '09): Thanks for this information, exactly what I was looking for.
Comment (bortzmeyer, Sep 18 '10): Saying that IPV6_V6ONLY is off by default on Linux is wrong: it depends on the operating system, not just on the kernel. For instance, on Debian GNU/Linux, it recently switched to on by default.
Comment (Per Johansson, Nov 13 '11): OS X also has it off by default, but the best thing is to always set it explicitly. The local sysadmin might've changed it after all.
Comment (tdistler, Oct 2 '12): The default on Windows is enabled (just implemented this on Win7).
Comment (Martin v. Löwis, Dec 27 '12): @Andrius Bentkus: as Windows XP demonstrates, it is well possible to have a system where you can simultaneously use IPv4 and IPv6, yet IPV6_V6ONLY is not available. Whether or not this is "dual stacking" depends on your definition of that term.

A: The socket API is governed by IETF RFCs and should be the same on all platforms, including Windows with respect to IPv6. For IPv4/IPv6 applications it's ALL about getaddrinfo() and getnameinfo(). getaddrinfo is a genius: it looks at DNS, port names and capabilities of the client to resolve the eternal question of whether you can use IPv4, IPv6 or both to reach a particular destination. Or, if you're going the dual-stack route and want it to return IPv4-mapped IPv6 addresses, it will do that too. It provides a direct sockaddr* structure that can be plugged into bind(), recvfrom(), sendto() and the address family for socket()... In many cases this means no messy sockaddr_in(6) structures to fill out and deal with.

For UDP implementations I would be careful about setting dual-stack sockets or, more generally, binding to all interfaces (INADDR_ANY). The classic issue is that when addresses are not locked down (see bind()) to specific interfaces and the system has multiple interfaces, responses to requests may transit from different addresses on computers with multiple addresses, based on the whims of the OS routing table, confusing application protocols, especially any systems with authentication requirements.

For UDP implementations where this is not a problem, or for TCP, dual-stack sockets can save a lot of time when IPv*-enabling your system. One must be careful not to rely entirely on dual stack where it's not absolutely necessary, as there is no shortage of reasonable platforms (old Linux, BSD, Windows 2003) deployed with IPv6 stacks not capable of dual-stack sockets.

A: The RFCs don't really specify the existence of the IPV6_V6ONLY socket option, but, if it is absent, the RFCs are pretty clear that the implementation should be as though that option is FALSE. Where the option is present, I would argue that it should default FALSE, but, for reasons passing understanding, BSD and Windows implementations default to TRUE. There is a bizarre claim that this is a security concern, because an unknowing IPv6 programmer could bind thinking they were binding only to IN6ADDR_ANY for only IPv6 and accidentally accept an IPv4 connection, causing a security problem. I think this is both far-fetched and absurd, in addition to a surprise to anyone expecting an RFC-compliant implementation. In the case of Windows, non-compliance won't usually be a surprise. In the case of BSD, this is unfortunate at best.

Comment (bortzmeyer, Sep 18 '10): The standard on the IPv6 API, RFC 3493, describes IPV6_V6ONLY in its section 5.3 if you want to read all the details.

A: I've been playing with this under Windows and it actually does appear to be a security issue there: if you bind to the loopback address then the IPv6 socket is correctly bound to [::1], but the mapped IPv4 socket is bound to INADDR_ANY, so your (supposedly) safely local-only app is actually exposed to the world.
Q: I have an ASP.NET MVC Controller with a 'MapColumns' action along with a corresponding ViewModel and View. I'm using the DefaultModelBinder to bind a number of drop-down lists to a Dictionary in the ViewModel. The view model also contains an IList field for both source and destination columns, which are used to render the view.

My question is what to do when validation fails on the Post call to the MapColumns action. Currently the MapColumns view is returned with the ViewModel resulting from the default binding. This contains the Dictionary values but not the two lists used to render the page. What is the best way to re-provide these to the view? I can set them explicitly after failed validation, but if obtaining these values (via GetSourceColumns() and GetDestinationColumns() in the example) carries any overhead, this doesn't seem ideal. What I am looking for is a way to retain these lists when they are not bound to the model from the view.

Here is some code to illustrate:

```csharp
public class TestViewModel
{
    public Dictionary<string, string> ColumnMappings { get; set; }
    public List<string> SourceColumns;
    public List<string> DestinationColumns;
}

public class TestController : Controller
{
    public ActionResult MapColumns()
    {
        var model = new TestViewModel();
        model.SourceColumns = GetSourceColumns();
        model.DestinationColumns = GetDestinationColumns();
        return View(model);
    }

    public ActionResult MapColumns(TestViewModel model)
    {
        if (Validate(model))
        {
            // Do something with model.ColumnMappings
        }
        else
        {
            // Here model.SourceColumns and model.DestinationColumns are empty
            return View(model);
        }
    }
}
```

The relevant section of MapColumns.aspx:

```aspx
<% int columnCount = 0;
   foreach(string column in Model.targetColumns) { %>
    <input type="hidden" name="ColumnMappings[<%= columnCount %>].Value" value="<%= column %>" />
    <%= Html.DropDownList("ColumnMappings[" + columnCount + "].Key", Model.DestinationColumns.AsSelectItemList()) %>
<% } %>
```

A (accepted): You'll have to rebind your model if validation fails. In your else statement, just add model.SourceColumns = GetSourceColumns(); and model.DestinationColumns = GetDestinationColumns(); before returning the view again.

Comment (TonE, Mar 18 '10): Thanks for the reply. I was really wondering if there was a clever way I could avoid making a second call to GetSourceColumns()/GetDestinationColumns(). As there doesn't appear to be, I will just resort to repopulating these fields by calling the functions again, and look into improving the efficiency of the functions if this proves a problem later.
Comment (Victor, Mar 19 '10): That's exactly what I've done; just work on improving the efficiency of these calls.
Q: Generally, when using the conditional operator, here's the syntax:

```csharp
int x = 6;
int y = x == 6 ? 5 : 9;
```

Nothing fancy, pretty straightforward. Now, let's try to use this when assigning a lambda to a Func type. Let me explain:

```csharp
Func<Order, bool> predicate = id == null
    ? p => p.EmployeeID == null
    : p => p.EmployeeID == id;
```

That's the same syntax, and should work? Right? For some reason that doesn't. The compiler gives this nice cryptic message:

Error 1 Type of conditional expression cannot be determined because there is no implicit conversion between 'lambda expression' and 'lambda expression'

I then went ahead and changed the syntax and this way it did work:

```csharp
Func<Order, bool> predicate = id == null
    ? predicate = p => p.EmployeeID == null
    : predicate = p => p.EmployeeID == id;
```

I'm just curious as to why it doesn't work the first way?

(Side note: I ended up not needing this code, as I found out that when comparing an int value against null, you just use object.Equals.)

A (accepted): You can convert a lambda expression to a particular target delegate type, but in order to determine the type of the conditional expression, the compiler needs to know the type of each of the second and third operands. While they're both just "lambda expression", there's no conversion from one to the other, so the compiler can't do anything useful.

I wouldn't suggest using an assignment, however; a cast is more obvious:

```csharp
Func<Order, bool> predicate = id == null
    ? (Func<Order, bool>) (p => p.EmployeeID == null)
    : p => p.EmployeeID == id;
```

Note that you only need to provide it for one operand, so the compiler can perform the conversion from the other lambda expression.

Comment (BFree, Nov 4 '08): Very very interesting. That actually makes a lot of sense. Thanks.

A: The C# compiler cannot infer the type of the created lambda expression because it processes the ternary first and then the assignment. You could also do:

```csharp
Func<Order, bool> predicate = id == null
    ? new Func<Order, bool>(p => p.EmployeeID == null)
    : new Func<Order, bool>(p => p.EmployeeID == id);
```

but that just sucks. You could also try:

```csharp
Func<Order, bool> predicate = id == null
    ? (Order p) => p.EmployeeID == null
    : (Order p) => p.EmployeeID == id;
```

Comment (Jon Skeet, Nov 4 '08): The latter doesn't work, because the compiler doesn't know whether to convert to a delegate or an expression tree (or to, say, a Func<Order, object>, which would be okay too).
Question: I'm fairly new to Rails, working on a Rails 3 app with a Profile model for users. In the Profile model I'd like to have a "name" entry, and I'd like to be able to access logical variations of it using simple syntax like:

    user.profile.name = "John Doe"
    user.profile.name.first = "John"
    user.profile.name.last = "Doe"

Is this possible, or do I need to stick with "first_name" and "last_name" as my fields in this model?

Answer (accepted): It's possible, but I wouldn't recommend it. I would just stick with first_name and last_name if I were you, and add a method fullname:

    def fullname
      "#{first_name} #{last_name}"
    end

If you really do want user.profile.name, you could create a Name model like this:

    class Name < ActiveRecord::Base
      belongs_to :profile

      def to_s
        "#{first} #{last}"
      end
    end

This allows you to do:

    user.profile.name.to_s  # John Doe
    user.profile.name.first # John
    user.profile.name.last  # Doe

Comments:
Ok, advice noted. For the sake of me learning Ruby, would you mind showing how the "possible but not recommended" approach would work? –  Andrew Oct 14 '10 at 5:17
Sure, just a sec... –  Mischa Oct 14 '10 at 5:21
Agreed. We just spent almost six months splitting a single name into multiple fields because some idiot "designed" the database using a single field for that name. Worse, our customer service reps would also put company names into that field because there was no business name field! I wish I knew who designed that database so I could smack 'em on the nose with a newspaper and say, "No! Bad programmer! Bad!" –  Mike Bethany Oct 14 '10 at 5:23
Cool, thanks for all the feedback and ideas, all. I'll stick with first_name and last_name and use the fullname method to give me the options I need :) –  Andrew Oct 14 '10 at 5:53

Answer: The other answers are all correct, in so far as they ignore the #composed_of aggregator:

    class Name
      attr_reader :first, :last

      def initialize(first_name, last_name)
        @first, @last = first_name, last_name
      end

      def full_name
        [@first, @last].reject(&:blank?).join(" ")
      end

      def to_s
        full_name
      end
    end

    class Profile < ActiveRecord::Base
      composed_of :name, :mapping => %w(first_name last_name)
    end

    # Rails console prompt
    > profile = Profile.new(:name => Name.new("Francois", "Beausoleil"))
    > profile.save!
    > profile = Profile.find_by_first_name("Francois")
    > profile.name.first

As noted on the #composed_of page, you must assign a new instance of the aggregator: you cannot just replace values within the aggregator. The aggregator class acts as a Value, just like a simple string or number. I also sent a response yesterday with a very similar answer: How best to associate an Address to multiple models in rails?

Answer: FYI (assume you have a field fullname, i.e. your profile.fullname = "John Doe"):

    class Profile
      def name
        # @splited_name caches the result so there is no need to split the fullname every time
        @splited_name ||= fullname.split
      end
    end

Now you could do something like this:

    user.profile.fullname    # "John Doe"
    user.profile.name.first  # "John"
    user.profile.name.last   # "Doe"

Note the following case:

    user.profile.fullname = "John Ronald Doe"
    user.profile.name.first   # "John"
    user.profile.name.second  # "Ronald"
    user.profile.name.last    # "Doe"

I agree with captaintokyo. You won't miss out the middle names. Also, this method assumes no Chinese or Japanese names are input, because those names normally contain no spaces between first name and last name.

Answer: As Capt. Tokyo said, that's a horrible idea, but here's how you would do it:

    rails g model User full_name:hash

Then you would store data in it like so:

    user = User.new
    user.full_name = {:first => "Forrest", :last => "Gump"}

Now your problems begin. To search the field requires both names, and you can't do a partial search like searching for all people with the same last name. Worst of all, you can store anything in the field! So imagine another programmer mistypes one of the field names, so for a week you have {:fist => "Name", :last => "Last"} being inserted into the database! Noooooooooooooooooo!

If you used proper field names you could do this:

    user = User.new(:first_name => "First", :last_name => "Last")

Easy to read and no need for hashes. Now that you know how to do it the wrong way, do it the right way. :)

Comments:
I'll go ahead and forget how to do this. I had no idea this was even possible. –  Jason Noble Oct 14 '10 at 11:28
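The composed_of answer above is really a language-independent value-object pattern: a small immutable object that groups the name parts and knows how to render itself. As an illustration only (this is not Rails code; the class and field names below are my own), the same idea in Python looks like this:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Name:
    first: str
    last: str

    @property
    def full_name(self) -> str:
        # Join the non-empty parts, mirroring reject(&:blank?) in the Ruby version
        return " ".join(part for part in (self.first, self.last) if part)

    def __str__(self) -> str:
        return self.full_name


@dataclass
class Profile:
    name: Name


profile = Profile(name=Name("John", "Doe"))
print(profile.name.first)  # John
print(profile.name.last)   # Doe
print(str(profile.name))   # John Doe
```

As with composed_of, the value object is replaced wholesale rather than mutated, which keeps it as interchangeable as a plain string.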
Question: How do I determine whether or not two lines intersect, and if they do, at what x,y point?

Comments:
It might help to think of the edges of the rectangle as separate lines instead of the complete polygon. –  Ryan Graham Feb 18 '09 at 22:53
You might find some useful information in the question I previously asked about this very topic (but with GDI+) at stackoverflow.com/questions/153592/…. –  Robert S. Feb 19 '09 at 3:20
Do you want to know (A) where two line segments intersect, (B) whether or not two lines intersect, (C) whether or not two line segments intersect, or (D) where two lines intersect? Could you please make your title consistent with your question? –  moose Feb 23 '13 at 15:29

Answer: There's a nice approach to this problem that uses vector cross products. Define the 2-dimensional vector cross product v × w to be v_x w_y − v_y w_x (this is the magnitude of the 3-dimensional cross product).

Suppose the two line segments run from p to p + r and from q to q + s. Then any point on the first line is representable as p + t r (for a scalar parameter t) and any point on the second line as q + u s (for a scalar parameter u).

[Figure: two line segments intersecting]

The two lines intersect if we can find t and u such that:

    p + t r = q + u s

[Figure: formulae for the point of intersection]

Cross both sides with s, getting

    (p + t r) × s = (q + u s) × s

And since s × s = 0, this means

    t (r × s) = (q − p) × s

And therefore, solving for t:

    t = (q − p) × s / (r × s)

In the same way, we can solve for u:

    (p + t r) × r = (q + u s) × r
    u (s × r) = (p − q) × r
    u = (p − q) × r / (s × r)

To reduce the number of computation steps, it's convenient to rewrite this as follows (remembering that s × r = − r × s):

    u = (q − p) × r / (r × s)

Now there are five cases:

1. If r × s = 0 and (q − p) × r = 0, then the two lines are collinear. If in addition, either 0 ≤ (q − p) · r ≤ r · r or 0 ≤ (p − q) · s ≤ s · s, then the two lines are overlapping.
2. If r × s = 0 and (q − p) × r = 0, but neither 0 ≤ (q − p) · r ≤ r · r nor 0 ≤ (p − q) · s ≤ s · s, then the two lines are collinear but disjoint.
3. If r × s = 0 and (q − p) × r ≠ 0, then the two lines are parallel and non-intersecting.
4. If r × s ≠ 0 and 0 ≤ t ≤ 1 and 0 ≤ u ≤ 1, the two line segments meet at the point p + t r = q + u s.
5. Otherwise, the two line segments are not parallel but do not intersect.

(Credit: this method is the 2-dimensional specialization of the 3D line intersection algorithm from the article "Intersection of two lines in three-space" by Ronald Goldman, published in Graphics Gems, page 304. In three dimensions, the usual case is that the lines are skew (neither parallel nor intersecting), in which case the method gives the points of closest approach of the two lines.)

Comments:
This is essentially the same technique as mine, but I use the dot product instead of cross product. In this case, I believe the efficiency is approximately identical. –  Jason Cohen Mar 15 '09 at 17:00
Excellent solution. Thanks Gareth for your valuable answer. –  Wodzu Nov 29 '09 at 13:19
@myrkos: No. The first line segment runs "from p to p + r", so when it's represented in parametric terms as "p + tr" the segment corresponds to 0 ≤ t ≤ 1. Similarly for the other segment. –  Gareth Rees Jan 31 '12 at 17:48
For those interested, here is a simple C# implementation, taking PointF start and end coordinates for lines, that seems to be working: ideone.com/PnPJgb –  Matt Dec 17 '12 at 0:42
I put together a JavaScript implementation following @Matt. I made corrections for the errors pointed out by Tekito. –  pgkelley Aug 29 '13 at 4:10

Answer: FWIW, the following function (in C) both detects line intersections and determines the intersection point. It is based on an algorithm in Andre LeMothe's "Tricks of the Windows Game Programming Gurus". It's not dissimilar to some of the algorithms in other answers (e.g. Gareth's).
LeMothe then uses Cramer's Rule (don't ask me) to solve the equations themselves. I can attest that it works in my feeble asteroids clone, and seems to deal correctly with the edge cases described in other answers by Elemental, Dan and Wodzu. It's also probably faster than the code posted by KingNestor because it's all multiplication and division, no square roots! I guess there's some potential for divide by zero in there, though it hasn't been an issue in my case. Easy enough to modify to avoid the crash anyway.

    // Returns 1 if the lines intersect, otherwise 0. In addition, if the lines
    // intersect, the intersection point may be stored in the floats i_x and i_y.
    char get_line_intersection(float p0_x, float p0_y, float p1_x, float p1_y,
        float p2_x, float p2_y, float p3_x, float p3_y, float *i_x, float *i_y)
    {
        float s1_x, s1_y, s2_x, s2_y;
        s1_x = p1_x - p0_x;     s1_y = p1_y - p0_y;
        s2_x = p3_x - p2_x;     s2_y = p3_y - p2_y;

        float s, t;
        s = (-s1_y * (p0_x - p2_x) + s1_x * (p0_y - p2_y)) / (-s2_x * s1_y + s1_x * s2_y);
        t = ( s2_x * (p0_y - p2_y) - s2_y * (p0_x - p2_x)) / (-s2_x * s1_y + s1_x * s2_y);

        if (s >= 0 && s <= 1 && t >= 0 && t <= 1)
        {
            // Collision detected
            if (i_x != NULL)
                *i_x = p0_x + (t * s1_x);
            if (i_y != NULL)
                *i_y = p0_y + (t * s1_y);
            return 1;
        }

        return 0; // No collision
    }

BTW, I must say that in LeMothe's book, though he apparently gets the algorithm right, the concrete example he shows plugs in the wrong numbers and does calculations wrong. For example:

    (4 * (4 - 1) + 12 * (7 - 1)) / (17 * 4 + 12 * 10) = 84 / 188 = 0.44

That confused me for hours. :(

Comments:
Thanks Gavin - the solution you mention is the one that works best for me too. –  Shunyata Kharg Jan 4 '10 at 12:12
JavaScript port:
    function getLineIntersection(p0_x, p0_y, p1_x, p1_y, p2_x, p2_y, p3_x, p3_y) {
        var s1_x, s1_y, s2_x, s2_y;
        s1_x = p1_x - p0_x;  s1_y = p1_y - p0_y;
        s2_x = p3_x - p2_x;  s2_y = p3_y - p2_y;
        var s, t;
        s = (-s1_y * (p0_x - p2_x) + s1_x * (p0_y - p2_y)) / (-s2_x * s1_y + s1_x * s2_y);
        t = ( s2_x * (p0_y - p2_y) - s2_y * (p0_x - p2_x)) / (-s2_x * s1_y + s1_x * s2_y);
        if (s >= 0 && s <= 1 && t >= 0 && t <= 1) {
            // Collision detected
            var intX = p0_x + (t * s1_x);
            var intY = p0_y + (t * s1_y);
            return [intX, intY];
        }
        return null; // No collision
    } –  cortijon Dec 19 '12 at 15:27
Great answer! But crap. I just spent like 15 minutes porting it to JavaScript only to find that someone already did it. D: –  Cheezey May 18 '13 at 17:02
Good algorithm; however, FYI it doesn't handle cases where the determinant is 0 (the -s2_x * s1_y + s1_x * s2_y above). If it's 0 (or near 0) the lines are parallel or collinear. If it's collinear then the intersection may be another line segment. –  seand Jul 5 '13 at 22:56

Answer: The problem reduces to this question: do two lines from A to B and from C to D intersect? Then you can ask it four times (between the line and each of the four sides of the rectangle).

Here's the vector math for doing it. I'm assuming the line from A to B is the line in question and the line from C to D is one of the rectangle lines. My notation is that Ax is the "x-coordinate of A" and Cy is the "y-coordinate of C." And "*" means dot-product, so e.g. A*B = Ax*Bx + Ay*By.

    E = B - A = ( Bx-Ax, By-Ay )
    F = D - C = ( Dx-Cx, Dy-Cy )
    P = ( -Ey, Ex )
    h = ( (A-C) * P ) / ( F * P )

This h number is the key. If h is between 0 and 1, the lines intersect, otherwise they don't. If F*P is zero, of course you cannot make the calculation, but in this case the lines are parallel and therefore only intersect in the obvious cases. The exact point of intersection is C + F*h.
More fun: if h is exactly 0 or 1 the lines touch at an end-point. You can consider this an "intersection" or not as you see fit. Specifically, h is how much you have to multiply the length of the line in order to exactly touch the other line. Therefore, if h < 0, it means the rectangle line is "behind" the given line (with "direction" being "from A to B"), and if h > 1 the rectangle line is "in front" of the given line.

A and C are vectors that point to the start of the line; E and F are the vectors from the ends of A and C that form the line. For any two non-parallel lines in the plane, there must be exactly one pair of scalars g and h such that this equation holds:

    A + E*g = C + F*h

Why? Because two non-parallel lines must intersect, which means you can scale both lines by some amount each and touch each other. (At first this looks like a single equation with two unknowns! But it isn't when you consider that this is a 2D vector equation, which means this is really a pair of equations in x and y.)

We have to eliminate one of these variables. An easy way is to make the E term zero. To do that, take the dot-product of both sides of the equation using a vector that will dot to zero with E. That vector I called P above, and I did the obvious transformation of E. You now have:

    A*P = C*P + F*P*h
    (A-C)*P = (F*P)*h
    ( (A-C)*P ) / (F*P) = h

Comments:
This algorithm is nice. But there is a hole in it as pointed to by Dan at stackoverflow.com/questions/563198/… and Elemental at stackoverflow.com/questions/563198/…. It would be cool if you would update your answer for future reference. Thanks. –  Chantz Oct 6 '09 at 1:45
Is this algorithm numerically stable? I've tried a similar approach and it turned out to give weird results when working on floats. –  milosz Aug 1 '10 at 8:57
There seems to be another problem with this algorithm. When it's fed the points A={1, 0} B={2, 0} C={0, 0} D={1, 0}, although the line segments clearly touch at an end, F*P (and also E*Q, in line with the user below's fix) are both 0, thus causing division by 0 to find h and g. Still working on the solution for this one, but I thought the problem was worth pointing out. –  candrews Feb 27 '11 at 6:24
This answer is simply incorrect. Try A={0,0}, B={0,1}, C={0,2} D={2,0} –  Tim Cooper Dec 23 '12 at 14:32
A + E*g = C + F*h — the two lines intersect if and only if the solution to that equation (assuming they are not parallel) has both g and h between 0 and 1 (in- or exclusive, depending on whether you count touching at an end point). –  Daniel Fischer Dec 23 '12 at 16:23

Answer: I have tried to implement the algorithm so elegantly described by Jason above; unfortunately, while working through the mathematics in the debugging I found many cases for which it doesn't work. For example, consider the points A(10,10) B(20,20) C(10,1) D(1,10), which gives h = .5, and yet it is clear by examination that these segments are nowhere near each other. Graphing this makes it clear that the 0 < h < 1 criterion only indicates that the intercept point would lie on CD if it existed, but tells one nothing of whether that point lies on AB. To ensure that there is a cross point you must do the symmetrical calculation for the variable g, and the requirement for interception is: 0 < g < 1 AND 0 < h < 1.

Comments:
I've been pulling my hair out trying to figure out why the accepted answer wasn't working for me. Thanks so much! –  Matt Bridges Aug 22 '09 at 18:40
Also notable that the boundary conditions work in this case (i.e. for h=0 or h=1 or g=0 or g=1 the lines 'just' touch). –  Elemental Oct 7 '09 at 8:34

Answer: The answer once accepted here is incorrect (it has since been unaccepted, so hooray!). It does not correctly eliminate all non-intersections. Trivially it may appear to work, but it can fail, especially in the case that 0 and 1 are considered valid for h.

Consider the following case: lines at (4,1)-(5,1) and (0,0)-(0,2). These are perpendicular lines which clearly do not overlap.

    h = ((4,1)-(0,0)) dot (0,1) / ((0,-2) dot (0,1)) = 0

According to the above answer, these two line segments meet at an endpoint (values of 0 and 1). That endpoint would be:

    C + F*h = (0,0) + (0,-2)*0 = (0,0)

So, apparently the two line segments meet at (0,0), which is on line CD, but not on line AB. So what is going wrong? The answer is that the values of 0 and 1 are not valid and only sometimes HAPPEN to correctly predict endpoint intersection. When the extension of one line (but not the other) would meet the line segment, the algorithm predicts an intersection of line segments, but this is not correct. I imagine that by testing starting with AB vs CD and then also testing with CD vs AB, this problem would be eliminated. Only if both fall between 0 and 1 inclusively can they be said to intersect.

I recommend using the vector cross product method if you must predict end-points.

Comments:
The "accepted" answer can change, so you should call it something else. (In fact, I think it has changed since your comment.) –  Johannes Hoff Feb 15 '13 at 21:21

Answer: Here's an improvement to Gavin's answer. marcp's solution is similar also, but neither postpones the division. This actually turns out to be a practical application of Gareth Rees' answer as well, because the cross-product's equivalent in 2D is the perp-dot-product, which is what this code uses three of. Switching to 3D and using the cross-product, interpolating both s and t at the end, results in the two closest points between the lines in 3D.
Anyway, the 2D solution:

    int get_line_intersection(float p0_x, float p0_y, float p1_x, float p1_y,
        float p2_x, float p2_y, float p3_x, float p3_y, float *i_x, float *i_y)
    {
        float s02_x, s02_y, s10_x, s10_y, s32_x, s32_y, s_numer, t_numer, denom, t;
        s10_x = p1_x - p0_x;
        s10_y = p1_y - p0_y;
        s32_x = p3_x - p2_x;
        s32_y = p3_y - p2_y;

        denom = s10_x * s32_y - s32_x * s10_y;
        if (denom == 0)
            return 0; // Collinear
        bool denomPositive = denom > 0;

        s02_x = p0_x - p2_x;
        s02_y = p0_y - p2_y;
        s_numer = s10_x * s02_y - s10_y * s02_x;
        if ((s_numer < 0) == denomPositive)
            return 0; // No collision

        t_numer = s32_x * s02_y - s32_y * s02_x;
        if ((t_numer < 0) == denomPositive)
            return 0; // No collision

        if (((s_numer > denom) == denomPositive) || ((t_numer > denom) == denomPositive))
            return 0; // No collision

        // Collision detected
        t = t_numer / denom;
        if (i_x != NULL)
            *i_x = p0_x + (t * s10_x);
        if (i_y != NULL)
            *i_y = p0_y + (t * s10_y);

        return 1;
    }

Basically it postpones the division until the last moment, and moves most of the tests until before certain calculations are done, thereby adding early-outs. Finally, it also avoids the division by zero case which occurs when the lines are parallel.

You also might want to consider using an epsilon test rather than comparison against zero. Lines that are extremely close to parallel can produce results that are slightly off. This is not a bug; it is a limitation of floating point math.

Comments:
The most elegant solution. Thank you. –  Ray Feb 19 '13 at 9:58
Fails if some of the points have a value of 0... that should not happen, right? –  hfossli Feb 20 '13 at 15:21
I've made a correction for a bug introduced when deferring the divide. t could be positive when the numer and denom were both negative. –  iMalc Apr 2 '13 at 5:05

Answer:

Question C: How do you detect whether or not two line segments intersect?

I have searched for the same topic, and I wasn't happy with the answers. So I have written an article that explains in great detail how to check if two line segments intersect, with a lot of images. There is complete (and tested) Java code. Here is the article, cropped to the most important parts:

The algorithm that checks if line segment a intersects with line segment b looks like this:

[Figure: algorithm flowchart]

What are bounding boxes? Here are two bounding boxes of two line segments:

[Figure: two bounding boxes of two line segments]

If both bounding boxes have an intersection, you move line segment a so that one point is at (0|0). Now you have a line through the origin defined by a. Now move line segment b the same way and check if the new points of line segment b are on different sides of line a. If this is the case, check it the other way around. If this is also the case, the line segments intersect. If not, they don't intersect.

Question A: Where do two line segments intersect?

You know that the two line segments a and b intersect. If you don't know that, check it with the tools I gave you in "Question C". Now you can go through some cases and get the solution with 7th-grade math (see code and interactive example).

Question B: How do you detect whether or not two lines intersect?

Let's say your point A = (x1, y1), point B = (x2, y2), C = (x3, y3), D = (x4, y4). Your first line is defined by AB (with A != B), your second one by CD (with C != D).

    function doLinesIntersect(AB, CD) {
        if (x1 == x2) {
            return !(x3 == x4 && x1 != x3);
        } else if (x3 == x4) {
            return true;
        } else {
            // both lines are not parallel to the y-axis
            m1 = (y1-y2)/(x1-x2);
            m2 = (y3-y4)/(x3-x4);
            return m1 != m2;
        }
    }

Question D: Where do two lines intersect?

Check with Question B if they intersect at all. The lines a and b are defined by two points for each line. You can basically apply the same logic as was used in Question A.

Comments:
To be clear, Question B in this answer is truly about two lines intersecting, not line segments. I'm not complaining; it's not incorrect. Just don't want anyone to be misled. –  phord Jul 16 '13 at 3:46

Answer: This is working well for me. Taken from here.
    // calculates intersection and checks for parallel lines.
    // also checks that the intersection point is actually on
    // the line segment p1-p2
    Point findIntersection(Point p1, Point p2, Point p3, Point p4) {
        float xD1,yD1,xD2,yD2,xD3,yD3;
        float dot,deg,len1,len2;
        float segmentLen1,segmentLen2;
        float ua,ub,div;

        // calculate differences
        xD1=p2.x-p1.x;
        xD2=p4.x-p3.x;
        yD1=p2.y-p1.y;
        yD2=p4.y-p3.y;
        xD3=p1.x-p3.x;
        yD3=p1.y-p3.y;

        // calculate the lengths of the two lines
        len1=sqrt(xD1*xD1+yD1*yD1);
        len2=sqrt(xD2*xD2+yD2*yD2);

        // calculate angle between the two lines.
        dot=(xD1*xD2+yD1*yD2); // dot product
        deg=dot/(len1*len2);

        // if abs(angle)==1 then the lines are parallel,
        // so no intersection is possible
        if(abs(deg)==1) return null;

        // find intersection Pt between two lines
        Point pt=new Point(0,0);
        div=yD2*xD1-xD2*yD1;
        ua=(xD2*yD3-yD2*xD3)/div;
        ub=(xD1*yD3-yD1*xD3)/div;
        pt.x=p1.x+ua*xD1;
        pt.y=p1.y+ua*yD1;

        // calculate the combined length of the two segments
        // between Pt-p1 and Pt-p2
        xD1=pt.x-p1.x;
        xD2=pt.x-p2.x;
        yD1=pt.y-p1.y;
        yD2=pt.y-p2.y;
        segmentLen1=sqrt(xD1*xD1+yD1*yD1)+sqrt(xD2*xD2+yD2*yD2);

        // calculate the combined length of the two segments
        // between Pt-p3 and Pt-p4
        xD1=pt.x-p3.x;
        xD2=pt.x-p4.x;
        yD1=pt.y-p3.y;
        yD2=pt.y-p4.y;
        segmentLen2=sqrt(xD1*xD1+yD1*yD1)+sqrt(xD2*xD2+yD2*yD2);

        // if the lengths of both sets of segments are the same as
        // the lengths of the two lines, the point is actually
        // on the line segment.
        // if the point isn't on the line, return null
        if(abs(len1-segmentLen1)>0.01 || abs(len2-segmentLen2)>0.01)
            return null;

        // return the valid intersection
        return pt;
    }

    class Point{
        float x,y;

        Point(float x, float y){
            this.x = x;
            this.y = y;
        }

        void set(float x, float y){
            this.x = x;
            this.y = y;
        }
    }

Comments:
There are several problems with this code. It can raise an exception due to division by zero; it's slow because it takes square roots; and it sometimes returns false positives because it uses a fudge factor. You can do better than this! –  Gareth Rees Feb 19 '09 at 23:14
Okay as a solution, but the one given by Jason is definitely computationally quicker and avoids a lot of the problems with this solution. –  Elemental Oct 6 '09 at 8:17

Answer: Just wanted to mention that a good explanation and explicit solution can be found in the Numerical Recipes series. I've got the 3rd edition and the answer is on page 1117, section 21.4. Another solution with a different nomenclature can be found in a paper by Marina Gavrilova, Reliable Line Section Intersection Testing. Her solution is, to my mind, a little simpler.

My implementation is below:

    bool NuGeometry::IsBetween(const double& x0, const double& x, const double& x1){
        return (x >= x0) && (x <= x1);
    }

    bool NuGeometry::FindIntersection(const double& x0, const double& y0,
        const double& x1, const double& y1,
        const double& a0, const double& b0,
        const double& a1, const double& b1,
        double& xy, double& ab) {
        // four endpoints are x0,y0 & x1,y1 & a0,b0 & a1,b1
        // returned values xy and ab are the fractional distance along xy and ab
        // and are only defined when the result is true
        bool partial = false;
        double denom = (b0 - b1) * (x0 - x1) - (y0 - y1) * (a0 - a1);
        if (denom == 0) {
            xy = -1;
            ab = -1;
        } else {
            xy = (a0 * (y1 - b1) + a1 * (b0 - y1) + x1 * (b1 - b0)) / denom;
            partial = NuGeometry::IsBetween(0, xy, 1);
            if (partial) {
                // no point calculating this unless xy is between 0 & 1
                ab = (y1 * (x0 - a1) + b1 * (x1 - x0) + y0 * (a1 - x1)) / denom;
            }
        }
        if ( partial && NuGeometry::IsBetween(0, ab, 1)) {
            ab = 1-ab;
            xy = 1-xy;
            return true;
        } else return false;
    }

Answer: Processing.js has a demo with sample code.

Answer: C and Objective-C, based on Gareth Rees' answer:

    typedef union AGPoint {
        struct { double x, y; };
        double v[2];
    } AGPoint;

    typedef union AGLine {
        struct { AGPoint start, end; };
        double v[4];
    } AGLine;

    BOOL AGLineIntersection(AGLine l1, AGLine l2, AGPoint *out_pointOfIntersection)
    {
        AGPoint p = l1.start;
        AGPoint q = l2.start;
        AGPoint r = AGPointSubtract(l1.end, l1.start);
        AGPoint s = AGPointSubtract(l2.end, l2.start);

        double s_r_crossProduct = AGPointCrossProduct(r, s);
        double t = AGPointCrossProduct(AGPointSubtract(q, p), s) / s_r_crossProduct;
        double u = AGPointCrossProduct(AGPointSubtract(q, p), r) / s_r_crossProduct;

        if(t < 0 || t > 1.0 || u < 0 || u > 1.0)
        {
            if(out_pointOfIntersection != NULL)
            {
                *out_pointOfIntersection = AGPointZero;
            }
            return NO;
        }

        if(out_pointOfIntersection != NULL)
        {
            AGPoint i = AGPointAdd(p, AGPointMultiply(r, t));
            *out_pointOfIntersection = i;
        }
        return YES;
    }

    extern AGPoint AGPointSubtract(AGPoint p1, AGPoint p2)
    {
        return (AGPoint){p1.x - p2.x, p1.y - p2.y};
    }

    extern double AGPointCrossProduct(AGPoint p1, AGPoint p2)
    {
        return (p1.x * p2.y) - (p1.y * p2.x);
    }

    extern AGPoint AGPointAdd(AGPoint p1, AGPoint p2)
    {
        return (AGPoint){p1.x + p2.x, p1.y + p2.y};
    }

    extern AGPoint AGPointMultiply(AGPoint p1, double factor)
    {
        return (AGPoint){p1.x * factor, p1.y * factor};
    }

Many of the functions and structs are private, but you should pretty easily be able to tell what's going on. This is public in this repo: https://github.com/hfossli/AGGeometryKit/

Answer: Here is a MATLAB function with a very fast algorithm which calculates the intersection point between two line segments. From MathWorks (author: Douglas Schwarz).

Comments:
Thanks to your link, I have found Fast Line Segment Intersection, by U. Murat Erdem (2010), and a link to an explanation by Paul Bourke (1989).
–  Wok May 8 '11 at 15:20

Answer: I tried some of these answers, but they didn't work for me (sorry guys); after some more net searching I found this. With a little modification to his code I now have this function, which will return the point of intersection, or if no intersection is found it will return (-1, -1).

    Public Function intercetion(ByVal ax As Integer, ByVal ay As Integer,
                                ByVal bx As Integer, ByVal by As Integer,
                                ByVal cx As Integer, ByVal cy As Integer,
                                ByVal dx As Integer, ByVal dy As Integer) As Point
        '// Determines the intersection point of the line segment defined by points A and B
        '// with the line segment defined by points C and D.
        '// Returns YES if the intersection point was found, and stores that point in X,Y.
        '// Returns NO if there is no determinable intersection point, in which case X,Y will
        '// be unmodified.

        Dim distAB, theCos, theSin, newX, ABpos As Double

        '// Fail if either line segment is zero-length.
        If ax = bx And ay = by Or cx = dx And cy = dy Then Return New Point(-1, -1)

        '// Fail if the segments share an end-point.
        If ax = cx And ay = cy Or bx = cx And by = cy Or ax = dx And ay = dy Or bx = dx And by = dy Then Return New Point(-1, -1)

        '// (1) Translate the system so that point A is on the origin.
        bx -= ax
        by -= ay
        cx -= ax
        cy -= ay
        dx -= ax
        dy -= ay

        '// Discover the length of segment A-B.
        distAB = Math.Sqrt(bx * bx + by * by)

        '// (2) Rotate the system so that point B is on the positive X axis.
        theCos = bx / distAB
        theSin = by / distAB
        newX = cx * theCos + cy * theSin
        cy = cy * theCos - cx * theSin
        cx = newX
        newX = dx * theCos + dy * theSin
        dy = dy * theCos - dx * theSin
        dx = newX

        '// Fail if segment C-D doesn't cross line A-B.
        If cy < 0 And dy < 0 Or cy >= 0 And dy >= 0 Then Return New Point(-1, -1)

        '// (3) Discover the position of the intersection point along line A-B.
        ABpos = dx + (cx - dx) * dy / (dy - cy)

        '// Fail if segment C-D crosses line A-B outside of segment A-B.
        If ABpos < 0 Or ABpos > distAB Then Return New Point(-1, -1)

        '// (4) Apply the discovered position to line A-B in the original coordinate system.
        '// Success.
        Return New Point(ax + ABpos * theCos, ay + ABpos * theSin)
    End Function

Answer: I tried a lot of ways and then I decided to write my own. So here it is:

    bool IsBetween (float x, float b1, float b2)
    {
        return ( ((x >= (b1 - 0.1f)) && (x <= (b2 + 0.1f))) ||
                 ((x >= (b2 - 0.1f)) && (x <= (b1 + 0.1f))));
    }

    bool IsSegmentsColliding( POINTFLOAT lineA, POINTFLOAT lineB,
                              POINTFLOAT line2A, POINTFLOAT line2B)
    {
        float deltaX1 = lineB.x - lineA.x;
        float deltaX2 = line2B.x - line2A.x;
        float deltaY1 = lineB.y - lineA.y;
        float deltaY2 = line2B.y - line2A.y;

        if (abs(deltaX1) < 0.01f && abs(deltaX2) < 0.01f) // Both are vertical lines
            return false;
        if (abs((deltaY1 / deltaX1) - (deltaY2 / deltaX2)) < 0.001f) // Two parallel lines
            return false;

        float xCol = ( ( (deltaX1 * deltaX2) * (line2A.y - lineA.y)) -
                       (line2A.x * deltaY2 * deltaX1) +
                       (lineA.x * deltaY1 * deltaX2)) /
                     ((deltaY1 * deltaX2) - (deltaY2 * deltaX1));
        float yCol = 0;
        if (deltaX1 < 0.01f) // L1 is a vertical line
            yCol = ((xCol * deltaY2) + (line2A.y * deltaX2) - (line2A.x * deltaY2)) / deltaX2;
        else // L1 is acceptable
            yCol = ((xCol * deltaY1) + (lineA.y * deltaX1) - (lineA.x * deltaY1)) / deltaX1;

        bool isCol = IsBetween(xCol, lineA.x, lineB.x) &&
                     IsBetween(yCol, lineA.y, lineB.y) &&
                     IsBetween(xCol, line2A.x, line2B.x) &&
                     IsBetween(yCol, line2A.y, line2B.y);
        return isCol;
    }

Based on these two formulas (I simplified them from the equation of lines and other formulas):

[Image: formula for x]
[Image: formula for y]

Answer: Python version of iMalc's answer:

    def find_intersection( p0, p1, p2, p3 ) :
        s10_x = p1[0] - p0[0]
        s10_y = p1[1] - p0[1]
        s32_x = p3[0] - p2[0]
        s32_y = p3[1] - p2[1]

        denom = s10_x * s32_y - s32_x * s10_y
        if denom == 0 : return None # collinear

        denom_is_positive = denom > 0

        s02_x = p0[0] - p2[0]
        s02_y = p0[1] - p2[1]

        s_numer = s10_x * s02_y - s10_y * s02_x
        if (s_numer < 0) == denom_is_positive : return None # no collision

        t_numer = s32_x * s02_y - s32_y * s02_x
        if (t_numer < 0) == denom_is_positive : return None # no collision

        if (s_numer > denom) == denom_is_positive or (t_numer > denom) == denom_is_positive : return None # no collision

        # collision detected
        t = t_numer / denom

        intersection_point = [ p0[0] + (t * s10_x), p0[1] + (t * s10_y) ]
        return intersection_point

Answer: If each side of the rectangle is a line segment, and the user-drawn portion is a line segment, then you need to just check the user-drawn segment for intersection with the four side line segments. This should be a fairly simple exercise given the start and end points of each segment.

Comments:
Note that this was a reasonable answer to the question as originally framed, but now that the question has been edited heavily it doesn't make so much sense. –  Ganesh Sittampalam Jan 10 at 22:12
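To tie the answers above together, here is a compact Python sketch of the cross-product method from the top answer (the function and variable names are my own, and collinear overlap is simply reported as "no unique intersection" here rather than handled as a separate case):

```python
def cross(v, w):
    # 2D cross product: v_x * w_y - v_y * w_x
    return v[0] * w[1] - v[1] * w[0]


def segment_intersection(p, p2, q, q2):
    """Return the intersection point of segments p->p2 and q->q2, or None."""
    r = (p2[0] - p[0], p2[1] - p[1])
    s = (q2[0] - q[0], q2[1] - q[1])
    qp = (q[0] - p[0], q[1] - p[1])

    rxs = cross(r, s)
    if rxs == 0:
        return None  # parallel or collinear: no unique intersection point

    t = cross(qp, s) / rxs  # t = (q - p) x s / (r x s)
    u = cross(qp, r) / rxs  # u = (q - p) x r / (r x s)
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (p[0] + t * r[0], p[1] + t * r[1])
    return None  # the infinite lines cross outside the segments


print(segment_intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # (1.0, 1.0)
print(segment_intersection((4, 1), (5, 1), (0, 0), (0, 2)))  # None
```

Note that the second call is exactly Dan's counterexample above: because both t and u are checked, the perpendicular-but-disjoint segments are correctly rejected.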
Question: I'd like to be more explicit about my closures regarding their argument types. So I would write something like:

    List<Y> myCollect(List<X> list, Closure<X,Y> clos) { ... }

I know that Groovy won't use that type information, but Groovy++ may use it at compile time. Can this be achieved (other than putting it into a comment)?

UPDATE: The title may sound misleading, but I thought the above example would make it clearer. I'm interested in specifying the types of a closure which is the argument of some function. Suppose I want to redefine the built-in collect. So I'm interested in writing myCollect, not in writing clos. What I want to achieve is to get compile-time errors:

    myCollect(['a', 'ab'], { it / 2 })     // compile error
    myCollect(['a', 'ab'], { it.size() })  // OK

Comments:
In the type Closure<V>, the V represents the return value of the closure, not its parameters. As such, it wouldn't make sense to have a Closure<V, X> since you can't return two values. –  Nancy Deschênes Jul 26 '13 at 1:16
I mean Closure<X,Y> to accept a single X as input and return Y, so it can be applied to the items of List<X>. I updated the return type of the function. –  Adam Schmideg Jul 26 '13 at 8:57

Answer (accepted): You can define the types of a closure's parameters, but the syntax shown above is incorrect. Here is a closure without parameter types:

    def concatenate = { arg1, arg2 ->
        return arg1 + arg2
    }

And here is the same closure with parameter types:

    def concatenate = { String arg1, String arg2 ->
        return arg1 + arg2
    }

Groovy does do some compile-time type checking, but not as much as Groovy++ (or Java). Even if the type information is not used at compile time, it will be checked at runtime, and it is also valuable as a form of documentation.

Comments:
Please see my update. Of what type is your concatenate? Is it Closure<String,String,String>? Or what? –  Adam Schmideg Apr 18 '11 at 17:07
I accept this answer not because I'm happy with it, but because it is the only one :( Maybe my question wasn't clear enough. –  Adam Schmideg Jul 22 '11 at 10:35
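As a point of comparison only (this is my own illustration, not part of the Groovy answer): the statically checkable "typed closure parameter" the question asks for is what other languages express with a function-type annotation. In Python's typing notation, for example, a checker such as mypy can flag a closure of the wrong shape at analysis time, while the annotated code still runs normally:

```python
from typing import Callable, List, TypeVar

X = TypeVar("X")
Y = TypeVar("Y")


def my_collect(items: List[X], clos: Callable[[X], Y]) -> List[Y]:
    # The annotation says: clos takes one X and returns a Y
    return [clos(item) for item in items]


lengths = my_collect(["a", "ab"], lambda s: len(s))  # OK: Callable[[str], int]
print(lengths)  # [1, 2]
# my_collect(["a", "ab"], lambda s: s / 2)  # a static checker can reject this
```

The runtime itself ignores the annotations, which mirrors the plain-Groovy situation described in the accepted answer; the static check is a separate tool, analogous to what Groovy++ was offering at compile time.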
Question: If I link to a favicon in my head from another domain, will it work OK? I want to use the same favicon as an existing site, and my new site may end up being a different domain altogether or a subdomain of the existing one. Thanks a lot.

Answer (accepted): Put this in the head tag:

    <link rel="shortcut icon" href="your url here">

It'll work fine, but leeching (stealing bandwidth from another server) isn't cool. Host it on your own server or ask the webmaster for permission.

Answer: Yes, you may do this. A favicon is treated just like any other HTTP resource on the page, so it can be external.
Question: I'm reading this article and there is a paragraph:

"If you ever find yourself needing to explicitly scope a variable inside a function you can use an anonymous function to do this. You can actually create an anonymous function and then execute it straight away and all the variables inside will be scoped to the anonymous function:

    (function() {
        var myProperty = "hello world";
        alert(myProperty);
    })();
    alert(typeof(myProperty)); // undefined"

I have met with this already, but I still need some clarification on why I should need to explicitly scope a variable inside a function when variables are implicitly scoped inside a function in JavaScript. Could you explain the purpose of this? Thank you.

Comments:
Maybe because you don't want to overwrite the value of a global variable? –  Achshar Sep 13 '11 at 12:49
@Achshar Yes, but I couldn't imagine this situation. Raynos' example opened my eyes. –  xralf Sep 13 '11 at 12:59

Answer (accepted):

    for (var i = 0; i < 10; i++) {
        setTimeout(function() { console.log(i) }, 10);
    }
    // logs 10, 10 times

    for (var i = 0; i < 10; i++) {
        (function(i) { // explicitly scope i
            setTimeout(function() { console.log(i) }, 10);
        })(i);
    }
    // logs 0 through 9

When generating functions inside other functions and accessing variables up the scope chain through closure scope, it may be useful to "explicitly scope" a variable inside the outer function. Although this is an anti-pattern. The correct solution would be:

    var generateLogger = function(i) {
        return function() { console.log(i); };
    };

    for (var i = 0; i < 10; i++) {
        setTimeout(generateLogger(i), 10);
    }

Since generating functions in a loop is inefficient and bad practice. There are no real use cases of "explicitly scoping" variables that can't be avoided by not creating functions inside other functions.

Comments:
Thank you for answering the OP's question and providing an actual example. –  gladsocc Sep 13 '11 at 12:39
@Raynos: Thank you, now it's clear. There is only one strange thing. The output in the first example is (54, 10, ..., 10) and (undefined, 0, ..., 9). Why is there one extra value? And why, in the first case, isn't it (54, 0, ..., 0), with the last value of i overwriting the preceding values? –  xralf Sep 13 '11 at 12:56
@xralf the 54 and undefined are the value of executing the "expression" that is the for loop in the console. 54 is the return value of the last setTimeout and undefined is the return value of the anonymous function. –  Raynos Sep 13 '11 at 13:26
@xralf in the first example it's 10, ..., 10 because the setTimeout function prints the current value of i after 10 milliseconds. The current value of i is 10 when those functions execute. –  Raynos Sep 13 '11 at 13:27
@Raynos This could be a bug in the future when the for loop should execute 10 times but it executes 11 times; it's good to know this pitfall. –  xralf Sep 13 '11 at 13:58
global_01_local_0_shard_00000017_processed.jsonl/23188
Take the 2-minute tour × I'm trying to display a hosted form on my business page as a 'Page Tab'. I have created a new app, entered the URLs into the Page Tab section, saved, and added the app to my page. I have both http and https URLS to satisfy Facebook's requirements. I have filled in the Basic info and the Page Tab information only, then saved. But when I add the app to my page and view it in a live environment I see: The page cannot be found Make sure that the Web site address displayed in the address bar of your browser is spelled and formatted correctly. If you reached this page by clicking a link, contact the Web site administrator to alert them that the link is incorrectly formatted. Click the Back button to try another link. HTTP Error 404 - File or directory not found. Internet Information Services (IIS) Technical Information (for support personnel) The URLS are: http://www.mardevdm2.com/reports/eForm.html (and the corresponding https version of the same URL) The page for the app is: https://www.facebook.com/apps/application.php?id=169630223121106 What am I doing wrong? When embedded in a business page it displays the above 404 error. But in all other instances the content is visible. share|improve this question add comment 1 Answer When it's an app on a tab, the request from Facebook is sent as a POST request, is your server set up to allow that? That error message is generated from your server, so check your own logs and see what the request from Facebook was, it should be pretty easy to see why your server returned 404 once you know exactly what the Facebook request was share|improve this answer Hi, the form uses method=”post” so I assume this is set-up correctly. Could it be something else? –  Rick Burtch Oct 27 '11 at 19:04 Is your page tab URL set up correctly? that error is from your own server, check the logs and see what page us being requested. –  Igy Oct 28 '11 at 10:03 This fixed my problem. Thanks! 
–  John Dec 13 '11 at 23:54 add comment Your Answer
global_01_local_0_shard_00000017_processed.jsonl/23189
Take the 2-minute tour × What I am trying to do is show a window, that does not explicitly have a height/width, (both values ommited or set to Auto). I was guessing that the window would find out its size by auto - calculating all contained usercontrols sizes, but this doesn't actually work! Instead I get a big window with Actualwidth and Actualheight values both set to 512 (?!?!) Window declaration: <Window x:Class="Window3" Showing this window as a dialog via: Private Sub Button2_Click(ByVal sender As System.Object, ByVal e As System.Windows.RoutedEventArgs) Handles Button2.Click Dim dlg As New Window3 End Sub Is there a solution for this? I don't want to explicitly set the size of my window because many controls in the form will be collapsed based on constructor parameters, and trying to find the actual size of the form would be tricky (and ugly). share|improve this question You may also want to center the window with .WindowStartupLocation set to "CenterScreen" –  Anders Lindén Jul 5 '12 at 14:16 add comment 1 Answer up vote 168 down vote accepted Set the window's property SizeToContent to "WidthAndHeight". This should help. share|improve this answer add comment Your Answer
global_01_local_0_shard_00000017_processed.jsonl/23190
Take the 2-minute tour × I would like to connect my page on facebook to my website, using the livestream plugin ( http://developers.facebook.com/docs/reference/plugins/live-stream/ ) It asks for an app id. How do I create this for a facebook-page? I have tried everything (at least!) It is really not clear to me how to create this app id for my page to add in the livestream plugin. share|improve this question add comment 1 Answer https://developers.facebook.com/apps is where you create a new app - the app ID is displayed in your list of apps and in the App settings page. share|improve this answer Thanks, I have come so far, but I cant find out how to Set up the app correct. I can tell that my app id is 310151325680020, but when i type this on this site: developers.facebook.com/docs/reference/plugins/live-stream It shows my profile. I want it to show the profile for my company.. –  Lars Kristian Engelsby Hansen Nov 27 '11 at 19:59 add comment Your Answer
global_01_local_0_shard_00000017_processed.jsonl/23191
Take the 2-minute tour × I have a server that listens for socket connections and perform different kind of actions, depending on the request. One of them is long lived database queries, for which the server forks. The server keeps a log of all the active children and whenever asked to shutdown, it will kill all it's children before exiting. A couple of times I have encountered the situation that the server crashed or was killed ungracefully, which lead to the child process becoming orphan. If I try to bring the server back again, it will refuse saying the the listening socket is not able to bind because that address/port is already bound. I am looking for a way to improve this kind of situation, so that the main server process can come back right away. I've tried monitoring the parent existance from the child and exiting as soon at is gone, but this has only resulted in having zombie processes and the socket seems to still be bound. The server is written in Python, but any explanation or suggestion in any language is welcome. share|improve this question Do you know how the server terminated? If you can catch a problem in a signal handler, you may be able to exit the children. –  Kekoa May 14 '09 at 21:43 What flavor of Unix? –  sigjuice May 14 '09 at 21:49 Are you sure the child processes are detecting the disappearance of their parent and calling exit? If a parent disappears, init (pid 1) should inherit the child processes, call wait() on exited children and zombies should never happen. –  sigjuice May 14 '09 at 22:06 add comment 4 Answers Use this on your socket before you call listen(): int on = 1; setsockopt (sockfd_wan, SOL_SOCKET, SO_REUSEADDR, &on, sizeof (on)); It allows your programm to use that socket, even it was randomly picked before by another outgoing TCP-connection (cannot happen for ports <1024). But it should also help directly with your problem!! 
There is another bad thing that can happen: If your childs are forked, they inherit EVERY open filedescriptor. If they simply fork and launch another long running programm, those will also have an open handle to your listen-socket, so it stays in use (find out with lsof and netstat command!) So one should call this: int close_on_exec_on(int fd) return fcntl(fd, F_SETFD, FD_CLOEXEC); But I never tried it in the main programm if it forks off childs and it clearly will not help you because the childs are forked, not run with exec. But keep it in mind and call it on your listen socket in the main programm anyway! Just in case you run an external programm share|improve this answer add comment Make your server the leader of a process group. In that case children are terminated when the group leader exits. share|improve this answer add comment Perhaps when you fork, disown the child, so that the parent process isn't the parent registered with the OS. Does the parent really need to communicate with the child? If not this may be an option. You can keep track of child processes, but in a different way. You won't get SIGCHLD events anymore. share|improve this answer add comment Unix can handle this for you automatically. When the parent process exits (for any reason) the child processes will all receive SIGCHLD. By default, your child process will ignore this signal, however. All you have to do is register a signal handler for this signal. share|improve this answer I think you got this backwards. SIGCHLD is sent to a parent when a child exits. –  sigjuice May 14 '09 at 21:50 On Linux, a child process can be notified of the death of its parent. See PR_SET_PDEATHSIG here, kernel.org/doc/man-pages/online/pages/man2/prctl.2.html –  sigjuice May 14 '09 at 22:03 Oh you're right -- SIGCHLD is sent to the parent when the child exits. You can set things up so the child process will receive a SIGHUP, though. 
I don't remember the details, but look for "orphaned process group" as a key phrase. –  Chris Jones May 14 '09 at 22:15 add comment Your Answer
global_01_local_0_shard_00000017_processed.jsonl/23192
Take the 2-minute tour × I'm using a datagrid with its default scrollPanel which set "opacity" to "0.7". I want to change the opacity to "1" but the ScrollPanel is just created by some "div", not using "CSS" resources. So, I could not override any CSS resource also CSS attribute. Please suggest me how to change the "opacity". share|improve this question add comment 1 Answer Even though it its setting in some div element you can set the opacity in css... In your css type the following .gwt-scrollpanel { Opacity : 1.0 !important; share|improve this answer Thanks, but scrollPanel of the DataGrid does not use gwt-scrollpanel. It is just two <div>, so I used "DOM.setElementAttribute" to update the Opacity and saw it takes effect. –  Xuna Jan 17 '12 at 3:03 add comment Your Answer
global_01_local_0_shard_00000017_processed.jsonl/23193
Take the 2-minute tour × What does ... = ... mean as a function parameter? I saw this in some R source code. I understand that ... is additional arguments, but not sure what the equals does? share|improve this question Do you have any example? –  kohske Jan 4 '12 at 2:36 sorry, this was used in a function call, not in the function definition. PerformanceAnalytics:::SharpeRatio –  SFun28 Jan 4 '12 at 3:10 add comment 1 Answer up vote 18 down vote accepted It is superfluous and the same as .... Probably the author meant that the ... of the called function should equal to the ... of the calling function, but that's the same as using just ... (in fact the ...=... construct may be confusing since it may invoke the idea that arguments other than ... of the called function won't be used which is not true). share|improve this answer add comment Your Answer
global_01_local_0_shard_00000017_processed.jsonl/23194
Take the 2-minute tour × var data = message: "Posting SWF using FB.api", display: 'iframe', caption: "Caption Field", name: "Name", picture: 'http://www.example.com/image.png', source: 'http://www.example.com/FlashMovieSample.swf', link: "http://www.example.com/", // Go here if user click the picture description: "Description field", FB.api('/me/feed', 'post', data, onPostToWallCompleted); The above successfully posts a swf to newsfeed, but the swf has scrollbars. Facebook sets the swf width and height equal to the container (iFrame) width and height. The swf is 200px x 200px. Using smaller width and height on the swf size does not prevent scroll bars from showing. Using the old stream.publish with the expanded_width and expanded_height parameters also produces scrollbars. Using stage.scaleMode = StageScaleMode.NO_SCALE within AS3 helps, but does not prevent scrollbars. I'm out of ideas. Any suggestions? share|improve this question add comment 1 Answer up vote 0 down vote accepted This is a bug with the Facebook platform in the way that it formats the iFrame that contains the SWF. It does this for all video's on the news feed, YouTube, Vimeo, Soundcloud, etc. I would also say that you should not use "Source", but rather define the OpenGraph meta data on the link you are providing. You should also supply both the og:video and og:video:secure_url tags, so people using Facebook with secure browsing enabled get an SSL version of your SWF. share|improve this answer Thank you Adrian –  Michael Penfield Feb 10 '12 at 2:34 @Lix,@Adrian i want to ask one thing , when will this swf be reflected in the post , i thought when we click on the picture a swf will come mentioned in source , but then what about the link .. –  Peter Jul 25 '12 at 7:24 add comment Your Answer
global_01_local_0_shard_00000017_processed.jsonl/23211
Take the 2-minute tour × I am seeking to understand that "average" file size for a folder on my computer. I have calculated the true average of files (total size / no. of files), however I would, ultimately, love to group together files into file-size categories as follows; Files < 1kb: 23 files. Files < 100kb: 276 files. Files < 1mb: 786 files. Is this possible using a batch script? share|improve this question I smell XYZ problem here... –  Hello71 Jul 21 '11 at 16:36 add comment 1 Answer Yes this is possible. You will need to write some code that follows along an algorithm of your choice, using variables to store values. The algorithm could look perform actions like this: • define File-Size-Range-Groups ( < 1kb, < 100 kb, etc ) • Sum the number of files in each File-Size-Range-Group • Sum the individual file sizes for a File-Size-Range-Group total • Output the sum value of files in a File-Size-Range-Group • Output the average (total file size sum for File-Size-Range-Group / number of files in File-Size-Range-Group) share|improve this answer add comment Your Answer
global_01_local_0_shard_00000017_processed.jsonl/23212
Take the 2-minute tour × How do I get my sound working again? It works with Headphones, but when I unplug the headphones a red light is coming out of the headphone jack and I can't adjust the volume on my Macbook Pro. share|improve this question add comment 2 Answers up vote 10 down vote accepted macRumors ref: No sound out of MacBook speakers & red LED in Headphone jack. The red light is probably because its a Mini-TOSLINK port. There is a metal prong in the jack and sometimes it can be bent, causing the digital audio to be turned on when it shouldn't be. The problem is there is a switch in the jack that tells it if you have a mini headphone plug or an optical plug plugged into the headphone port. The problem is when you remove the plug, the jack doesn't know it and keeps shining the red light to talk with the optical. This disables the internal speakers and you see digital out instead of internal speakers in the speaker conrtol panel. Plugging and unplugging the speakers may get it working right for awhile but it won't last forever. And, for the brave hearted, I slid a paperclip in, and pushed out a little metal on the side. That did the job, and I have internal sound again. And the digital red light, is now off. Important Update: If this answer helped you, please do not add a 'thank you' as another answer... This is not a forum -- please read the faq. share|improve this answer I don't see any metal sticking out in there to bend back.... –  ScArcher2 Sep 8 '09 at 21:04 s/paper clip/toothpick/ - I had this same problem and a wooden toothpick worked great. –  jtimberman Sep 9 '09 at 6:23 I took it to the Apple store. They blew it out with a can of air and it fixed my problem! –  ScArcher2 Sep 10 '09 at 17:09 Happened to me too, I went crazy for a while :) –  Davide Gualano Jan 8 '10 at 10:56 Thanks @ScArcher, the toothpick (from other forums) didn't work, but the compressed air did. MUCH thanks. 
–  Rich Homolka Nov 4 '10 at 1:40 add comment Closing the lid and reopening it resolves this for me whenever it occurs, this isn't optimal, but it's quick and easy. When this happened to my macbook 13" unibody (aluminum) model it was very frustrating as I would plug in headphones and they'd seem broken. The same thing works if your tap-to-click feature stops working, or in that case you can use the system prefs to disable and reenable it. share|improve this answer add comment protected by Jeff Atwood Jun 7 '10 at 6:49
global_01_local_0_shard_00000017_processed.jsonl/23213
Take the 2-minute tour × I have tried doing the steps here: http://www.timeatlas.com/email/general/create_image_signatures_in_windows_mail_or_live_mail However I don't like having to uncheck the "Block images" box in the Security tab in Safety Options, because I believe this is weakening the security of the email client. Is there way to have images in email signatures in Windows Live Mail without having to weaken the security? Note, I have tried embedding an image into html using Base64 encoding but Windows Live Mail declares that the signature is "too large" and truncates the signature to remove the image. share|improve this question How about this tutorial. However I'm not using the windows live mail but I'll check the compatibility of the article and tell you. If you got problem let me know. –  avirk May 9 '12 at 2:35 Also see this video and check other videos uploaded by the same user. –  avirk May 9 '12 at 12:12 add comment 1 Answer You need to use the path to the image, rather than the image itself. See this article for step-by-step instructions: Create Image Signatures in Windows Mail or Live Mail. The steps are : 1. Create a HTML Signature Source File 2. Link the HTML file to your Email Signature 3. Fix Windows Live Mail Signature Image 4. You may need to go to Tools > Safety Options... > Security Tab > Download Images section, to uncheck Block Images share|improve this answer Please see in the question where I have stated that I don't want to do step 4 because I think it weakens the security of the mail client. Is there way to achieve this without step 4? –  CJ7 May 8 '12 at 8:51 It might work with Block Images on, just try and see. The security problem with images is usually the parameters that are appended to them. However, today all major email clients get images without parameters. I myself don't see images as a danger, especially since I have installed both antivirus and anti-intrusion security products. 
–  harrymc May 8 '12 at 11:15 It doesn't work with Block Images on. The main problem with images in emails is that if they are linked to external websites then senders of spam can learn your IP address. –  CJ7 May 8 '12 at 11:59 I use MailWasher as a pre-filter for my mail, so I never read any spam on my computer to start with. Advertising your IP address is not dangerous, since nobody can attack you thru your modem/router, which is by itself an excellent firewall, and then thru your own computer's firewall, unless you have unwisely turned on DMZ. The crooks know anyway your IP and maybe even your modem's model, since they know the block of iPv4 IP addresses allocated to your ISP thru the Internet Registry database. –  harrymc May 8 '12 at 16:06 Ok, but then why is the mail client defaulted to block images? I would prefer to leave the default security settings alone. –  CJ7 May 8 '12 at 17:55 show 5 more comments Your Answer
global_01_local_0_shard_00000017_processed.jsonl/23214
Take the 2-minute tour × My intent in a very near future is to set up a dual boot system with Windows 8 and Debian Linux (for testing purposes) on a Lenovo Ideapad 205, that has UEFI BIOS. I saw several articles about the new Windows 8 features regarding faster boot on UEFI and this may cause a sort of incompatibility with GRUB and, in general, Linux distributions. How much is it true? In other words, could I expect some problems when installing those OSs in the following order? 1. Install Windows 8 on a first partition, like 50% of the HDD size. 2. Then install Debian with GRUB2 in another partition set, at this point GRUB2 should replace the default Windows MBR. share|improve this question Please edit your question to include links to those articles you read. –  Moab Sep 6 '12 at 17:16 You don't need grub when you have UEFI. UEFI is capable of selecting the OS loader on its own. Information about how to set up Debian on UEFI –  Marco Sep 6 '12 at 17:32 Here is a detailed guide for installing Debian on a PC with UEFI Secure-Boot Windows 8 pre-Installed. –  user187428 Jan 12 '13 at 11:46 add comment 2 Answers up vote 3 down vote accepted Since you said near future the question probably doesn't apply anymore, but I want to answer it to clarify the situation in case some people have the same doubt as you. You shouldn't have any problems for that setup, whether the laptop was already bought with Windows 8 or not. What you read about incompatibilities is probably related to a security feature of UEFI called Secure Boot which requires bootloaders (ideally anything accessing hardware directly) to be digitally signed so they can be verified, which among other things prevents malware targeting boot loaders or man-in-the-middle attacks when booting over the network. For the Windows 8 certification (for new machines), Microsoft requires that feature to be implemented and enabled by default; so unsigned or compromised bootloaders wouldn't be able to boot by default. 
But, Microsoft also requires that the user should be able to disable that feature altogether if he want to (if the machine doesn't have an ARM processor), with it disabled everything would work as usual. Anyway, many systems with UEFI that didn't ship with Windows 8 don't even implement Secure Boot, so it's even less hassle. The problem may be when you want to have Secure Boot enabled but also compile your own bootloader or kernel. In that case all that's you'd need to sign them (maybe only the bootloader) and add the public key to the UEFI storage so that anything signed with your private key would be verified as secure, but you'd have to buy the key to sign it. Regarding what you say about Windows 8 booting faster, it won't cause any problem in that setup either. It's something they called hybrid boot which uses hibernation to cache most of the core system instead of a traditional boot sequence; but it happens in any kind of system not only UEFI based ones (remember that Windows 8 works in BIOS based systems as well). In any case if that gives you any kind of problem it can be disabled too and the traditional startup is still available. I hope that clarifies things. share|improve this answer add comment There should be no problems at all if the computer did not originally come with Windows 8 installed (if you are indeed "installing" 8 on a primary partition yourself). If your plan is to buy a UEFI computer that comes with Windows 8, then things get a little more interesting. however, most Linux distributions have already solved this proble of boot code signing. Ubuntu and Fedora in particular have already found solutions for this and Debian has been discussing it too. I am sure that it will be solved in a couple of months. share|improve this answer add comment Your Answer
global_01_local_0_shard_00000017_processed.jsonl/23215
Take the 2-minute tour × I Googled a lot to find how to download and install the weather feature in Windows 8. I am not having it in my app list. Please tell me how can I add it? I am not finding it in the store available in start menu. I found that it is one of the best applications which win8 have included. Hence I want to make use of it. share|improve this question Have you tried searching for "Weather"? I had this application on my Start screen right after installation. –  ta.speot.is Dec 16 '12 at 10:45 @ta.speot.is Yeah first thing what I did was searching for weather in start screen. –  Dibya Dec 16 '12 at 13:45 add comment 3 Answers Overall concept use Window 8's own search. Key Point put the focus on 'Store' not 'Apps'. From the Metro UI Type 'Weather', remember to put the focus on 'Store'. You could refine the search with 'Bing Weather' Search for Bing Weather Tip for Windows 8 weather apps. Install at least 3 and see which is the best for your needs. share|improve this answer On my Windows 8 machine "Weather" came as part of the Start screen. allthingsd.com/files/2011/06/Windows-8-start-menu.png I am not certain, but I imagine Microsoft's "Weather" app would be powered by Bing. –  ta.speot.is Dec 16 '12 at 10:44 I didn't find this weather in store. I am running an enterprise edition. –  Dibya Dec 16 '12 at 13:43 add comment The official store link for the Windows 8 (Bing) Weather app is this Click in Internet Explorer 10 in Windows 8 and the store will download the right app. (The app name is localized to your system Language, you won't find it as "Bing Weather") share|improve this answer add comment up vote 0 down vote accepted Windows 8 contains the Bing Weather feature. The application was not correctly installed. This problem can be solved easily with the feature Refresh. Refreshing the PC takes a bit time but puts all the settings to default. share|improve this answer add comment Your Answer
global_01_local_0_shard_00000017_processed.jsonl/23220
Exchange Server 2003 Active Directory Connector Changes

Topic Last Modified: 2005-10-31

By Vincent Yim and Nino Bilic. This article introduces the main changes to Active Directory Connector (ADC) for Microsoft® Exchange Server 2003 when compared to the Exchange 2000 Server version.

The Active Directory Connector (ADC) management console now contains an ADC Tools option. ADC Tools is a collection of wizards and tools that help you set up connection agreements. Specifically, ADC Tools scans your current Active Directory® directory service and Exchange Server 5.5 directory and organization, and then automatically creates the recommended connection agreements.

[Screenshot: ADC Tools main screen]

The following sections discuss the wizards that are included in ADC Tools.

The Resource Mailbox Wizard identifies Active Directory accounts that match more than one Exchange Server 5.5 mailbox. Using this wizard, you can match the appropriate primary mailbox to the Active Directory account and stamp the other mailboxes with the NTDSNoMatch attribute, which designates them as resource mailboxes. You can either make these changes through the graphical user interface (GUI) or export a comma-separated value (.csv) file that you can update and import into the Exchange Server 5.5 directory. ADC Tools will not see changes made after you run this wizard unless you run the data collection step again; therefore, if you make changes in later steps, re-run data collection to verify that the ADC wizards now run without reporting errors.

The Connection Agreement Wizard recommends public folder connection agreements and recipient connection agreements based on your Exchange Server 5.5 directory and Active Directory configuration. You can review the list of recommended connection agreements and select those you want the wizard to create.
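To illustrate what the Resource Mailbox Wizard's NTDSNoMatch stamping amounts to, here is a minimal Python sketch. The function name and the (alias, account) field layout are hypothetical; the wizard actually works against the Exchange Server 5.5 directory or its exported .csv file. The sketch only shows the decision itself: when one Windows account matches several mailboxes, the designated primary is left alone and every other match is stamped NTDSNoMatch (the value the ADC later reads from custom attribute 10).

```python
from collections import Counter

def mark_resource_mailboxes(rows, primary_for):
    """rows: (alias, nt_account) pairs from an Exchange 5.5 directory export.
    primary_for: nt_account -> alias of the mailbox chosen as the primary.
    Returns (alias, nt_account, custom_attribute_10) triples, stamping
    NTDSNoMatch on every non-primary mailbox of a multiply-matched account."""
    counts = Counter(acct for _alias, acct in rows)
    marked = []
    for alias, acct in rows:
        is_resource = counts[acct] > 1 and primary_for.get(acct) != alias
        marked.append((alias, acct, "NTDSNoMatch" if is_resource else ""))
    return marked
```

A conference-room mailbox sharing "DOM\jsmith" with a user mailbox would come back stamped, while the primary mailbox and any singly-matched mailboxes are left blank.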
In Exchange 2000 Server, the ADC schema files were a subset of the Exchange 2000 Server core schema files. Although the ADC's setup /schemaonly switch extended the schema, customers were still required to perform further schema extensions using setup /forestprep. This meant longer lockdown periods for larger customers whose custom applications were sensitive to schema extensions, because of the delayed nature of replication and the resetting of indexed attributes. For more information about indexed attributes, see Microsoft Knowledge Base article 230662, "Enumerating Indexed Attributes in Windows 2000 Active Directory."

In Exchange Server 2003, the schema files imported during the installation or upgrade of an Active Directory Connector service are identical to the core Exchange Server 2003 schema; therefore, the schema is updated only once. If the Exchange Server 2003 version of ADC Setup detects that the Exchange Server 2003 schema already exists, no further schema updates are applied. If, on the other hand, ADC Setup detects a schema version below 6870, the Exchange Server 2003 schema updates are applied.

Although Exchange Server 2003 ADC Setup includes the entire schema, running it is not equivalent to running setup /forestprep, because ADC Setup does not perform many of forestprep's actions, such as importing the Microsoft Office Outlook® templates and setting access control lists (ACLs) on some Active Directory containers. Additionally, forestprep cleans up some address templates and display specifiers for Exchange Server 5.5 classes that were never used nor shown in Active Directory. Therefore, forestprep is still required if ADC Setup is run first, but customers who follow change-control procedures in large environments will not need to plan for additional administrative lockdowns while they wait for schema changes to replicate, because the schema extensions are skipped when forestprep runs later.
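The version gate described above can be sketched as a simple check. The constant is the schema version named in the text (6870); treating the installed version as a plain integer for comparison is an assumption for illustration, since Setup actually reads it from the forest's schema objects.

```python
EXCHANGE_2003_SCHEMA_VERSION = 6870  # version that Exchange 2003 ADC Setup looks for

def needs_schema_update(installed_version):
    """Skip the schema import when the Exchange 2003 schema is already
    present; any version below 6870 gets the full Exchange 2003 update."""
    return installed_version < EXCHANGE_2003_SCHEMA_VERSION
```

Because the ADC Setup schema is identical to the core schema, this check also explains why running forestprep afterward skips the extension step: the forest already reports version 6870 or later.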
When the ADC is upgraded to the Exchange Server 2003 version, the ADC setup program not only upgrades the ADC binaries, but also modifies the versionNumber attribute on any connection agreements owned by that ADC service. To determine which connection agreements are owned by an ADC service, use the Active Directory Connector Services snap-in and select the ADC server, indicated by the name Active Directory Connector servername, in the left pane. Its owned connection agreements are viewable on the right.

When ADC setup upgrades the connection agreement versionNumber attribute, the values are set to 16973842. Older ADC services (such as the Microsoft® Windows® 2000 Server ADC and the Exchange 2000 Server SP3 ADC) cannot process these new connection agreements because they expect the older major version (versionNumber = 16908296). Additionally, if an Exchange 2000 Server or Windows 2000 Server ADC manager snap-in is used to administer an upgraded or new Exchange Server 2003 connection agreement, the following warning is displayed.

[Screenshot: connection agreement version warning]

By the same token, whenever an Exchange 2003 ADC Services snap-in is used to open the properties of an Exchange 2000 Server or Windows 2000 Server connection agreement, the same warning appears.

There are two reasons for increasing the major versions on public folder connection agreements and recipient connection agreements:

• Windows 2000 Server ADC services cannot run any newer connection agreements. Any public folder connection agreement re-homed to the Windows 2000 Server version of the ADC service caused corruption.

• The new connection agreements use Kerberos for authentication, which is not understood by Exchange 2000 Server ADC services.

In summary, an Exchange 2000 Server ADC service cannot run a connection agreement whose version is incompatible with its own. Conversely, an Exchange Server 2003 service cannot run a connection agreement whose versionNumber is below 16973842.
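The compatibility rule summarized above can be expressed as a pair of checks. The two constants are the versionNumber values from the text; the helper names are illustrative only, not anything the ADC exposes. (In hex the two values are 0x01020008 and 0x01030012, which is consistent with the text's description of a bumped major version field, though the exact bit layout is not documented here and should be treated as an observation.)

```python
E2K_CA_VERSION = 16908296   # 0x01020008; stamped on Exchange 2000 connection agreements
E2K3_CA_VERSION = 16973842  # 0x01030012; stamped after upgrading to the Exchange 2003 ADC

def e2k3_service_can_run(agreement_version):
    """An Exchange Server 2003 ADC service refuses any connection
    agreement whose versionNumber is below 16973842."""
    return agreement_version >= E2K3_CA_VERSION

def e2k_service_can_run(agreement_version):
    """An Exchange 2000 (or Windows 2000) ADC service expects the older
    major version and cannot process the upgraded agreements."""
    return agreement_version == E2K_CA_VERSION
```

Neither service runs the other's agreements, which is why every pre-Exchange Server 2003 ADC must be upgraded in place rather than left to coexist.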
Eventually, all ADC services must be upgraded prior to the installation of the first server that runs Exchange Server 2003; otherwise, Exchange Server 2003 setup may not proceed. Administrators must perform an "in-place upgrade" of all pre-Exchange Server 2003 ADC services prior to installation, so that all legacy connection agreements are phased out.

If an Exchange Server 5.5 object exists but its primary Windows account (assoc-NT-account) resides in a Microsoft Windows NT® Server 4.0 domain or in a separate forest, a properly configured connection agreement directs the ADC service to perform object creation. By contrast, if the mailbox's primary Windows NT account already exists within the forest, the ADC performs object matching and stamps the pre-existing user accounts during the initial replication cycle.

By default, object-creation connection agreements cause the ADC service to create a disabled account. In the past, the Exchange 2000 Server ADC would generate a disabled security principal (that is, a samaccountname, or pre-Windows 2000 logon name) that matched the Exchange Server object's alias name. This caused problems for a couple of reasons:

• Customers often had the misunderstanding that ADC object creation was an easy way to migrate Windows NT Server 4.0 accounts to Active Directory. Although it wasn't proper, customers would enable these "placeholder" accounts generated by the ADC, not knowing that doing so would cause delegation problems, public folder ACL conversion problems, and other permissions problems that might prevent logon or mailbox moves. For more information about problems caused by enabling placeholder accounts, see Microsoft Knowledge Base article 316047, "XADM: Addressing Problems That Are Created When You Enable ADC-Generated Accounts."

• ADC-generated objects conflict with the Active Directory Migration Tool's (ADMT) ability to migrate user logon names from their source domains.
(This situation only applies if ADMT is used after the initial ADC replication, and if the alias name equals the user logon name in the source domain.) When ADMT attempts to create user objects in the target domain, it encounters conflicts with the ADC-generated accounts. ADMT was designed to resolve these conflicts by appending -1 to each samaccountname it generates, thus satisfying samaccountname uniqueness within a domain. Although ADMT is a proper and supported migration method for user accounts, the -1 object causes an issue for customers, because their users prefer not to append a -1 when they log on. One might assume that ADClean can be used to merge the two objects into a single account, thereby resolving this issue. However, ADClean excludes the samaccountname when it merges the disabled object's attributes into the ADMT-generated account. In the end, users are still stuck with different user logon names (for example, a user who was accustomed to logging on to the source domain as "johnsmith" must now log on as "johnsmith-1").

Exchange Server 2003 resolves both of these problem scenarios by randomizing the samaccountname (that is, the pre-Windows 2000 logon name) whenever the ADC generates a placeholder object. A typical user logon name for an ADC-generated account is "ADC_BDZQOKNUIZDWPPHG", where the characters following the underscore are always randomized. First, because this random user name is difficult to use at any logon prompt, it discourages administrators from improperly enabling the placeholder accounts generated by the ADC. Second, the random name will not cause naming conflicts when future ADMT migrations try to create new users in the Exchange Server 2003 forest. The following figure shows how this looks on the actual account.
Screenshot Of ADC Random Logon Name

Although the Exchange Server 2003 ADC corrects this issue during object creation, any existing objects that were created before the ADC was upgraded may still need to be renamed. CleanSAM.vbs, a script used by Microsoft Product Support Services to correct the above issues in Exchange 2000 Server topologies, may be used against accounts residing in Exchange Server 2003 environments that were upgraded from Exchange 2000 Server. The script may be obtained by contacting Microsoft Product Support Services. The CleanSAM script also resolved the behavior where, in some instances, ADMT would "match" with the disabled accounts and subsequently merge on top of them, thereby enabling the accounts but failing to clear the msExchMasterAccountSID attribute.

There is one exception that prevents the new ADC from randomizing samaccountnames: when the ADC replicates an Exchange Server 5.5 "resource" object. (Specifically, the Exchange Server 5.5 object contains the value NTDSNoMatch on custom attribute 10.) This is because the Exchange Server 5.5 object's associated Windows NT account is already matched with some other Exchange Server 5.5 object that is a "primary." Because the ADC randomizes for that primary mailbox, the primary mailbox will be merged with the ADMT account. For the resource mailbox, however, the Exchange Server 2003 ADC creates the account the same way as in Exchange 2000 Server; that is, with the samaccountname the same as the alias, because ADMT is not expected to see a conflict. (There was no unique associated Windows NT account for the resource mailbox in the first place!)

Connection agreements no longer use Windows NT Server 4.0 Challenge/Response (NTLM) for authentication to domain controllers. Instead, the Exchange Server 2003 ADC uses Kerberos for authentication.
This change was made for the following reasons:

• The ADC holds many valuable credentials, with privileges up to the domain administrator or even enterprise administrator level.
• NTLM and unsigned Lightweight Directory Access Protocol (LDAP) are susceptible to replay attacks.

This change only affects the ADC server during its communication with a domain controller running Windows 2000 Server SP3 or later. The following figure illustrates the hard-coded changes to the ADC manager.

Screenshot Of ADC Kerberos/Signed LDAP CA Setting

In this figure, Exchange Server 5.5 LDAP communication remains on the Windows NT Server 4.0 Challenge/Response authentication mechanism. Only Windows 2000 Server SP3 or Windows Server 2003 domain controllers support signed LDAP on connection agreements.

It is worth noting that the connection agreements created automatically by ADC Tools (either public folder or recipient connection agreements) carry an additional filter that makes them replicate only objects for a specific site naming context. In other words, suppose you have a domain called "Root" and a site called "Mixed". You run ADC Tools, the connection agreements are created for this site, and everything replicates well. Next, you create a new administrative group; say it is a pure Exchange 2000 Server/Exchange Server 2003 administrative group and you call it "AG2". With an ADC Tools-created recipient connection agreement in place, you will notice that new mailbox-enabled users in the same Active Directory domain that are in site AG2 do not replicate to Exchange Server 5.5/Site Replication Service (SRS), even though those AG2 user mailboxes might be in the same Organizational Unit as user mailboxes for the Mixed site that do replicate.
The reason can be found in the following attribute of an ADC-created recipient connection agreement:

1> msExchServer1SearchFilter: (&(|(objectclass=user)(objectclass=contact)(objectclass=group))(|(legacyExchangeDN=/o=My Organization/ou=Mixed/cn=*)(legacyExchangeDN=ADCDisabledMail*)(isDeleted=TRUE)));

This attribute shows that the recipient connection agreement filters on the site called Mixed; therefore, it replicates only mailboxes that are in the Mixed site. In contrast, the following is the same attribute from a recipient connection agreement that was created manually (not through ADC Tools):

1> msExchServer1SearchFilter: (|(objectclass=user)(objectclass=contact)(objectclass=group));

This recipient connection agreement replicates objects that belong to any site, as long as they are in the containers that the ADC replicates. In this situation, you would need to either create a manual connection agreement (which will not include a per-administrative group filter), or rerun ADC Tools so that it discovers the new administrative group and creates another user connection agreement whose search filter includes objects from /ou=AG2.

It is worth noting that recipient connection agreements created by ADC Tools typically point from "domain" to "site" levels and vice versa. This is done to ensure that everything actually gets replicated. Replication will happen, but because of these domain-to-site (and vice versa) agreements, additional containers may be created on both sides of the connection agreements. In the following figure, note that the containers on those connection agreements are "top level"; in other words, the domain and the site.
Screen Shot of Auto Created Connection Agreements

This might cause additional containers to be created on both sides of this recipient connection agreement (depending on where mailboxes and mailbox-enabled Active Directory accounts are stored). The following figure shows an example.

Screenshot of additional containers created on RCA

Although these additional containers may appear to "litter" each directory, there is nothing technically wrong with having them, because users who view the global address list (GAL) do not see the Exchange Server 5.5/Organizational Unit hierarchies. Additionally, administrators may easily move objects between Organizational Units to their desired locations, and those objects will still replicate, because they do not fall outside the domain-level search scope of the recipient connection agreement. (Previously, administrators moving objects under manual recipient connection agreements would often cause their GALs to become out of sync.) Keep in mind that the goal of connection agreement creation in ADC Tools is to get the GALs replicated; it obviously cannot guess how administrators want to organize their administrative structures.

One thing to note: the ADC does not necessarily match containers that have already been created. For example, suppose you have a simple Organizational Unit such as Users and a mailbox-enabled user such as Administrator in Exchange Server 2003, and the connection agreements were set up by ADC Tools so that they source the domain and the Exchange site. When the connection agreement runs, it creates a Users sub-container in Exchange Server 5.5 under the Recipients container. When the connection agreement runs again, it does not find the existing Organizational Unit called Users on the Active Directory side; instead, it creates a sub-Organizational Unit underneath the Organizational Unit you specify as the default staging area. This problem, again, is only cosmetic.
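The practical effect of the per-site msExchServer1SearchFilter shown earlier can be illustrated in a few lines. This sketch (Python) uses fnmatch as a stand-in for LDAP wildcard matching; the DN values are the illustrative ones from the example above, not real directory data.

```python
# Why an ADC Tools recipient connection agreement scoped to the "Mixed"
# site skips mailboxes homed in "AG2". fnmatch stands in for the LDAP
# wildcard match; the DNs are the illustrative example values above.
from fnmatch import fnmatch

SITE_FILTER = "/o=My Organization/ou=Mixed/cn=*"  # from msExchServer1SearchFilter

def replicated_by_mixed_ca(legacy_exchange_dn):
    """True if the per-site filter would pick up this legacyExchangeDN."""
    return fnmatch(legacy_exchange_dn, SITE_FILTER)

mixed_user = "/o=My Organization/ou=Mixed/cn=jsmith"
ag2_user = "/o=My Organization/ou=AG2/cn=jdoe"

print(replicated_by_mixed_ca(mixed_user))  # True
print(replicated_by_mixed_ca(ag2_user))    # False: needs its own CA, or a manual one
```

The second user falls outside the Mixed site's wildcard, which is exactly why rerunning ADC Tools (to generate an /ou=AG2 filter) or creating an unfiltered manual agreement is needed.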
© 2014 Microsoft. All rights reserved.
Import duplicate company data to use for testing [AX 2012]

Updated: February 1, 2013

In Microsoft Dynamics AX 2009, the duplicate company feature was used extensively to copy data from one company to another. The feature was also used to build development or test environments, and deployment scenarios that moved data from one environment to another. However, this feature became obsolete in Microsoft Dynamics AX 2012. We recommend that you use the Microsoft Dynamics AX data export and import feature to support scenarios that previously required the duplicate company feature.

Company, or DataArea, can no longer be used as a data security boundary. Because of changes that were made to the organization model in Microsoft Dynamics AX 2012, data is no longer related to a company or legal entity in a simple relationship that is defined by setting the SaveDataPerCompany metadata property of a table to Yes. Because data relationships are now defined through the Relations metadata property, it is not easy to duplicate all data that is related to a legal entity. Therefore, it may not make business sense to duplicate the data that is related to a legal entity. For example, we created organizational hierarchies in which legal entities and business units have a complex relationship. There is no parent/child relationship between business units and legal entities; therefore, duplicating business units based on legal entities would be erroneous.

Follow these steps to use an existing legal entity as a template for other legal entities:

1. Create a legal entity to use as a template. For more information, see Create or modify a legal entity.
2. Set all configuration data for the legal entity.
3. Use the Microsoft Dynamics AX data export and import feature to export the legal entity to a .dat file, such as TMP.dat.
   a. Before you export data, you must create a definition group. For more information, see Create definition groups for import and export.
   b. To export configuration data, include the following table groups: Reference, Parameter, Group, Framework, and Miscellaneous. To export master data, you must also include the tables that are in the Main table group. Do not include the tables that are in the Transaction, Transaction header, Transaction line, Worksheet, Worksheet header, and Worksheet line table groups; these tables contain transaction data. For a detailed list of all the tables in a table group, see Table group reference.
4. In the new environment, create new legal entities, and then import the .dat file that you created into each entity individually. Shared and per-company data is imported. When other legal entities are subsequently imported into the new legal entities, the shared data is merged. For more information, see Import data from another instance of Microsoft Dynamics AX.

You can use the Microsoft Dynamics AX data import and export feature to import and modify transaction data. However, it can be difficult to create a duplicate environment for transaction data if the SaveDataPerCompany property of tables is set to No. Instead, we recommend that you use the backup and restore functionality in Microsoft SQL Server to build demo environments in which minor configuration changes can be made to illustrate specific Microsoft Dynamics AX features.
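The table-group guidance in step 3b amounts to a simple include/exclude filter over table groups. The sketch below (Python) is purely conceptual; the example table names are invented, and only the group names come from the guidance above.

```python
# Conceptual sketch of the export guidance above: include the configuration
# table groups (plus Main for master data) and exclude transactional groups.
# The example table catalog is invented for illustration.

CONFIG_GROUPS = {"Reference", "Parameter", "Group", "Framework", "Miscellaneous"}
MASTER_GROUPS = CONFIG_GROUPS | {"Main"}
EXCLUDED_GROUPS = {
    "Transaction", "Transaction header", "Transaction line",
    "Worksheet", "Worksheet header", "Worksheet line",
}

def tables_to_export(catalog, include_master=True):
    """Return the table names whose group qualifies for the export."""
    allowed = MASTER_GROUPS if include_master else CONFIG_GROUPS
    return sorted(name for name, group in catalog.items()
                  if group in allowed and group not in EXCLUDED_GROUPS)

example_catalog = {            # invented table names
    "CustParameters": "Parameter",
    "CustTable": "Main",
    "CustInvoiceJour": "Transaction header",
}

print(tables_to_export(example_catalog))                        # ['CustParameters', 'CustTable']
print(tables_to_export(example_catalog, include_master=False))  # ['CustParameters']
```

The transaction-header table is dropped in both cases, matching the rule that transaction data should not be part of the template export.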
I'm using two types of footnotes in my document, but when I use the two types on the same page, the space between the two blocks is too large (the paper size is small, so it is problematic). Is there a way to reduce the space between the two blocks of notes?

My minimal working example is:

\footmarkstyle{\textbf{#1--} }
\lipsum*[1]{}\footnote{Short footnote}
\lipsum*[2]{}\footnote{Another short footnote}

Accepted answer:

Are you trying to get rid of all space between the two levels of footnotes? LaTeX fights pretty hard to keep some space there, and I think this is pretty wise, in fact. In your case, there are two things you could do to get rid of some excess vertical space (their effects are discussed in the comments below):

\setlength{\footskip}{0pt}
\setlength{\footnotesep}{0pt}

Also, given that you defined a footnote series 'G', you could do something like this:

\supersmallskipamount=1pt plus 1pt minus 1pt
\skip\footinsG=\supersmallskipamount% <-- see memoir.cls, starting where 'newfootnoteseries' is defined

Try putting \the\footskip and \the\footnotesep in your first two footnotes, then try compiling with the two \setlengths commented out and not commented out.

Comments:

Thanks for your response. It appears that the \setlength{\footskip}{0pt} line decreased the space between the end of the footnote and the footer; \setlength{\footnotesep}{0pt} indeed decreased the space between the two blocks of notes in the way I intended, but not all of it, as it is supposed to. Unfortunately, that also decreased the space between two or more G-series (the numbered) notes, so it looks kind of odd. I suspect that some lengths related to the footnote rule could decrease the space without that effect. Do you know which ones are involved? – marlonob Oct 4 '12 at 15:21

The second block of code only works when there are only G-series footnotes. Sorry for my English; I hope it was understandable. – marlonob Oct 4 '12 at 15:22

@marlonob – yes, very true. If you had defined an 'H' series, you would need to do the same with \footinsG. Or you could redefine the long \newfootnoteseries command from memoir.cls. I chose this route because it made a shorter answer. If you like, I can add how you would want to redefine \newfootnoteseries. – jon Oct 5 '12 at 0:39
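Pulling the lengths discussed in the answer and comments together, a minimal stand-alone illustration might look like the following. This is a sketch, not the asker's actual setup: it uses the article class for brevity, the values are arbitrary, and for memoir footnote series the corresponding skip registers (such as \skip\footinsG, mentioned in the answer) follow the same pattern.

```latex
\documentclass{article}
\usepackage{lipsum}
% Tighten the vertical space around and between footnotes.
% The values here are illustrative; adjust to taste.
\setlength{\skip\footins}{4pt plus 2pt minus 1pt} % body text <-> footnote block
\setlength{\footnotesep}{1pt}                     % between successive footnotes
\begin{document}
\lipsum[1]\footnote{Short footnote.}\footnote{Another short footnote.}
\end{document}
```

Using a rubber length for \skip\footins, as above, keeps some of the flexibility that LaTeX "fights pretty hard" for while still reducing the gap.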
Tolkien Gateway

Dome of Stars

The Dome of Stars refers to two great starred domes. The first Dome of Stars was that of Valinor, created by the Vala Varda (also known as Elbereth); the name referred both to the great domed building in Valimar and, poetically, to the stars of the skies. The second Dome of Stars was the great hall in the King's city of Osgiliath, capital of Gondor, which Elendil built on top of the great bridge across the Anduin, and where the chief palantír was housed. This second Dome of Stars was destroyed during the civil war of the Kin-strife.
Ticket #1878: Increase minimum perl 5 version to 5.10.0

Reporter: coke | Owner: cotto
Type: RFC | Status: closed | Priority: normal | Milestone: 2.11
Component: configure | Version: 2.10.0 | Severity: medium | Resolution: wontfix

Description:

See: #1175

We are currently bundling the p5 Pod::Simple & Pod::Escapes in parrot. These modules became core in 5.9.3.

5.8.4  2004-Apr-21
5.9.3  2006-Jan-28

We could jump through hoops if necessary to keep our perl version stable and require versions of Pod::Simple, but this seems much simpler.
I will be in the Republic of Georgia in the Caucasus in a few months and am considering a side trip to Russia, either via the Black Sea to Sochi or across the one land border that is apparently open to foreigners. But considering the recent history and delicate political situation between these countries, is it possible for a foreigner to get a visa to Russia while in Georgia? If not, what about Armenia? If not, what other possibilities exist?

Accepted answer:

The Embassy of Switzerland in Georgia, Russian Federation Interests Section, issues Russian visas in Tbilisi, but unfortunately not tourist visas. So you must apply for a visa in Australia.

Answer:

Everything I've been told about Russian visas (and I'm in Russia now) is that you can ONLY apply for a visa from your country of citizenship, OR your country of residency, if the two are different. This was handy for me as a New Zealander living in the UK. From http://www.visatorussia.com/russianvisa.nsf/FaqNew.html : "If you apply for a visa not in your own country, a copy of your residence or work permit in the country of your temporary residence may also be required. In this case our operators will inform you about this additional requirement." This would seem to imply that yes, you can apply from Georgia IF and only IF you are currently working there. However, I'd suggest contacting visatorussia.com, as they may have some additional tricks.

Comments:

Hmm, if I have to apply in Australia then definitely no Russia for me this year, since I'm already in Korea and flying to Turkey tomorrow )-: – hippietrail Jun 23 '11 at 11:24

All you can do is try to consult the Russian consul in the country you are currently in. But I think this can be a real problem. – VMAtm Jun 23 '11 at 11:35

Russian consulates can often be quite flexible, though I would recommend finding someone who speaks fluent Russian for such negotiations. – JonathanReez Jan 18 at 11:23
Video Game: Sailor Moon Another Story 2

A fan-sequel video game created with RPG Maker XP by a fan of the series. It takes place six months after Sailor Moon Sailor Stars in the manga, and Chibi-Usa is still around. It is a fan sequel to Sailor Moon: Another Story, with basically a similar story to the first but somehow better. The game can be downloaded here in both Italian and English.

This Work Contains the Following Examples:

• Awesome, but Impractical: Eternal Sailor Moon and Super Sailor Chibi Moon. As with the original, the stat buff really isn't that great for characters whose abilities are built more for support, but at least Eternal Sailor Moon gets more defense this time.
• Awesome yet Practical: Sailor Star Maker's attack and Sailor Mercury's debuffing Shabon Spray.
• Badass: Every fighting character.
• Badass Crew: The Sailor Senshi, of course.
• Combination Attack: The Senshi get these in battle, as in the first game.
• Continuity Nod: There are a large number of them, mostly toward the anime and the first game. The manga gets a very tiny nod in Crystal Tokyo, when Chibi-Usa is mentioned to be 910 years old by a civilian.
• Continuity Snarl: Like the original Another Story, it mixes anime and manga elements, along with a small reference to the previous game and Sera Myu.
• Curb-Stomp Battle: Sometimes you can beat the enemies with one-hit kills; sometimes they do it to you. Unlike the first game, this is a lot more balanced, so you can be level 8 and still take on some low-level monsters who put up a good fight longer than a one-hit kill.
• Gratuitous Italian: As the English version was translated from Italian, some words were left alone.
• Gratuitous Japanese: Where applicable, certain words were left alone or purposely rendered in Japanese, like 'Senshi' for example.
• Guide Dang It: Certain things can be missed very easily if you do not search around, including the four Asteroid Senshi's equipment that boosts their unique attacks' power.
• Level Grinding: Required for nearly every boss, especially Galaxia and Tiamat!
• Multiple Endings: There are three known endings: Golden Ending, Normal Ending, Bad Ending.
• Random Encounters: Various monsters to fight.
• Video Game Geography: Type 2, since it seems like you can walk miles in mere screen changes.
• Voice Actors: Several fans were requested to do the voice acting for several characters who had no voice when doing attacks.
Category:Boys with Girls' Names

From Uncyclopedia, the content-free encyclopedia

Revision as of 09:03, June 20, 2011 by Haydrahlienne (talk | contribs)

Pages in category "Boys with Girls' Names"
global_01_local_0_shard_00000017_processed.jsonl/23283
What is the best experience to use for a photo tagging feature? Should tags be shown when hovering over the photo, or should there be a separate button on the photo page that enables tagging?

Comments:

When you say "tagging", do you mean tagging specific regions of an image (e.g. tagging faces on Facebook) or just associating keyword metadata with the image? – Matt Obee Oct 22 '12 at 9:26

Yes, tagging faces or items in a photo, like Facebook's tagging. – Jamila Hyasat Oct 22 '12 at 9:33

Accepted answer:

The best photo tagging experience is no tagging: the photos are just "magically" tagged correctly, either by a program or by someone else. Then comes the first fallback: community tagging.

It depends on the context whether tagging mode should be enabled by default or not. I guess if showing already-tagged items is on by default, then tagging mode should be on by default as well on desktop interfaces, where hover mode actually exists. You could, for example, show a frame around untagged faces on hover and say "Who's this?" instead of a person's name. On the more and more prevalent mobile and tablet (touch) interfaces, however, there's no such thing as hover; accidental taps could happen, so tagging is better left off by default, except when the whole application is mainly about tagging.

But no matter which mode you choose, it's always recommended to make computers help people: even if a computer cannot recognize a face for sure, nowadays we can more or less recognize where a face is. Therefore, computers can pre-calculate the rectangles where it is likely that there's a face.

Comments:

Decent advice if you've got a killer server and a AAA-class programmer. Let's be real: face detection is not an easy algorithm; even Google and Facebook fail to do it successfully at times. A small business would be hopelessly bad at the task. "Magically tagged correctly" is so laughably difficult that you might as well have said that the business can magically print money. Yes, I am aware that Google does have a face recognition program, but they also practically print money. – VoronoiPotato Oct 22 '12 at 13:26

I apologize if my comment was rude; it's entirely possible that you might not have known the relative difficulty of the task you suggested. I come from a programming background, and people often suggest very difficult (or sometimes impossible) tasks as if they're trivial. – VoronoiPotato Oct 22 '12 at 13:31

@VoronoiPotato We risk going off topic but, while it's difficult to do reliably in mission-critical situations, you can achieve very basic facial recognition (suitable for this use case) using client-side scripting. Although this is of course limited to "we think there's a face here" rather than "we think there's a person called Bob here". – Matt Obee Oct 22 '12 at 13:52

Recognizing that it is a face, yes; recognizing whose face, no. Not to mention that a poorly implemented facial recognition system can be a social faux pas. For example, failing to account for varying skin tones or eye shapes can be quite (unintentionally) offensive. The human face is pretty complex, and an intimate part of our social interactions. It's a tall order to expect a computer to handle the niche scenarios. It would not feel good if the machine couldn't recognize your face because of a cleft palate. EDIT: I think we're [mostly] on the same page. – VoronoiPotato Oct 22 '12 at 14:00

@VoronoiPotato: I think we both agree that the best experience is when these algorithms are there. After that, I started to list the "escape routes", mentioning recognizing simply that there is a face. I do agree it's a hard topic, and it takes a lot of neural networks and perhaps it's nearly magic, but an OpenCV-based classifier is not unreachable even for a startup (it detects just the existence of a face). Still I hold: the best experience is when it's not manual labor. – Aadaam Oct 22 '12 at 16:14

Answer:

I think it honestly depends on the kind of website or program you're trying to run. If the tag depends on a specific subject in the image that the viewer could not otherwise identify, such as a face or a type of fruit that isn't common knowledge, then hover tagging is essential. Otherwise, if it's to denote a general mood or general subject matter, such as Yosemite park or "sorrowful", then the button is more helpful. You might find that your users value both, as the general tags can be very useful for sorting.
Commentary & Opinion 3:50 pm Thu October 17, 2013 Rabbi Dan Ornstein: A New Internet Idea: Google Divine Images Below is my imaginary letter to Sergei Brin, the co-founder and owner of Google. Dear Mr. Brin:     As an avid consumer of Google’s search engine services, I thank you for creating this enormously powerful information tool.  Your invention is an outstanding example of your passionate commitment to unfettered access to information in the pursuit of strengthening international democratic freedoms.  As a Jewish man whose family left the former Soviet Union to seek those freedoms, you know better than most people how they foster and protect human dignity.       It is the protection of human dignity that moves me to write to you.  The Hebrew Bible teaches that God created human beings in the divine image and likeness.  This idea is a foundation of our most cherished beliefs concerning human freedom and equality because it posits that every person is a reflection of God and therefore deserving of dignified treatment.  It is so important in Jewish ethics that one ancient sage famously asserted that it is the most important principle of Judaism.  Another famous Jewish teaching explained it by contrasting God’s creation of human beings with a coin maker stamping coins.  The latter can make many silver dollars but they will look exactly the same.  When God, the ultimate Coin Maker “stamps” human beings, each of us looks different from the other, a powerful way of expressing how miraculous human uniqueness and diversity are. This coin making concept has ancient roots in the practices of near eastern monarchs who would stamp official documents with their royal seals - their images - that identified them.  Every person is like a royal seal representing God’s presence on earth.  This concept steadfastly holds that human beings have unconditional worth and a right to each other’s respect.  Google already plays a role in fostering that respect by fostering human freedom. 
 I suggest another role for it to play in this regard.  My searches on  Google Images using the key words “divine image” result mostly in pictures of Jesus, angels in clouds, or  images of the comedian and drag queen, Divine.   Why not create Google Divine Images, a search engine devoted entirely to honoring humanity by presenting the portraits and stories of individual human beings, in order to highlight their suffering, struggles and successes?    Imagine encouraging Google users throughout the planet to upload the photos, names and brief histories of men, women and children hiding behind the numbing, faceless statistics of populations brutalized or destroyed by racism, misogyny, and political repression.  They could also highlight individuals in the tiniest corners of the world, whose decency and kindness make a difference to even one other human being.  Flag, categorize, and archive every uploaded photo and story in order to create a massive “divine image” database.  Add a link to Google Divine Images on the Google applications section at the top of every Gmail page.  As I browse, I would be able to look up and learn about people listed by country, conflict, and compassionate or courageous types of behavior.  Judaism teaches that the divine image is enhanced or damaged whenever any person’s life is enhanced or damaged.  Each image and story would stand as tribute or as terrifying testimony to that teaching.     The great scholar and social activist, Rabbi Abraham Heschel wrote that “The mark of Cain in the face of man has come to overshadow the likeness of God.”  Your company’s motto is “Don’t be evil.”  Human evil is the toxic descendant of the world’s first act of brutality, when Cain murdered his brother, Abel, thus permanently damaging God’s image and likeness.  Mr. 
Brin, whether or not you accept my proposal and whatever you believe about God, I challenge you and others with your level of immense power and influence to  never stop helping to restore that image and likeness.  No less than human survival depends upon it.  Related program:
global_01_local_0_shard_00000017_processed.jsonl/23313
Movie Reviews 12:56 pm Sun February 17, 2008 City of Men A labyrinth of life "Poverty, to be picturesque, should be rural. Suburban misery is as hideous as it is pitiable." Anthony Trollope Whew! I'm out of breath following youth gangs in the favela of Rio as they fight for a city hill as if they were in WWII's Pork Chop battle. Machine guns rule; women do not (contrary to the stereotype of matriarchal Latin society). It may not be City of God, the frenetic precursor using two of the same actors, but it has the Battle of Algiers' claustrophobia, which had a better-appointed Kasbah yet the same feeling of people darting around corners to avoid ever present Death. The two central characters, teenage boys trying to keep their friendship and families in tact while around them chaos rules, veer between themselves and annihilation as they fight off the temptation to carry weapons like their friends yet can't find a way to survive without guns. There is more, however, than just gang warfare because sub-textually director/writer Paulo Morelli identifies a root cause of the dislocations?absentee fathers. (Heck, even the current Spiderwick uses this powerful ingredient.) Much of the film is dedicated to one of the boys finding his father and the other coming to terms with the murder of his. While the former is adequately explored, the latter could have used much more explanation for the boy's suddenly joining the gang's war. Could it have been the murder of his father? I can't tell you. The requisite hillside shots of the Rio harbor help the figurative contrast between the rich Brazilian scenery and the squalor of the barrio. Both conditions, of course, help to emphasize the globally accurate distance between the have's and the have not's, a condition the present economic global downturn is exacerbating. City of Men is a city of all men, racing through the labyrinth of life trying to survive, and losing. 
City of God is a movie that contradicts its name; City of Men is spot on: God help us.
Twitter now has an 'activity' tab where the 'retweets' tab used to be. How can I find out which of my tweets have been retweeted, and by whom?

1 Answer

The Retweets of Mine page still exists for the moment, but I believe the target you are looking for in the current interface is Twitter > Connect > Interactions, as listed in the help center:

Go to Connect in the top navigation bar. In the Interactions section you will see all activity concerning your Tweets — including which have recently been retweeted and by whom.
My recent activity doesn't show the part that says I am now friends with somebody. How do I get it back? I have already tried going to options at the bottom of my wall. For most people, if you go to your wall, scroll right down to the bottom of your page and click on "Edit options" on the right-hand side, a box should open saying "Edit your profile story settings" and you just click the x beside friending activity. The problem I'm having is that it says I have no stories hidden.

do you have the Timeline profile or still the old one? – Isuru Jan 24 '12 at 18:01
the old one still – Mary Jan 24 '12 at 18:06
Have you hidden your friend list from your friends? If yes, then set the privacy setting of your friend list to Friends and the activity should start showing up on your profile once again. – input Jan 26 '12 at 17:42
What about if u had the timeline how would u do it then – user20154 May 16 '12 at 23:06

3 Answers

Go to your activity log. At the top it will say posts and apps; switch to all. Now you will literally be able to see everything you did, including the things you searched for on Facebook and the profiles you viewed. You will also find friending activity you hid, so you can unhide it.

1. Go to your Timeline.
2. Click: Activity Log → Posts and Apps → Friends.
3. You can see "Friend activity can show up in". Click the settings icon next to this, then check: Recent Activity

I had the same problem, and just managed to solve it. This is what you do. Go to your Profile and click on Activity Log. In the long list on the left-hand side, under Photos, Likes, Comments, there's an option to expand to get the rest of the list by clicking on MORE. Once it's done that, find the option that says FRIENDS. A page will load; next to where it says 'who can see your friends list' there is a symbol that looks like a left-pointing arrow (really hard symbol to have to try to describe!). It'll open up a dialog box with 4 options to tick/untick. One of them is Recent activity. Hey presto, your friending activity is once again visible on your Profile page. :)
I've been using Google Latitude for a while now, and although the beta dashboard gives some insight into what I've been doing and where I've been, it's not quite as extensive as I'd like. Because of this I'm wondering (and googling quite unsuccessfully) if there are webapps where I can upload the KML file that I can export from Latitude to get fancier statistics/images. I'm especially keen to visualise specific trips, for example, or just to find the "hotspots" of my daily travel, etc. Any kind of visualisation or statistics that can be harvested I'm keen on seeing/trying. I'm not looking for anything particular, as this is more or less to fulfil my curiosity :-).
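The thread has no answers, but if you end up mining the Latitude KML export yourself, it's plain XML. A minimal Python sketch (assuming the standard OGC KML 2.2 namespace; the inline sample below is a made-up stand-in for a real export) that pulls out all coordinate pairs:

```python
import xml.etree.ElementTree as ET

# Standard KML 2.2 namespace; Latitude exports are ordinary KML documents.
KML_NS = "{http://www.opengis.net/kml/2.2}"

def extract_coordinates(kml_text):
    """Return a list of (lon, lat) pairs from every <coordinates> element."""
    root = ET.fromstring(kml_text)
    points = []
    for coords in root.iter(KML_NS + "coordinates"):
        # Each whitespace-separated entry is a "lon,lat[,alt]" tuple.
        for entry in coords.text.split():
            lon, lat = entry.split(",")[:2]
            points.append((float(lon), float(lat)))
    return points

# Tiny stand-in document; a real export would contain many Placemarks.
sample = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark><Point><coordinates>151.2093,-33.8688,0</coordinates></Point></Placemark>
    <Placemark><Point><coordinates>144.9631,-37.8136,0</coordinates></Point></Placemark>
  </Document>
</kml>"""

print(extract_coordinates(sample))
```

From the resulting (lon, lat) list you could bin points into a grid to find travel "hotspots", or feed them to any plotting library.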
From Eclipsepedia

Simple Example - student

The following are the minimal requirements for this example:

1. Installation & Configuration
• Install GlassFish
• Check out student example from GIT
• Database connectivity
• GlassFish - Datasource configuration
• Verify config
• Deploy web application

2. Running the Example
• View metadata
• Create entity
• Update entity
• Query entity
• Switch between XML and JSON
• Customize XML/JSON representation using MOXy Bindings
• Delete entity

Installation and Configuration

• javax.persistence.jar
• org.eclipse.persistence.antlr.jar
• org.eclipse.persistence.asm.jar
• org.eclipse.persistence.core.jar
• org.eclipse.persistence.dbws.jar
• org.eclipse.persistence.jpa.jar
• org.eclipse.persistence.jpa.jpql.jar
• org.eclipse.persistence.jpa.modelgen.jar
• org.eclipse.persistence.moxy.jar

Note: If you want to see JPA-RS logs, add a logger for org.eclipse.persistence.jpars. Currently exceptions are logged at the "FINER" log level, so configure the logger to FINER or FINEST. Use GlassFish Admin Console ➾ Configurations ➾ default-config ➾ Logger Settings ➾ Log Levels tab ➾ Add Logger to add a logger for JPA-RS.

git clone git://

Configuring JDBC Connection Pool

JDBC Resource

4. Launch Eclipse. Select File ➾ Import ➾ Maven ➾ Existing Maven Projects, hit Next and point Root Directory to the student folder. Hit Finish.
5. Build the student project. Right-click on the student project and select Maven ➾ Update Project..., then click OK.

You are now ready to run the example.
Running the Example

Launch Chrome and Postman.

Get metadata

Execute a GET http://localhost:8080/student.web/persistence/v1.0/jpars_example_student/metadata

Create a student with a course

Execute a POST http://localhost:8080/student.web/persistence/v1.0/jpars_example_student/entity/Student/ (with the following body as an example):

{
  "id": 65,
  "name": "Jane Smith",
  "courses": [
    {
      "name": "math"
    }
  ]
}

Execute a named query

Execute a GET http://localhost:8080/student.web/persistence/v1.0/jpars_example_student/query/Student.findAll to execute the named query findAll as defined in the eclipselink.example.jpars.student.model.Student entity.

Switch between JSON and XML

Add a header called Accept with the value application/xml and execute the findAll query again. This time you will see the result in XML format. Changing the Accept header to application/json can also be used to accept/output JSON.

Customize JSON/XML representation using MOXy Bindings

By specifying a MOXy bindings file in persistence.xml, you can modify the format of the XML and JSON produced and accepted by your JPA-RS application. Uncomment the following line in persistence.xml:

<persistence version="2.0" xmlns="" xmlns:xsi="" <persistence-unit name="jpars_example_student" transaction-type="JTA">
global_01_local_0_shard_00000017_processed.jsonl/23322
The STEM Development Team

From Eclipsepedia

back to STEM Contents Page

The following is a list of current contributors to STEM

• Arik Kershenbaum worked part-time at the IBM Haifa Research Lab while he was a doctoral student at the Department of Evolutionary and Environmental Biology at the University of Haifa, Israel (personal website). He developed add-ons and applications for STEM in the fields of zoonotic disease spread, particularly vector-borne diseases. In addition, he was looking at other collaborative applications for the STEM framework in the fields of ecology and zoology, where ecosystems can be represented as a graph network. He has an undergraduate degree in Natural Sciences from the University of Cambridge in England, and is currently a postdoctoral fellow at the National Institute for Mathematical and Biological Synthesis (

• Christian Thoens was an intern in the Healthcare Research team at IBM Almaden and now works at the Federal Institute for Risk Assessment (BfR) in Berlin, Germany. Christian holds an MS degree in computer science from Bielefeld University. (

• Toshiaki Kurokawa is CSK Fellow at CSK Corporation in Tokyo, Japan ( He is also an Affiliated Fellow at the National Institute of Science and Technology Policy (NISTEP), Ministry of Education, Culture, Sports, Science and Technology (MEXT). He worked for Toshiba and IBM before joining CSK. He has been engaged in the research of programming languages, object-orientation, and standardization of metadata. His current interests include human resource development and Design Thinking. He is one of the founders of ICES (International Cooperation for Education about Standardization).

• Daniel Doerr is a graduate student in bioinformatics at Bielefeld University. His thesis project focuses on developing new methods in comparative genomics. Daniel first contributed to STEM during an internship in the Healthcare Research team at IBM Almaden.

• Kassaye Y. Yigzaw is a PhD student in Computer Science at the University of Tromsø, Norway. He first contributed to STEM during his MSc study in Telemedicine and E-Health at the University of Tromsø. Kassaye received his Bachelor degree in Electrical Engineering from Hawassa University, Ethiopia.

STEM Developers Emeritus

• Iris Eiron was a researcher at the IBM Almaden Research Lab before relocating to the IBM Research Lab in Haifa, Israel, where she continues to contribute to the development and implementation of a national health care information infrastructure. Together with Matthew Hammer and James Kaufman, Iris was one of the creators of the original version of STEM.

• Daniel Ford, Ph.D., was a committer and former project co-lead for STEM. He designed and implemented initial versions of STEM, including the core composable graph framework that gives STEM its ability to represent arbitrary models. He received his Ph.D. in Computer Science from the University of Waterloo.

• Matthew Hammer was an undergraduate at the University of Wisconsin, majoring in computer science with an interest in the field of programming languages. Mr. Hammer worked as an IBM research intern in the summers of 2003 and 2004. Together with Iris Eiron and James Kaufman, Matthew was one of the creators of the original version of STEM.

• Ohad Greenshpan is part of the Healthcare and Life Sciences group at the IBM Haifa Research Labs. Mr. Greenshpan is an MSc student in Bioinformatics at Ben-Gurion University, concentrating on Protein Folding algorithms and Structural Bioinformatics. Prior to joining IBM, Mr. Greenshpan was a member of the Genecards team at the Weizmann Institute of Science.

• Nelson A. Perez was a software engineer for the Healthcare Informatics Research Group at IBM Almaden. Nowadays, Nelson is mostly interested in software engineering, distributed computing, social computing, and web technologies. He holds an MS degree in computer science from the University of California at Riverside.
• Dirk Reuter, Ph.D., a research staff member at the Federal Institute for Risk Assessment (BfR), Germany, was an Eclipse Committer. He studied physics and received a Ph.D. in biochemistry from the University of Cologne. He has worked in electrophysiology, studying properties and behaviour of ion channels. He has also been involved in gene expression profiling projects on the Affymetrix platform.

• Joanna "Jo" Conant. Jo graduated from Middlebury College in 2003 and is now a medical student at the University of Vermont College of Medicine. She is considering a career in Public Health, though also exploring other specialties. An avid skier, Jo moved from the deserts of Phoenix, Arizona, to the mountains of Vermont, where she enjoys skiing 100+ days each year. She now lives in Warren, Vermont, with her husband and dog.

• Charles "Chuck" Hulse. Chuck graduated from Bucknell University in 1982, received his PhD in Chemistry from the University of Virginia in 1989 and his MD from the University of North Carolina at Chapel Hill in 1995. He completed his family medicine residency at the Department of Family Medicine of the University of Vermont College of Medicine in 1998. After serving as chief resident, he joined the faculty and is now an Associate Professor of Family Medicine. A native of Eastern Long Island, Chuck has an intense interest in nature and is an aspiring nature photographer.
global_01_local_0_shard_00000017_processed.jsonl/23325
Preparing to Upgrade to Exchange 2007

Microsoft Exchange Server 2007 represents a significant upgrade from Exchange Server 2003 and earlier versions. It's as big a jump as it was migrating from Exchange Server 5.5 to Exchange 2000 Server. You can upgrade from Exchange 2003 or Exchange 2000, but there's no direct upgrade path from Exchange 5.5 and earlier versions. If you need to upgrade from Exchange 5.5, you could upgrade from Exchange 5.5 to Exchange 2003 and then from Exchange 2003 to Exchange 2007, but that's a lot of work. Another method is to export all the mailbox data to PSTs, then import the PST information to the Exchange 2007 mailboxes. If you use this method, make sure that your mailboxes don't exceed 2GB, the maximum size of a PST.

As you know, Exchange 2007 will run on the x64 platform only, which probably means you'll need to purchase new hardware. If you're running Exchange 2000, one of the motivating factors to upgrade to Exchange 2007 is the lack of a (reasonably priced) Daylight Saving patch for Exchange 2000.

Symantec just released Backup Exec 11d, which supports Exchange 2007, so you can finally use this software to back up your Exchange 2007 server. If you install Backup Exec 11d, make sure to download the latest version from Symantec's Web site. The current release appears to be stable, but earlier versions of 11d had significant problems. If your backup vendor doesn't support Exchange 2007 yet, you can use NTBackup to create an Exchange 2007 backup to disk, then use your backup software to back up the contents of the server.

If your Exchange 2007 environment will have fewer than 400 users, to ease the pain of purchasing new hardware, consider virtualizing the server so you can run different virtual guests on the same host server. You can use either VMware Server or ESX Server as your virtualization platform because both platforms support x64 guests. You can't use Microsoft Virtual Server 2005 because it doesn't support x64 guests.
Two strategies that make Exchange 2007 more scalable are the x64 platform and the amount of memory that Exchange 2007 can use. With Exchange 2003/2000, you were ultimately at the mercy of your disk subsystem because the speed of your disk subsystem determined the ultimate performance of the Exchange server. With Exchange 2007, the strategy is to cache everything, which significantly reduces the load on the disk subsystem. However, you need a lot more memory to get good performance from the server.

For an Exchange 2007 mailbox server with fewer than 50 users, you could use VMware Server; however, you can only address a maximum of 3.8GB of memory for any virtual server guest. ESX Server lets you have a maximum of 16GB of memory for any virtual guest. If your mailbox server must support a larger number of users (more than 400), you should probably keep the Exchange 2007 server dedicated and not virtualize it. However, if you plan to have dedicated servers for specific roles (e.g., Client Access, Edge Transport, Hub Transport), you might be able to virtualize these servers. The new Exchange Management Console (EMC) runs rather slowly, so having fast hardware is vital to good performance of the EMC.

The roles a server can have in Exchange 2007 are much more granular than in earlier versions of Exchange. Depending on the size and requirements of your company, you can dedicate separate servers to different roles or consolidate most roles on a single server. With Exchange 2007, a server can have the following roles:

• Mailbox server
• Client Access server
• Edge Transport server
• Hub Transport server
• Unified Messaging server

Separating these roles allows Exchange 2007 to be much more scalable. For smaller installations, most of these roles will be placed on one server. Before you upgrade, make sure all your Exchange-related applications are compatible with Exchange 2007.
In addition to the backup software, make sure your antivirus solution and antispam solution are compatible with Exchange 2007. Check with your vendor ahead of time to ensure the packages you have are compatible with Exchange 2007.

You must have at least one Windows 2003 domain controller (DC) designated as your Schema Master, and your Active Directory (AD) domain must be in Windows 2000 Native Mode. Technically only the Schema Master role needs to be transferred to a Windows 2003 DC, but if you haven't transferred all the Flexible Single Master Operations (FSMO) roles to Windows 2003 DCs, now is a good time to do so. Although you can have Win2K DCs in AD, having all of your DCs running Windows 2003 will simplify the installation and maintenance of your Exchange server. If you have any Win2K DCs, when you try to install Exchange 2007 you might receive an error message that the DC must be running Windows 5.2 (Windows 2003). You can override the DC that the Exchange 2007 server is using by issuing the following command switches with the Exchange 2007 setup program:

<DVDDrive>:\setup.exe /mode:install /roles:HT,CA,MB,MT /domaincontroller:<Windows2003DC>

This command will run the Exchange 2007 setup program in installation mode with the server roles of Hub Transport, Client Access, Mailbox, and Management Tools using the specified Windows 2003 DC.

If you're migrating from Exchange 2003 or Exchange 2000 and you still have Win2K DCs, you might have difficulty moving mailboxes between your Exchange 2003/2000 server and Exchange 2007 via the EMC. You can override the DC by using the PowerShell move-mailbox command with the -DomainController switch. In addition, using the EMC to run any recipient task might also present problems if you have any Win2K DCs. The bottom line is to upgrade all your Win2K DCs to Windows 2003 before you attempt the Exchange 2007 installation. You'll have enough on your hands without having to learn new PowerShell commands in addition to the EMC.
Installing Windows 2003 SP2 will reduce the number of updates you'll need when you install Exchange 2007.

One curious item you'll notice after you've installed Exchange 2007 is the inability to create a mailbox using the Microsoft Management Console (MMC) Active Directory Users and Computers snap-in. You have to use EMC or PowerShell to create the mailbox, so usually creating a new account will be a two-step process: creating the AD account with Active Directory Users and Computers, then creating the mailbox using EMC or PowerShell. Although you can create AD accounts directly in EMC or PowerShell, you're limited in the parameters you can add to the new account, so I suspect that most administrators will create the AD account in Active Directory Users and Computers first.

Exchange 2007, in many respects, represents a quantum leap forward in scalability, functionality, performance, and features, but it has a definite learning curve.

Tip: Windows SharePoint Services 3.0 Notifications

If you're running Windows SharePoint Services (WSS) 3.0, and you suddenly stop receiving notifications when items are updated on the portal, check the Timer Tasks on the portal. Try deleting any existing tasks. I've seen corrupted tasks get stuck in the task queue, which prevents other tasks from properly executing.
WordPress Plugin Directory

Google Shared Contents

Google Shared Contents publishes the contents you shared with your Google Reader account. The plugin allows you to publish your shared contents.

What are the requirements for this Plugin?

You just need a Google ID (account). If you don't know what your Google ID is: log in to your Google Reader account and click "shared items" on the left sidebar. Then you will see a link with a path to the contents you shared before, as below:

The part of that path corresponding to the bolded data above is your Google ID.

How can I have a button to directly share some content from webpages with Google Reader?

Log in to your Google Reader account and click "Notes" on the left sidebar. Then you can drag and drop the sample button to any area you want on your browser. Or you can manually create a button that has a location as below (please remove the line spaces):

Compatible up to: 2.7.1
Last Updated: 2009-5-22
Downloads: 1,148
3 out of 5 stars
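Since the shared-items link carries the Google ID in its path, it can also be pulled out programmatically. A minimal sketch (the example link below is made up, as the plugin page doesn't reproduce the full URL; the only assumption is that the ID is a long numeric segment of the path):

```python
import re

def google_id_from_shared_link(url):
    """Extract the numeric user ID from a Google Reader shared-items URL.
    Assumes the ID is the longest run of 6+ digits anywhere in the URL."""
    digit_runs = re.findall(r"\d{6,}", url)
    return max(digit_runs, key=len) if digit_runs else None

# Hypothetical example link; a real one comes from the "shared items" page.
link = "http://www.google.com/reader/shared/12345678901234567890"
print(google_id_from_shared_link(link))
```

If the link's path layout differs, only the regular expression needs adjusting.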
Polar bears are no new kids on the block

Research confirms that polar bears have been through warming phases before (Source: Hansruedi Weyrich/

Bear facts

Polar bears evolved as a separate species far earlier than previously thought, according to a new genetic study, which adds to worries about their ability to adapt in a rapidly warming world.

Research published today in the journal Science shows the Arctic's top predators split off from brown bears, their closest relatives, around 600,000 years ago - five times earlier than scientists had generally assumed.

The finding suggests polar bears took a long time to adapt to their icy world and may therefore struggle to adjust as the Arctic gets warmer and the sea ice melts, depriving them of vital hunting platforms.

Despite being a very different species in terms of body size, skin and coat colour, fur type, tooth structure, and behaviour, previous research had indicated that polar and brown bears diverged only recently in evolutionary terms. That assumption was based on studying mitochondrial lineage - a small part of the genome, or DNA, that is passed exclusively from mothers to offspring.

But after studying DNA from inside the cell nucleus, using samples from 19 polar and 18 brown bears, Frank Hailer of Germany's Biodiversity and Climate Research Centre and colleagues reached a very different conclusion. They found both polar and brown bears were much older, as species.

"Previous studies suggested that polar bears would have had to be evolving very rapidly, since they were so young," says Hailer. "Our study provides a lot more time for polar bears to adapt ... It makes more sense from an evolutionary standpoint that polar bears would be older."

Slow process

His team's calculations put the moment when the two types of bears diverged in the Pleistocene period, when the climate record shows that global temperatures reached a long-term low.
That could be coincidental but it suggests that the planet's cooling may have triggered the split. While the latest research implies that past polar bear adaptation was probably a slow process, it also means the animals have been through warming phases before. "If they go extinct in this phase of warming, we're going to have to ask ourselves what our role in that process was," says Hailer. "In previous warm phases between the ice ages polar bears were able to survive. The main difference this time is that humans are impacting polar bears as well." Genetic studies are an important tool in researching the evolutionary history of polar bears, since the animals typically live and die on sea ice. As a result, their bodies sink to the sea floor, where they get ground up by glaciers or remain undiscovered, making fossils scarce. Tags: climate-change, animals, genetics, mammals
Sunday Profile [9.05pm - 9.30pm Sundays EAT]

Julian Burnside, QC.
Presenter: Monica Attard

MONICA ATTARD: This week we revisit an interview with human rights lawyer Julian Burnside QC about his passionate, and at times controversial, defence of unpopular minorities. The prominent barrister speaks about the great personal cost of speaking out on political issues - a fact detailed in his new book, Watching Brief. He expresses some sympathy for the position of his colleague, Peter Faris QC, who was under investigation after speaking out about drug use amongst lawyers, but Julian Burnside has some views on the topic that some might find surprising. I began by asking Julian Burnside about what he says is the very personal toll he's paid for his public advocacy.

JULIAN BURNSIDE: Well I've lost quite a few friends because for the first few years at least a lot of people still accepted everything the Howard Government was saying and so I guess my views seemed a bit strange. And the second thing was that, you know, quite a, whereas I used to get most of my work from the big firms at the big end of town working for big corporations, a number of those firms, I think, found it politically unwise to keep briefing me and so for a few years at least, those people disappeared and briefed other people.

MONICA ATTARD: Alright and... why did they find it inappropriate to be briefing you?

JULIAN BURNSIDE: Ah, because the Federal Government hands out millions and millions of dollars worth of legal work to the private profession every year and the big national firms get the lion's share of that work.
Now, rightly or wrongly, the Government is perceived as being somewhat vindictive so that if you are wanting more work from the Government it's not a smart move to be seen to be briefing one of the Government's critics.

MONICA ATTARD: And was that ever told to you explicitly?

JULIAN BURNSIDE: It was never told to me explicitly but it was a view which I formed just by coincidence of events and it was confirmed a few years later when a junior employee of a national firm rang me up because he wanted to write a piece for the Solicitors Journal about Spare Lawyers for Refugees, which is a pro-bono group I set up to help manage the asylum-seeker cases that were coming through in large numbers. He wrote the piece. It wasn't about my views. It wasn't about me, particularly, it was just about this group of pro-bono lawyers that I'd set up and administered. When he finished the article and it was published, he sent it to me with an apologetic note pointing out it was not published over his name because his firm does a lot of government work and one of the Sydney partners in the firm thought it was undesirable that his name be connected with my name in an article in the Solicitors Journal.

MONICA ATTARD: So you're assuming then that that is part of the reason why work from the private firms to you has dried up?

JULIAN BURNSIDE: Well I assume that and it's notable that since mid-2005, when Petro Georgiou and the other dissident backbenchers managed to push through some reforms to the system, the big end of town came back. So (chuckles), you know, I... there may be other explanations but I thought that probably the things were connected.

MONICA ATTARD: So the work has come back to you?

MONICA ATTARD: Now Julian Burnside, can I ask you as well, in your book you say that the attacks on you by colleagues, by Liberal Party MPs and by what you call the pro-Howard commentariat, has added a new dimension of personal discomfort to your life.

JULIAN BURNSIDE: Yeah.
Well, you know, I mean being...

MONICA ATTARD: What do you mean by that?

JULIAN BURNSIDE: Being publicly criticised for expressing honest opinions is a new experience for me. I mean, maybe I've led a sheltered life but I hadn't experienced it before and it's particularly annoying when the pattern, especially amongst some of the columnists, has been: "Burnside says this. That's idiotic for these reasons. What a fool he must be," where the starting proposition is actually a distortion of what I've been saying. You know, if they would actually attack the arguments I do make, I wouldn't have a problem with that and in particular I'd be grateful if they can demonstrate why I'm wrong, but to distort your argument in order to knock it down is a different thing. And of course, for the average reader, they just take at face value what they read and they think, "Oh yeah well Burnside must be an idiot." But then, but then, of course, there are other people who sort of approach the thing agreeing with me already and so I guess they don't pay much regard to the people who criticize my views.

MONICA ATTARD: So playing the man not the ball is what annoys you?

JULIAN BURNSIDE: Um, ah well, all I'm saying is it was a novel experience. Yeah, but I guess there's a bit of playing the man.

MONICA ATTARD: Mm. Do you think it's appropriate though for a member of the bar to be commenting publicly on political issues? Even though you say it's an ethical issue which has a political dimension, but it, at the end of the day, I think most people would see it as a political issue. For example, the treatment of asylum-seekers.

JULIAN BURNSIDE: Well that's pre-eminently an ethical issue. I mean, the way in which a society, with the weight of the Government, treats innocent human beings. That's an ethical issue. Of course it intersects with politics because it's the politicians who are ultimately responsible for what's done and politicians who are responsible for whether it's changed.
MONICA ATTARD: So if you see it as an ethical issue would you then expect the Labor Party, if elected to power, to overturn the current laws?

JULIAN BURNSIDE: Well I would hope so but it has to be remembered that they introduced the current laws. They didn't politicise them the way the Howard Government did post-2001 but, you know, look, I'm sort of cautiously optimistic that Kevin Rudd's influence might see an amelioration of our approach to asylum-seekers. They have said that they will abolish the Pacific Solution, which is all to the good, 'cause that's really just the most shameful extremity of our treatment of asylum-seekers.

MONICA ATTARD: But back to that question, do you think it's appropriate for a member of the bar to be commenting publicly on these sorts of issues?

JULIAN BURNSIDE: You sound exactly like the wife of a colleague of mine who asked me that question at a fancy social function some years ago and on the spur of the moment I said, well do you think it's appropriate to know that these things are going on and stay silent? And for me that's a sufficient answer because I do see it as an ethical problem.

MONICA ATTARD: So in your view there are some things about which one can't stay silent?

MONICA ATTARD: Now, what do you make of the Victorian Bar Association's investigation into the comments by another of, another barrister - one of your colleagues - Peter Faris?

JULIAN BURNSIDE: Look, I'm curious about it actually. I only learned about it in the last day or so and I'm a little bit surprised. I'm disappointed by what I've read about statements attributed to members of the Bar's executive but then, I don't know all the facts. So I'm, I personally don't have a problem with Peter saying the sorts of things that he's said. I don't agree with him but I don't have a problem with him saying them.

MONICA ATTARD: Of course he's talking about lawyers who take drugs. Do you think he has a public duty to bear witness to that?
JULIAN BURNSIDE: No because he has already said that he is not bearing witness to anything because all he's doing is speaking about anecdotal material that he's received. Now, any lawyer knows that anecdotal evidence is usually pretty unreliable. So, you know, no. I don't think he has, I certainly don't think he has an ethical obligation to raise the matters that he's raising but neither does he have an ethical obligation to keep quiet about it.

MONICA ATTARD: Because there are people in the professions upon whom the general public rely and lawyers are amongst them. If these people do indulge in drugs, I would have thought there is a great public interest in standing up and facing that. Would you agree?

JULIAN BURNSIDE: I would agree with that with one qualification and the qualification is if it's affecting their professional performance.

MONICA ATTARD: And if it's not, not to worry about it?

JULIAN BURNSIDE: Well, yeah, to be honest. If it's not affecting their performance well then I don't think that there's a particular public interest in the matter. I mean, the public certainly has an interest in knowing that they're getting a proper professional performance from any professional they engage but if a lawyer or a doctor or an architect or an engineer has odd private habits that don't affect their professional performance, well then I don't think that the public has an interest in knowing about that apart from the prurient interest of learning facts like that about people they don't know.

MONICA ATTARD: So if we have lawyers breaking the law there's no public interest in discussing that or knowing about it?

JULIAN BURNSIDE: Well, if you take that to its extreme you'd say that the public should be concerned about every lawyer who speeds or gets pinged for .05 and so on down the line. I mean I understand your point but I guess it's a question of degree.
MONICA ATTARD: And taking drugs is at which end of the scale then, if you put them on the scale of the crimes that you've just spoken about? JULIAN BURNSIDE: I suppose the only proper answer to that is that you look at what the Parliament has set by way of penalties in order to assess the Parliament's view of the seriousness of various crimes. MONICA ATTARD: And I assume, though, that the crime for drug taking would be greater than the crime for speeding, greater than the penalty for speeding? JULIAN BURNSIDE: Um, well I'm not sure. It's not my area of law but I think if you're caught smoking a marijuana cigarette, you'd probably get off lighter than if you're caught driving over .05. Now... MONICA ATTARD: But if you're caught taking cocaine, for example, presumably you'll get more than you'd get for speeding? JULIAN BURNSIDE: That may be so. It would depend on the speed I suppose but, I mean, there have been cases reported in the press over the years of people caught possessing or using cocaine or heroin or amphetamines and this and that, and they get a $1,000 fine. Now, you know, a person who is caught at, say, .07 or .08 would heave a sigh of relief if they got off with a $1,000 fine. In fact, if we are concerned about the effect on a person's performance, it's a curious thing that there's such a disparity in the treatment of alcohol abuse and the abuse of other drugs. Now I know that one answer to that, obviously, is that alcohol is legal and tobacco is legal, but there's a lot of medical opinion which suggests that society's response to drugs ought to be slightly differently calibrated. MONICA ATTARD: Does it strike you as odd that the Victorian Bar Association, which presumably, you know, upholds freedom of speech as a central tenet, was investigating whether he was a fit and proper person to be a member of the association... JULIAN BURNSIDE: It did surprise me. MONICA ATTARD: ...for speaking his mind? JULIAN BURNSIDE: It did surprise me. 
MONICA ATTARD: Now, let's move on. You say the Howard fan club in the media, who you name as Andrew Bolt, Piers Akerman, Gerard Henderson, Alan Jones and others, attacked you whenever they could. Now you say you're grateful to them. MONICA ATTARD: Why, why are you grateful? JULIAN BURNSIDE: Well, because whenever you get a response from people like that I had the impression that maybe I was onto something that mattered. If someone gets up and makes noises that don't hurt anyone, well then they're probably going to ignore it. If they respond sharply, as each of them did from time to time, then I figure that maybe it's perceived that I've said something that could be damaging, and so either the message has to be put down or else the messenger has to be put down. But it did give me some sense of when I was touching a nerve, and that's always very useful, especially if you're as politically naïve as I am. MONICA ATTARD: Mm. Did you have any regrets about wading into political life in this way? I mean, do you look back and think that perhaps, you know, you could have done something differently or that it wasn't the right thing to do at the right time? JULIAN BURNSIDE: I have no regrets at all, except that if I had been more politically savvy I might have approached the matter slightly differently. You know, first time involvement in something like this, it would be pretty unlikely that I pushed all the buttons properly. MONICA ATTARD: What would you have done differently? JULIAN BURNSIDE: I'm not sure. I haven't thought that through because I don't really plan to spend a life of political engagement, to be honest. MONICA ATTARD: So you're hoping things will change? JULIAN BURNSIDE: I'm really hoping the treatment of asylum-seekers will change. I'm hoping that our approach as a society to notions of social justice will change. You will have noticed in the book that only a third or a quarter of it is to do with asylum-seekers. 
Really the dominant message in it is about the importance of the justice system generally and social justice as one outcome of the justice system working properly. One of my big complaints is that legal aid and community legal centres are so seriously underfunded. What follows from that is that the promise of access to justice, which is so important, is really an illusion. You know, access to justice is no good if you can't afford it, and if you can't afford to go to lawyers and you are not eligible for legal aid, then access to justice is just a hollow promise. MONICA ATTARD: Well actually that brings me to a question that I was going to ask you, which was the rumours that have been spread around about you that you were standing up for refugees to build your practice and make money. JULIAN BURNSIDE: (laughs) Yes! Well, let me tell you, I have never received any payment for any refugee case that I've ever done and I never intended to. Apart from anything else - of course the refugees have no money - legal aid is essentially prohibited from providing aid to refugees except on an untested point. And I can only think of one human rights case that I've done in the last 10 years where I got paid, and that was a four or five month trial; it was a test case and it had government funding. So I got paid for that, although the payment rate was about 15 per cent of my normal commercial rate - so not exactly the way to get rich. MONICA ATTARD: And what was the case? JULIAN BURNSIDE: Oh, the Stolen Generation case in Adelaide. MONICA ATTARD: Now despite claims that you're a rusted-on Labor Party member, you're just as critical of the Opposition, as I see it, when it comes to human rights. Were you surprised to hear Kevin Rudd carpet Robert McClelland, his Shadow Attorney-General, over the issue of capital punishment? JULIAN BURNSIDE: Yes I was. 
I was very disappointed because I think both parties, to my understanding, have a policy which is opposed to capital punishment, and if you're opposed to capital punishment on grounds of principle rather than pragmatism, well then you oppose capital punishment universally. You may not be able to do much about it in other countries but at least you ought to oppose it. Now, I thought it was very tactless of McClelland to say what he did when he did because it was so close to the anniversary of the Bali bombing, and so to raise the question of the death penalty for the Bali bombers was tactless. But equally I think it was wrong of Rudd not to come out. What he should have said ideally - and let's put aside the fact that there's an election campaign going - what he should have said in a perfect world is: "I agree entirely with McClelland's position that capital punishment is always wrong but it is distressing and tactless to mention it right now when the family of the deceased are so focused on the matter." MONICA ATTARD: Now you've recently called for a bill of rights in Australia. What practical difference will it make to our lives, do you think? JULIAN BURNSIDE: A bill of rights - and let's make it clear, I'm not talking about a US-style bill of rights but a modern bill of rights, of the sort that Victoria has and the ACT has. I'm talking about a statutory bill of rights, so that Parliament can override it if it needs to. The practical difference is that it would provide a legal tool kit that enabled lawyers, courts and parliamentarians to mount plausible arguments against legislation that unreasonably interferes with recognised human rights. Take a single instance. The Al-Kateb case, which most people have never heard of, was a case decided a couple of years ago by the High Court. Al-Kateb had come to Australia asking for asylum. Under the mandatory detention laws he has to be detained until he gets a visa or until he's removed from Australia. 
Well, he was refused a visa but couldn't be removed from Australia because he's stateless. Now, there's a bit of an anomaly. He hasn't committed an offence. He's not considered to be a risk to anyone, but the only two ways out of detention are closed to him. What to do? Now, they could have easily introduced a new visa category to deal with an anomalous case like that, but instead the Government argued all the way to the High Court that that man, innocent of any offence, can be held in detention for the rest of his life. And the High Court, by a majority of four to three, held that that's what the law means and it's constitutionally valid. Now I think... I can hardly think of a more shocking proposition than that an innocent person who's not a risk to anyone can be detained for life. You know, it offends all our basic instincts about human decency. JULIAN BURNSIDE: A bill of rights, if it existed, would have enabled the court to say that the words, having that meaning, are not valid - you know, to that extent they are not valid. And I know that one member of the majority, Justice McHugh, who has since retired from the court, is now actively campaigning for a bill of rights, and he refers to his regret at being forced to decide Al-Kateb the way he did, and has said publicly that a bill of rights would provide the tool kit to enable decisions like that to be avoided. What it does is to provide a sort of guarantee of some basic ethical principles which are very easily overlooked when society is going through a period of stress, and of course the people who suffer in those times are not the majority but the unpopular minority. And protecting the rights of unpopular minorities is always a challenge in a majoritarian system. A bill of rights is the way in which you can do it. MONICA ATTARD: But it has to be said that a bill of rights hasn't exactly helped the United States, has it, through its hard times? 
JULIAN BURNSIDE: No, but it's not a very good bill of rights, and there is at least a silent tribute to the bill of rights in the US in the fact that the Government set up Guantanamo Bay. The reason they set up Guantanamo Bay was that they thought it could operate outside the protection which the bill of rights offers, and they wanted to be freed of the inconvenience that they might have to treat people there like human beings. MONICA ATTARD: Now you say that we lack the imagination in Australia to understand the realities of the policies that we have in place in relation to mandatory detention for refugees. Can you tell me why you think Australians find it so hard to actually empathise with refugees in detention? JULIAN BURNSIDE: Well, I don't want to be accused of being too harsh but I think it's something like this. If you ask most Australians - and I have great faith in most Australians - if you asked them if they believe in human rights they'd say: "yes, of course". And if you scratch a bit further you'll begin to see that what they really mean is: "I believe in human rights for myself and my family, my loved ones, my friends, my neighbours..." and then you trail off in a row of dots, because if it comes to human rights for people we're scared of or people we haven't welcomed here, people we don't want or don't like, they're much more diffident about recognising those rights. Have a look at the difference in people's responses when the Cornelia Rau case happened. You know, Australians had looked with unconcern as thousands of men, women and children had been locked up in desert detention centres, but when it was revealed that a pretty, blonde, blue-eyed woman who looked just like one of us had been locked up, people were outraged, because she was suddenly identifiably one of us instead of being one of the others. MONICA ATTARD: Do you think Australians understand why asylum-seekers come here in the first place? JULIAN BURNSIDE: No, I don't. I don't. I think... 
MONICA ATTARD: And why do you think that is? Is it distance from the trouble spots of the world? JULIAN BURNSIDE: Well, partly that, partly also because, you know, the Government's message about them has been very powerful and the counter-message has not been nearly as audible. You know, I mean, the message from the Government has been they're illegals, they're queue-jumpers, they're lining up in their thousands to make a better life, they're economic refugees. Now all of that says these are just fakes taking a lend of us because, after all, this is God's own country. That's the prevailing message. Now it's actually untrue, because first of all they were coming in tiny numbers; secondly, they don't break the law; thirdly, they generally don't choose to come here - this is the place they get taken if they haven't got enough money to be taken to some other place; and finally, and very poignantly, you find amongst many of them that what they would like more than anything else is to go back to the country they came from. It's just that they don't want to go back there and be killed. MONICA ATTARD: A final question, Julian Burnside, if I might, and quickly. The other issue which is discussed in your book is our anti-terror laws, which you say betray basic values. You know, the holding of a person incommunicado for a week at a time, preventative detention orders up to 14 days without trial or charge, control orders up to 12 months, secret trials where national security is deemed to be affected. How do you strike a balance between national security and human rights in this instance? JULIAN BURNSIDE: I think that's not too difficult. In most of the instances you mentioned, my main concern is that either evidence is suppressed which is relevant to the case, or evidence is taken in secret to the exclusion of the person most affected by it and their lawyers. 
Now, any trial which runs on secret evidence, which the affected litigant is not allowed to know about, represents great dangers. Of course there are cases where it's important that a defendant not know part of the evidence on national security grounds, but I cannot see why the lawyers involved shouldn't be allowed to know the evidence which is being led against their client. That seems to me to be a most basic requirement of any framework of human rights. It's the only way in which you can protect the system from mistakes or abuse. MONICA ATTARD: Well would you want or expect a Labor government to dismantle those laws? JULIAN BURNSIDE: Oh, they don't need to be dismantled. I think they need to be adjusted so as to make sure that some basic principles of the justice system are restored. You know, take, for example, preventative detention. That's two weeks' jail, not because you've committed an offence but because it's thought you might. When you're arrested and taken in, you're not allowed to be told the evidence that was used against you. Now, I think that's horrendous. Two weeks' jail on secret evidence, which you're not allowed to know about, so that you haven't even got a theoretical opportunity of going back to the court and saying: "hang on, you've got the wrong guy." MONICA ATTARD: We now have a claim in the newspapers that the AFP and the (then) Federal Immigration Minister may have gotten their heads together, at least at some point, to discuss holding Mohamed Haneef, the Indian doctor on the Gold Coast, in detention should the charges against him be dropped. Does that surprise you? JULIAN BURNSIDE: It disappoints me, and I've seen the document. What it shows is that at least two days before the bail magistrate ruled that Dr Haneef was entitled to bail, the federal police had arranged that the visa would be revoked, if necessary, in order to make sure that he remained in custody. 
Now that, I think, is the clearest illustration that the (then) Minister misused his power. MONICA ATTARD: So you think he clearly misused his power? JULIAN BURNSIDE: Clearly misused his power. It looked like it at the time. MONICA ATTARD: What do you think he should do? JULIAN BURNSIDE: Well, I think he should be sacked but there may not be an opportunity to sack him. MONICA ATTARD: Do you believe that that email alone is sufficient evidence to remove the (then) Minister? MONICA ATTARD: And that was Barrister Julian Burnside. Thanks for listening to Sunday Profile and thanks to the Program Producer, Lorna Knowles, and to ABC Local Radio Producer, Dan Driscoll. Last Updated: 16/12/2007 9:35:00 PM AEDT