Q: When is it best to sanitize user input? User equals untrustworthy. Never trust untrustworthy user's input. I get that. However, I am wondering when the best time to sanitize input is. For example, do you blindly store user input and then sanitize it whenever it is accessed/used, or do you sanitize the input immediately and then store this "cleaned" version? Maybe there are also some other approaches I haven't though of in addition to these. I am leaning more towards the first method, because any data that came from user input must still be approached cautiously, where the "cleaned" data might still unknowingly or accidentally be dangerous. Either way, what method do people think is best, and for what reasons? A: The most important thing is to always be consistent in when you escape. Accidental double sanitizing is lame and not sanitizing is dangerous. For SQL, just make sure your database access library supports bind variables which automatically escapes values. Anyone who manually concatenates user input onto SQL strings should know better. For HTML, I prefer to escape at the last possible moment. If you destroy user input, you can never get it back, and if they make a mistake they can edit and fix later. If you destroy their original input, it's gone forever. A: Unfortunately, almost no one of the participants ever clearly understands what are they talking about. Literally. Only Kibbee managed to make it straight. This topic is all about sanitization. But the truth is, such a thing like wide-termed "general purpose sanitization" everyone is so eager to talk about is just doesn't exist. There are a zillion different mediums, each require it's own, distinct data formatting. Moreover - even single certain medium require different formatting for it's parts. Say, HTML formatting is useless for javascript embedded in HTML page. Or, string formatting is useless for the numbers in SQL query. As a matter of fact, such a "sanitization as early as possible", as suggested in most upvoted answers, is just impossible. As one just cannot tell in which certain medium or medium part the data will be used. Say, we are preparing to defend from "sql-injection", escaping everything that moves. But whoops! - some required fields weren't filled and we have to fill out data back into form instead of database... with all the slashes added. On the other hand, we diligently escaped all the "user input"... but in the sql query we have no quotes around it, as it is a number or identifier. And no "sanitization" ever helped us. On the third hand - okay, we did our best in sanitizing the terrible, untrustworthy and disdained "user input"... but in some inner process we used this very data without any formatting (as we did our best already!) - and whoops! have got second order injection in all its glory. So, from the real life usage point of view, the only proper way would be * *formatting, not whatever "sanitization" *right before use *according to the certain medium rules *and even following sub-rules required for this medium's different parts. A: Early is good, definitely before you try to parse it. Anything you're going to output later, or especially pass to other components (i.e., shell, SQL, etc) must be sanitized. But don't go overboard - for instance, passwords are hashed before you store them (right?). Hash functions can accept arbitrary binary data. And you'll never print out a password (right?). So don't parse passwords - and don't sanitize them. 
Also, make sure that you're doing the sanitizing from a trusted process - JavaScript/anything client-side is worse than useless security/integrity-wise. (It might provide a better user experience to fail early, though - just do it in both places.) A: My opinion is to sanitize user input as soon as possible, client side and server side. I'm doing it like this * *(client side), allow the user to enter just specific keys in the field. *(client side), when the user goes to the next field using onblur, test the input he entered against a regexp, and notify the user if something is not good. *(server side), test the input again; if the field should be an INTEGER check for that (in PHP you can use is_numeric() ), if the field has a well known format check it against a regexp, all others ( like text comments ), just escape them. If anything is suspicious stop script execution and return a notice to the user that the data he entered is invalid. If something really looks like a possible attack, the script sends an email and an SMS to me, so I can check and maybe prevent it as soon as possible. I just need to check the log where I'm logging all user inputs, and the steps the script made before accepting the input or rejecting it. A: Perl has a taint option which considers all user input "tainted" until it's been checked with a regular expression. Tainted data can be used and passed around, but it taints any data that it comes in contact with until untainted. For instance, if user input is appended to another string, the new string is also tainted. Basically, any expression that contains tainted values will output a tainted result. Tainted data can be thrown around at will (tainting data as it goes), but as soon as it is used by a command that has an effect on the outside world, the Perl script fails. So if I use tainted data to create a file, construct a shell command, change working directory, etc., Perl will fail with a security error. I'm not aware of another language that has something like "taint", but using it has been very eye opening. It's amazing how quickly tainted data gets spread around if you don't untaint it right away. Things that are natural and normal for a programmer, like setting a variable based on user data or opening a file, seem dangerous and risky with tainting turned on. So the best strategy for getting things done is to untaint as soon as you get some data from the outside. And I suspect that's the best way in other languages as well: validate user data right away so that bugs and security holes can't propagate too far. Also, it ought to be easier to audit code for security holes if the potential holes are in one place. And you can never predict which data will be used for what purpose later. A: It depends on what kind of sanitizing you are doing. For protecting against SQL injection, don't do anything to the data itself. Just use prepared statements, and that way, you don't have to worry about messing with the data that the user entered, and having it negatively affect your logic. You have to sanitize a little bit, to ensure that numbers are numbers, and dates are dates, since everything is a string as it comes from the request, but don't try to do any checking to do things like block keywords or anything. For protecting against XSS attacks, it would probably be easier to fix the data before it's stored. However, as others mentioned, sometimes it's nice to have a pristine copy of exactly what the user entered, because once you change it, it's lost forever.
It's almost too bad there's not a fool proof way to ensure you application only puts out sanitized HTML the way you can ensure you don't get caught by SQL injection by using prepared queries. A: I sanitize my user data much like Radu... * *First client-side using both regex's and taking control over allowable characters input into given form fields using javascript or jQuery tied to events, such as onChange or OnBlur, which removes any disallowed input before it can even be submitted. Realize however, that this really only has the effect of letting those users in the know, that the data is going to be checked server-side as well. It's more a warning than any actual protection. *Second, and I rarely see this done these days anymore, that the first check being done server-side is to check the location of where the form is being submitted from. By only allowing form submission from a page that you have designated as a valid location, you can kill the script BEFORE you have even read in any data. Granted, that in itself is insufficient, as a good hacker with their own server can 'spoof' both the domain and the IP address to make it appear to your script that it is coming from a valid form location. *Next, and I shouldn't even have to say this, but always, and I mean ALWAYS, run your scripts in taint mode. This forces you to not get lazy, and to be diligent about step number 4. *Sanitize the user data as soon as possible using well-formed regexes appropriate to the data that is expected from any given field on the form. Don't take shortcuts like the infamous 'magic horn of the unicorn' to blow through your taint checks... or you may as well just turn off taint checking in the first place for all the good it will do for your security. That's like giving a psychopath a sharp knife, bearing your throat, and saying 'You really won't hurt me with that will you". And here is where I differ than most others in this fourth step, as I only sanitize the user data that I am going to actually USE in a way that may present a security risk, such as any system calls, assignments to other variables, or any writing to store data. If I am only using the data input by a user to make a comparison to data I have stored on the system myself (therefore knowing that data of my own is safe), then I don't bother to sanitize the user data, as I am never going to us it a way that presents itself as a security problem. For instance, take a username input as an example. I use the username input by the user only to check it against a match in my database, and if true, after that I use the data from the database to perform all other functions I might call for it in the script, knowing it is safe, and never use the users data again after that. *Last, is to filter out all the attempted auto-submits by robots these days, with a 'human authentication' system, such as Captcha. This is important enough these days that I took the time to write my own 'human authentication' schema that uses photos and an input for the 'human' to enter what they see in the picture. I did this because I've found that Captcha type systems really annoy users (you can tell by their squinted-up eyes from trying to decipher the distorted letters... usually over and over again). This is especially important for scripts that use either SendMail or SMTP for email, as these are favorites for your hungry spam-bots. To wrap it up in a nutshell, I'll explain it as I do to my wife... 
your server is like a popular nightclub, and the more bouncers you have, the less trouble you are likely to have in the nightclub. I have two bouncers outside the door (client-side validation and human authentication), one bouncer right inside the door (checking for valid form submission location... 'Is that really you on this ID'), and several more bouncers in close proximity to the door (running taint mode and using good regexes to check the user data). I know this is an older post, but I felt it important enough for anyone that may read it after my visit here to realize their is no 'magic bullet' when it comes to security, and it takes all these working in conjuction with one another to make your user-provided data secure. Just using one or two of these methods alone is practically worthless, as their power only exists when they all team together. Or in summary, as my Mum would often say... 'Better safe than sorry". UPDATE: One more thing I am doing these days, is Base64 encoding all my data, and then encrypting the Base64 data that will reside on my SQL Databases. It takes about a third more total bytes to store it this way, but the security benefits outweigh the extra size of the data in my opinion. A: I like to sanitize it as early as possible, which means the sanitizing happens when the user tries to enter in invalid data. If there's a TextBox for their age, and they type in anything other that a number, I don't let the keypress for the letter go through. Then, whatever is reading the data (often a server) I do a sanity check when I read in the data, just to make sure that nothing slips in due to a more determined user (such as hand-editing files, or even modifying packets!) Edit: Overall, sanitize early and sanitize any time you've lost sight of the data for even a second (e.g. File Save -> File Open) A: Clean the data before you store it. Generally you shouldn't be preforming ANY SQL actions without first cleaning up input. You don't want to subject yourself to a SQL injection attack. I sort of follow these basic rules. * *Only do modifying SQL actions, such as, INSERT, UPDATE, DELETE through POST. Never GET. *Escape everything. *If you are expecting user input to be something make sure you check that it is that something. For example, you are requesting an number, then make sure it is a number. Use validations. *Use filters. Clean up unwanted characters. A: Users are evil! Well perhaps not always, but my approach is to always sanatize immediately to ensure nothing risky goes anywhere near my backend. The added benefit is that you can provide feed back to the user if you sanitize at point of input. A: Assume all users are malicious. Sanitize all input as soon as possible. Full stop. A: I sanitize my data right before I do any processing on it. I may need to take the First and Last name fields and concatenate them into a third field that gets inserted to the database. I'm going to sanitize the input before I even do the concatenation so I don't get any kind of processing or insertion errors. The sooner the better. Even using Javascript on the front end (in a web setup) is ideal because that will occur without any data going to the server to begin with. The scary part is that you might even want to start sanitizing data coming out of your database as well. The recent surge of ASPRox SQL Injection attacks that have been going around are doubly lethal because it will infect all database tables in a given database. 
If your database is hosted somewhere where multiple accounts share the same database, your data can become corrupted because of somebody else's mistake, and now you've joined the ranks of those serving malware to your visitors through no initial fault of your own. Sure, this makes for a whole lot of work up front, but if the data is critical, then it is a worthy investment. A: User input should always be treated as malicious before it makes its way down into the lower layers of your application. Always sanitize input as soon as possible; it should not for any reason be stored in your database before being checked for malicious intent. A: I find that cleaning it immediately has two advantages. One, you can validate against it and provide feedback to the user. Two, you do not have to worry about consuming the data in other places.
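To make the "prepared statements for SQL, escape for HTML at output time" advice from the answers above concrete, here is a minimal PHP sketch. The connection details, table and column names are made up for illustration; the point is that the raw input is stored untouched and each medium gets its own formatting at the point of use.

<?php
// SQL: let the driver handle quoting via bound parameters instead of escaping by hand.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');   // hypothetical credentials
$stmt = $pdo->prepare('INSERT INTO comments (author, body) VALUES (:author, :body)');
$stmt->execute([
    ':author' => $_POST['author'],   // stored exactly as the user typed it
    ':body'   => $_POST['body'],
]);

// HTML: escape at the last possible moment, when the value is rendered into a page.
$rows = $pdo->query('SELECT author, body FROM comments');
foreach ($rows as $row) {
    echo '<p><b>' . htmlspecialchars($row['author'], ENT_QUOTES, 'UTF-8') . '</b>: '
       . htmlspecialchars($row['body'], ENT_QUOTES, 'UTF-8') . '</p>';
}

If the same value later goes into a JavaScript string, a URL, or a shell command, it gets that medium's formatting at that point instead - which is exactly the "format right before use" rule argued for above.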
{ "language": "en", "url": "https://stackoverflow.com/questions/34896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "65" }
Q: C# Linq Grouping I'm experimenting with Linq and am having trouble figuring out grouping. I've gone through several tutorials but for some reason can't figure this out. As an example, say I have a table (SiteStats) with multiple website IDs that stores a count of how many visitors by type have accessed each site in total and for the past 30 days. ╔════════╦═════════════╦════════╦══════╗ β•‘ SiteId β•‘ VisitorType β•‘ Last30 β•‘ Totalβ•‘ ╠════════╬═════════════╬════════╬══════╣ β•‘ 1 β•‘ 1 β•‘ 10 β•‘ 100 β•‘ β•‘ 1 β•‘ 2 β•‘ 40 β•‘ 140 β•‘ β•‘ 2 β•‘ 1 β•‘ 20 β•‘ 180 β•‘ β•šβ•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β• In SQL, I can easily get the counts for SiteID 1 with the following: SELECT SiteId, SUM(Last30) AS Last30Sum FROM Sites WHERE SiteId = 1 GROUP BY SiteId and should get a row like... ╔════════╦════════════╗ β•‘ SiteId β•‘ Last30Totalβ•‘ ╠════════╬════════════╣ β•‘ 1 β•‘ 50 β•‘ β•šβ•β•β•β•β•β•β•β•β•©β•β•β•β•β•β•β•β•β•β•β•β•β• However I'm not sure how to get this result using Linq. I've tried: var statsRecord = from ss in db.SiteStats where ss.SiteId == siteId group ss by ss.SiteId into ss select ss; but I'm not able to get back the total with something like statsRecord.Last30 Can someone please let me know where I'm going wrong? Any help is appreciated. A: Easiest way for me to illustrate is using in-memory objects so it's clear what's happening. LINQ to SQL should be able to take that same LINQ query and translate it into appropriate SQL. public class Site { static void Main() { List<Site> sites = new List<Site>() { new Site() { SiteID = 1, VisitorType = 1, Last30 = 10, Total = 100, }, new Site() { SiteID = 1, VisitorType = 2, Last30 = 40, Total = 140, }, new Site() { SiteID = 2, VisitorType = 1, Last30 = 20, Total = 180, }, }; var totals = from s in sites group s by s.SiteID into grouped select new { SiteID = grouped.Key, Last30Sum = (from value in grouped select value.Last30).Sum(), }; foreach (var total in totals) { Console.WriteLine("Site: {0}, Last30Sum: {1}", total.SiteID, total.Last30Sum); } } public int SiteID { get; set; } public int VisitorType { get; set; } public int Last30 { get; set; } public int Total { get; set; } } A: Actually, although Thomas' code will work, it is more succint to use a lambda expression: var totals = from s in sites group s by s.SiteID into grouped select new { SiteID = grouped.Key, Last30Sum = grouped.Sum( s => s.Last30 ) }; which uses the Sum extension method without the need for a nested LINQ operation. as per the LINQ 101 examples - http://msdn.microsoft.com/en-us/vcsharp/aa336747.aspx#sumGrouped
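For completeness, the same grouped sum can be written entirely in method syntax. This is only a sketch against the in-memory sites list from the first answer (it assumes a using System.Linq; directive); LINQ to SQL translates the equivalent query on db.SiteStats into the same GROUP BY/SUM.

// Method-syntax equivalent of the grouped query above, using the Site class from the first answer.
var totals = sites
    .Where(s => s.SiteID == 1)
    .GroupBy(s => s.SiteID)
    .Select(g => new { SiteID = g.Key, Last30Sum = g.Sum(s => s.Last30) });

foreach (var total in totals)
{
    Console.WriteLine("Site: {0}, Last30Sum: {1}", total.SiteID, total.Last30Sum);
}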
{ "language": "en", "url": "https://stackoverflow.com/questions/34913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How do you use XML::Parser with Style => 'Objects' The manual page for XML::Parser::Style::Objects is horrible. A simple hello world style program would really be helpful. I really wanted to do something like this: (not real code of course) use XML::Parser; my $p = XML::Parser->new(Style => 'Objects', Pkg => 'MyNode'); my $tree = $p->parsefile('foo.xml'); $tree->doSomething(); MyNode::doSomething() { my $self = shift; print "This is a normal node"; for $kid ($self->Kids) { $kid->doSomething(); } } MyNode::special::doSomething() { my $self = shift; print "This is a special node"; } A: When ever I need to do something similar, usually I end up using XML::Parser::EasyTree it has better documentation and is simpler to use. I highly recommend it. A: In all cases here is actual code that runs ... doesn't mean much but produces output and hopefully can get you started ... use XML::Parser; package MyNode::inner; sub doSomething { my $self = shift; print "This is an inner node containing : "; print $self->{Kids}->[0]->{Text}; print "\n"; } package MyNode::Characters; sub doSomething {} package MyNode::foo; sub doSomething { my $self = shift; print "This is an external node\n"; for $kid (@ { $self->{Kids} }) { $kid->doSomething(); } } package main; my $p = XML::Parser->new(Style => 'Objects', Pkg => 'MyNode'); my $tree = $p->parsefile('foo.xml'); for (@$tree) { $_->doSomething(); } with foo.xml <foo> <inner>some text</inner> <inner>something else</inner></foo> which outputs >perl -w "tree.pl" This is an external node This is an inner node containing : some text This is an inner node containing : something else Hope that helps.
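For comparison, here is a rough sketch of XML::Parser's handler-based style rather than the 'Objects' style the question asks about. No tree is built; the callbacks fire as foo.xml is parsed. This is only an illustration of the alternative, not a drop-in replacement for the tree-walking code above.

use strict;
use warnings;
use XML::Parser;

# Handlers style: Start gets (Expat, Element, Attr => Value...), End gets
# (Expat, Element), Char gets (Expat, Text).
my $p = XML::Parser->new(Handlers => {
    Start => sub { my ($expat, $elem, %attrs) = @_; print "start <$elem>\n"; },
    End   => sub { my ($expat, $elem) = @_;         print "end   </$elem>\n"; },
    Char  => sub { my ($expat, $text) = @_;         print "text: $text\n" if $text =~ /\S/; },
});
$p->parsefile('foo.xml');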
{ "language": "en", "url": "https://stackoverflow.com/questions/34914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Scaffolding in pylons Is there anything similar to rails' scaffolding for pylons? I've been poking around google, but only found this thing called dbsprockets, which is fine, although probably way too much for my needs. What I really need is a basic CRUD that is based on the SQLAlchemy model. A: The question is super old, but hell: http://code.google.com/p/formalchemy/ Gives you basic crud out of the box, customizable to do even relatively complex things easily, and gives you a drop-in Pylons admin app too (written and customizable with the same api, no magic). A: I hear you, I've followed the Pylons mailing list for a while looking for something similar. There have been some attempts in the past (see AdminPylon and Restin) but none have really kept up with SQLAlchemy's rapidly developing orm api. Since DBSprockets is likely to be incorporated into TurboGears it will likely be maintained. I'd bite the bullet and go with that. A: Just updating an old question. DBSprockets has been replaced by sprox which learns a lot of lessons from it and is pretty cool. It isn't quite the throwaway 'scaffolding' that Rails provides, it is more like an agile form generation tool that is highly extensible.
{ "language": "en", "url": "https://stackoverflow.com/questions/34916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How do I lock a file in Perl? What is the best way to create a lock on a file in Perl? Is it best to flock on the file or to create a lock file to place a lock on and check for a lock on the lock file? A: CPAN to the rescue: IO::LockedFile. A: Ryan P wrote: In this case the file is actually unlocked for a short period of time while the file is reopened. So don’t do that. Instead, open the file for read/write: open my $fh, '+<', 'test.dat' or die "Couldn’t open test.dat: $!\n"; When you are ready to write the counter, just seek back to the start of the file. Note that if you do that, you should truncate just before close, so that the file isn’t left with trailing garbage if its new contents are shorter than its previous ones. (Usually, the current position in the file is at its end, so you can just write truncate $fh, tell $fh.) Also, note that I used three-argument open and a lexical file handle, and I also checked the success of the operation. Please avoid global file handles (global variables are bad, mmkay?) and magic two-argument open (which has been a source of many a(n exploitable) bug in Perl code), and always test whether your opens succeed. A: I think it would be much better to show this with lexical variables as file handlers and error handling. It is also better to use the constants from the Fcntl module than hard code the magic number 2 which might not be the right number on all operating systems. use Fcntl ':flock'; # import LOCK_* constants # open the file for appending open (my $fh, '>>', 'test.dat') or die $!; # try to lock the file exclusively, will wait till you get the lock flock($fh, LOCK_EX); # do something with the file here (print to it in our case) # actually you should not unlock the file # close the file will unlock it close($fh) or warn "Could not close file $!"; Check out the full documentation of flock and the File locking tutorial on PerlMonks even though that also uses the old style of file handle usage. Actually I usually skip the error handling on close() as there is not much I can do if it fails anyway. Regarding what to lock, if you are working in a single file then lock that file. If you need to lock several files at once then - in order to avoid dead locks - it is better to pick one file that you are locking. Does not really matter if that is one of the several files you really need to lock or a separate file you create just for the locking purpose. A: Have you considered using the LockFile::Simple module? It does most of the work for you already. In my past experience, I have found it very easy to use and sturdy. A: If you end up using flock, here's some code to do it: use Fcntl ':flock'; # Import LOCK_* constants # We will use this file path in error messages and function calls. # Don't type it out more than once in your code. Use a variable. my $file = '/path/to/some/file'; # Open the file for appending. Note the file path is quoted # in the error message. This helps debug situations where you # have a stray space at the start or end of the path. open(my $fh, '>>', $file) or die "Could not open '$file' - $!"; # Get exclusive lock (will block until it does) flock($fh, LOCK_EX) or die "Could not lock '$file' - $!"; # Do something with the file here... # Do NOT use flock() to unlock the file if you wrote to the # file in the "do something" section above. This could create # a race condition. The close() call below will unlock the # file for you, but only after writing any buffered data. 
# In a world of buffered i/o, some or all of your data may not # be written until close() completes. Always, always, ALWAYS # check the return value of close() if you wrote to the file! close($fh) or die "Could not write '$file' - $!"; Some useful links: * *PerlMonks file locking tutorial (somewhat old) *flock() documentation In response to your added question, I'd say either place the lock on the file or create a file that you call 'lock' whenever the file is locked and delete it when it is no longer locked (and then make sure your programs obey those semantics). A: use strict; use Fcntl ':flock'; # Import LOCK_* constants # We will use this file path in error messages and function calls. # Don't type it out more than once in your code. Use a variable. my $file = '/path/to/some/file'; # Open the file for appending. Note the file path is in quoted # in the error message. This helps debug situations where you # have a stray space at the start or end of the path. open(my $fh, '>>', $file) or die "Could not open '$file' - $!"; # Get exclusive lock (will block until it does) flock($fh, LOCK_EX); # Do something with the file here... # Do NOT use flock() to unlock the file if you wrote to the # file in the "do something" section above. This could create # a race condition. The close() call below will unlock it # for you, but only after writing any buffered data. # In a world of buffered i/o, some or all of your data will not # be written until close() completes. Always, always, ALWAYS # check the return value on close()! close($fh) or die "Could not write '$file' - $!"; A: The other answers cover Perl flock locking pretty well, but on many Unix/Linux systems there are actually two independent locking systems: BSD flock() and POSIX fcntl()-based locks. Unless you provide special options to configure when building Perl, its flock will use flock() if available. This is generally fine and probably what you want if you just need locking within your application (running on a single system). However, sometimes you need to interact with another application that uses fcntl() locks (like Sendmail, on many systems) or perhaps you need to do file locking across NFS-mounted filesystems. In those cases, you might want to look at File::FcntlLock or File::lockf. It is also possible to do fcntl()-based locking in pure Perl (with some hairy and non-portable bits of pack()). Quick overview of flock/fcntl/lockf differences: lockf is almost always implemented on top of fcntl, has file-level locking only. If implemented using fcntl, limitations below also apply to lockf. fcntl provides range-level locking (within a file) and network locking over NFS, but locks are not inherited by child processes after a fork(). On many systems, you must have the filehandle open read-only to request a shared lock, and read-write to request an exclusive lock. flock has file-level locking only, locking is only within a single machine (you can lock an NFS-mounted file, but only local processes will see the lock). Locks are inherited by children (assuming that the file descriptor is not closed). Sometimes (SYSV systems) flock is emulated using lockf, or fcntl; on some BSD systems lockf is emulated using flock. Generally these sorts of emulation work poorly and you are well advised to avoid them. A: My goal in this question was to lock a file being used as a data store for several scripts. 
In the end I used similar code to the following (from Chris): open (FILE, '>>', test.dat') ; # open the file flock FILE, 2; # try to lock the file # do something with the file here close(FILE); # close the file In his example I removed the flock FILE, 8 as the close(FILE) performs this action as well. The real problem was when the script starts it has to hold the current counter, and when it ends it has to update the counter. This is where Perl has a problem, to read the file you: open (FILE, '<', test.dat'); flock FILE, 2; Now I want to write out the results and since i want to overwrite the file I need to reopen and truncate which results in the following: open (FILE, '>', test.dat'); #single arrow truncates double appends flock FILE, 2; In this case the file is actually unlocked for a short period of time while the file is reopened. This demonstrates the case for the external lock file. If you are going to be changing contexts of the file, use a lock file. The modified code: open (LOCK_FILE, '<', test.dat.lock') or die "Could not obtain lock"; flock LOCK_FILE, 2; open (FILE, '<', test.dat') or die "Could not open file"; # read file # ... open (FILE, '>', test.dat') or die "Could not reopen file"; #write file close (FILE); close (LOCK_FILE); A: Developed off of http://metacpan.org/pod/File::FcntlLock use Fcntl qw(:DEFAULT :flock :seek :Fcompat); use File::FcntlLock; sub acquire_lock { my $fn = shift; my $justPrint = shift || 0; confess "Too many args" if defined shift; confess "Not enough args" if !defined $justPrint; my $rv = TRUE; my $fh; sysopen($fh, $fn, O_RDWR | O_CREAT) or LOGDIE "failed to open: $fn: $!"; $fh->autoflush(1); ALWAYS "acquiring lock: $fn"; my $fs = new File::FcntlLock; $fs->l_type( F_WRLCK ); $fs->l_whence( SEEK_SET ); $fs->l_start( 0 ); $fs->lock( $fh, F_SETLKW ) or LOGDIE "failed to get write lock: $fn:" . $fs->error; my $num = <$fh> || 0; return ($fh, $num); } sub release_lock { my $fn = shift; my $fh = shift; my $num = shift; my $justPrint = shift || 0; seek($fh, 0, SEEK_SET) or LOGDIE "seek failed: $fn: $!"; print $fh "$num\n" or LOGDIE "write failed: $fn: $!"; truncate($fh, tell($fh)) or LOGDIE "truncate failed: $fn: $!"; my $fs = new File::FcntlLock; $fs->l_type(F_UNLCK); ALWAYS "releasing lock: $fn"; $fs->lock( $fh, F_SETLK ) or LOGDIE "unlock failed: $fn: " . $fs->error; close($fh) or LOGDIE "close failed: $fn: $!"; } A: One alternative to the lock file approach is to use a lock socket. See Lock::Socket on CPAN for such an implementation. Usage is as simple as the following: use Lock::Socket qw/lock_socket/; my $lock = lock_socket(5197); # raises exception if lock already taken There are a couple of advantages to using a socket: * *guaranteed (through the operating system) that no two applications will hold the same lock: there is no race condition. *guaranteed (again through the operating system) to clean up neatly when your process exits, so there are no stale locks to deal with. *relies on functionality that is well supported by anything that Perl runs on: no issues with flock(2) support on Win32 for example. The obvious disadvantage is of course that the lock namespace is global. It is possible for a kind of denial-of-service if another process decides to lock the port you need. [disclosure: I am the author of the afor-mentioned module] A: Use the flock Luke. Edit: This is a good explanation. A: flock creates Unix-style file locks, and is available on most OS's Perl runs on. However flock's locks are advisory only. 
edit: emphasized that flock is portable A: Here's my solution to reading and writing in one lock... open (TST,"+< readwrite_test.txt") or die "Cannot open file\n$!"; flock(TST, LOCK_EX); # Read the file: @LINES=<TST>; # Wipe the file: seek(TST, 0, 0); truncate(TST, 0); # Do something with the contents here: push @LINES,"grappig, he!\n"; $LINES[3]="Gekke henkie!\n"; # Write the file: foreach $l (@LINES) { print TST $l; } close(TST) or die "Cannot close file\n$!"; A: Flock is probably the best, but it requires you to write all the supporting code around it - timeouts, stale locks, non-existent files, etc. I tried LockFile::Simple but found it started setting the default umask to read-only and not cleaning this up, resulting in random permissions problems in a multi-process/multi-threaded application under mod_perl. I've settled on wrapping NFSLock with some empty-file handling.
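Pulling the advice above together (lexical filehandle, three-argument open, '+<' read/write mode, seek and truncate before close) into one sketch for the counter scenario from the question. The counter file name is hypothetical and the file is assumed to already exist; the point is that one read/write handle holds a single lock across the whole read-modify-write, so there is no unlock gap from reopening the file.

use strict;
use warnings;
use Fcntl qw(:flock SEEK_SET);

my $file = 'counter.dat';   # hypothetical counter file for this sketch

# Open read/write so the same handle (and the same lock) covers both the read
# and the write, avoiding the brief unlocked window described above.
open my $fh, '+<', $file or die "Could not open '$file': $!";
flock $fh, LOCK_EX       or die "Could not lock '$file': $!";

my $count = <$fh> // 0;
chomp $count;
$count++;

seek $fh, 0, SEEK_SET    or die "seek failed: $!";
print {$fh} "$count\n"   or die "write failed: $!";
truncate $fh, tell $fh   or die "truncate failed: $!";   # drop any leftover old bytes

close $fh                or die "close (and unlock) failed: $!";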
{ "language": "en", "url": "https://stackoverflow.com/questions/34920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: 3.1 or 5.1 audio in Flash Is it possible to do 3.1 or 5.1 audio using Flash? We're starting a project here for an interactive kiosk, and we've been told to use Flash. However, we also have a requirement to support either 3.1 or 5.1 audio (where 5.1 is the most wanted feature). I haven't done any high-tech audio stuff using Flash, so I was wondering if anyone knew whether it is possible to do? Thanks. A: A quick google search gave me this forum http://board.flashkit.com/board/showthread.php?t=715062 where they state that Flash is unable to handle 5.1 audio and the alternative is to use another application that can communicate with Flash to handle the audio side of things. I also found this blog entry from Summit Projects http://summitprojectsflashblog.wordpress.com/2008/08/07/wave-theory-in-actionscript-3-part-4/ where they go into great detail about byte handling and processing audio samples. I'm not sure if they are using their own actionscript libraries for this, or if they are using Adobe's libraries. I'm not too up to speed on the audio side of Flash with respect to surround sound. I think your two options might have to be either using a separate application to run your audio (which may be less stressful) or maybe getting in touch with the Summit people if you are as lost as I am over some of the concepts they touch on, heh. Good luck! A: As far as I know this is not possible. You might be able to do it if you use uncompressed wave files already encoded in DTS or something and put a surround receiver in between. This will however stop you from doing anything with the sound before outputting it, not even changing the volume. And I guess that's not an option? I think going with an external application for sound would be your best choice; maybe you can do something using Director. A: With AIR 3 and Flash Player 11 it is possible.
{ "language": "en", "url": "https://stackoverflow.com/questions/34924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: XmlSerializer changes in .NET 3.5 SP1 I've seen quite a few posts on changes in .NET 3.5 SP1, but stumbled into one that I've yet to see documentation for yesterday. I had code working just fine on my machine, from VS, msbuild command line, everything, but it failed on the build server (running .NET 3.5 RTM). [XmlRoot("foo")] public class Foo { static void Main() { XmlSerializer serializer = new XmlSerializer(typeof(Foo)); string xml = @"<foo name='ack' />"; using (StringReader sr = new StringReader(xml)) { Foo foo = serializer.Deserialize(sr) as Foo; } } [XmlAttribute("name")] public string Name { get; set; } public Foo Bar { get; private set; } } In SP1, the above code runs just fine. In RTM, you get an InvalidOperationException: Unable to generate a temporary class (result=1). error CS0200: Property or indexer 'ConsoleApplication2.Foo.Bar' cannot be assign to -- it is read only Of course, all that's needed to make it run under RTM is adding [XmlIgnore] to the Bar property. My google fu is apparently not up to finding documentation of these kinds of changes. Is there a change list anywhere that lists this change (and similar under-the-hood changes that might jump up and shout "gotcha")? Is this a bug or a feature? EDIT: In SP1, if I added a <Bar /> element, or set [XmlElement] for the Bar property, it won't get deserialized. It doesn't fail pre-SP1 when it tries to deserialize--it throws an exception when the XmlSerializer is constructed. This makes me lean more toward it being a bug, especially if I set an [XmlElement] attribute for Foo.Bar. If it's unable to do what I ask it to do, it should be throwing an exception instead of silently ignoring Foo.Bar. Other invalid combinations/settings of XML serialization attributes result in an exception. EDIT: Thank you, TonyB, I'd not known about setting the temp files location. For those that come across similar issues in the future, you do need an additional config flag: <system.diagnostics> <switches> <add name="XmlSerialization.Compilation" value="1" /> </switches> </system.diagnostics> <system.xml.serialization> <xmlSerializer tempFilesLocation="c:\\foo"/> </system.xml.serialization> Even with setting an [XmlElement] attribute on the Bar property, no mention was made of it in the generated serialization assembly--which fairly firmly puts this in the realm of a silently swallowed error (aka, a bug). Either that or the designers have decided [XmlIgnore] is no longer necessary for properties that can't be set--and you'd expect to see that in release notes, change lists, or the XmlIgnoreAttribute documentation. A: In SP1 does the foo.Bar property get properly deserialized? In pre SP1 you wouldn't be able to deserialize the object because the set method of the Bar property is private so the XmlSerializer doesn't have a way to set that value. I'm not sure how SP1 is pulling it off. You could try adding this to your web.config/app.config <system.xml.serialization> <xmlSerializer tempFilesLocation="c:\\foo"/> </system.xml.serialization> That will put the class generated by the XmlSerializer into c:\foo so you can see what it is doing in SP1 vs RTM A: I rather like this new (?) behavior because the XML document doesn't have any mention of Bar in it, so the deserializer should not even be attempting to set it.
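For reference, the workaround mentioned in the question looks like this - the same Foo class with the setter-less property explicitly excluded from serialization, which behaves identically on RTM and SP1:

[XmlRoot("foo")]
public class Foo
{
    [XmlAttribute("name")]
    public string Name { get; set; }

    // Pre-SP1, XmlSerializer refuses to construct the serializer because Bar
    // has no public setter. Marking it [XmlIgnore] removes it from serialization
    // entirely, so the behavior no longer depends on the service pack level.
    [XmlIgnore]
    public Foo Bar { get; private set; }
}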
{ "language": "en", "url": "https://stackoverflow.com/questions/34925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Strip HTML from string in SSRS 2005 (VB.NET) my SSRS DataSet returns a field with HTML, e.g. <b>blah blah </b><i> blah </i>. how do i strip all the HTML tags? has to be done with inline VB.NET Changing the data in the table is not an option. Solution found ... = System.Text.RegularExpressions.Regex.Replace(StringWithHTMLtoStrip, "<[^>]+>","") A: Here's a good example using Regular Expressions: https://web.archive.org/web/20210619174622/https://www.4guysfromrolla.com/webtech/042501-1.shtml A: Thanx to Daniel, but I needed it to be done inline ... here's the solution: = System.Text.RegularExpressions.Regex.Replace(StringWithHTMLtoStrip, "<[^>]+>","") Here are the links: http://weblogs.asp.net/rosherove/archive/2003/05/13/6963.aspx http://msdn.microsoft.com/en-us/library/ms157328.aspx A: If you know the HTML is well-formed enough, you could, if you make sure it has a root node, convert the data in that field into a System.Xml.XmlDocument and then get the InnerText value from it. Again, you will have to make sure the text has a root node, which you can add yourself if needs be, since it will not matter, and make sure the HTML is well formed. A: If you don't want to use regular expressions (for example if you need better performance) you could try a small method I wrote a while ago, posted at CodeProject. A: I would go to Report Properties and then code and add the following Dim mRemoveTagRegex AS NEW System.Text.RegularExpressions.Regex("<(.|\n)+?>", System.Text.RegularExpressions.RegexOptions.Compiled) Function RemoveHtml(ByVal text As string) AS string If text IsNot Nothing Then Return mRemoveTagRegex.Replace(text, "") End If End Function Then you can use Code.RemoveHtml(Fields!Content.Value) to remove the html tags. In my opinion this is preferable then having multiple copies of the regex.
{ "language": "en", "url": "https://stackoverflow.com/questions/34926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: if statement condition optimisation I have an if statement with two conditions (separated by an OR operator), one of the conditions covers +70% of situations and takes far less time to process/execute than the second condition, so in the interests of speed I only want the second condition to be processed if the first condition evaluates to false. if I order the conditions so that the first condition (the quicker one) appears in the if statement first - on the occasions where this condition is met and evaluates true is the second condition even processed? if ( (condition1) | (condition2) ){ // do this } or would I need to nest two if statements to only check the second condition if the first evaluates to false? if (condition1){ // do this }else if (condition2){ // do this } I am working in PHP, however, I assume that this may be language-agnostic. A: For C, C++, C#, Java and other .NET languages boolean expressions are optimised so that as soon as enough is known nothing else is evaluated. An old trick for doing obfuscated code was to use this to create if statements, such as: a || b(); if "a" is true, "b()" would never be evaluated, so we can rewrite it into: if(!a) b(); and similarly: a && b(); would become if(a) b(); Please note that this is only valid for the || and && operator. The two operators | and & is bitwise or, and and, respectively, and are therefore not "optimised". EDIT: As mentioned by others, trying to optimise code using short circuit logic is very rarely well spent time. First go for clarity, both because it is easier to read and understand. Also, if you try to be too clever a simple reordering of the terms could lead to wildly different behaviour without any apparent reason. Second, go for optimisation, but only after timing and profiling. Way too many developer do premature optimisation without profiling. Most of the time it's completely useless. A: Pretty much every language does a short circuit evaluation. Meaning the second condition is only evaluated if it's aboslutely necessary to. For this to work, most languages use the double pipe, ||, not the single one, |. See http://en.wikipedia.org/wiki/Short-circuit_evaluation A: In C, C++ and Java, the statement: if (condition1 | condition2) { ... } will evaluate both conditions every time and only be true if the entire expression is true. The statement: if (condition1 || condition2) { ... } will evaluate condition2 only if condition1 is false. The difference is significant if condition2 is a function or another expression with a side-effect. There is, however, no difference between the || case and the if/else case. A: I've seen a lot of these types of questions lately--optimization to the nth degree. I think it makes sense in certain circumstances: * *Computing condition 2 is not a constant time operation *You are asking strictly for educational purposes--you want to know how the language works, not to save 3us. In other cases, worrying about the "fastest" way to iterate or check a conditional is silly. Instead of writing tests which require millions of trials to see any recordable (but insignificant) difference, focus on clarity. When someone else (could be you!) picks up this code in a month or a year, what's going to be most important is clarity. In this case, your first example is shorter, clearer and doesn't require you to repeat yourself. A: According to this article PHP does short circuit evaluation, which means that if the first condition is met the second is not even evaluated. 
It's quite easy to test also (from the article): <?php /* ch06ex07 – shows no output because of short circuit evaluation */ if (true || $intVal = 5) // short circuits after true { echo $intVal; // will be empty because the assignment never took place } ?> A: The short-circuiting is not for optimization. Its main purpose is to avoid calling code that will not work, yet still result in a readable test. Example: if (i < array.size() && array[i]==foo) ... Note that array[i] may very well get an access violation if i is out of range and crash the program. Thus this program is certainly depending on short-circuiting the evaluation! I believe this is the reason for writing expressions this way far more often than optimization concerns. A: While using short-circuiting for the purposes of optimization is often overkill, there are certainly other compelling reasons to use it. One such example (in C++) is the following: if( pObj != NULL && *pObj == "username" ) { // Do something... } Here, short-circuiting is being relied upon to ensure that pObj has been allocated prior to dereferencing it. This is far more concise than having nested if statements. A: Since this is tagged language agnostic I'll chime in. For Perl at least, the first option is sufficient; I'm not familiar with PHP. It evaluates left to right and drops out as soon as the condition is met. A: In most languages with decent optimization the former will work just fine. A: The | is a bitwise operator in PHP. It does not mean $a OR $b, exactly. You'll want to use the double-pipe. And yes, as mentioned, PHP does short-circuit evaluation. In similar fashion, if the first condition of an && clause evaluates to false, PHP does not evaluate the rest of the clause, either. A: VB.net has two wonderful expressions called "OrElse" and "AndAlso". OrElse will short-circuit the first time it reaches a True evaluation and execute the code you desire. If FirstName = "Luke" OrElse FirstName = "Darth" Then Console.Writeline "Greetings Exalted One!" End If AndAlso will short-circuit the first time it reaches a False evaluation and not evaluate the code within the block. If FirstName = "Luke" AndAlso LastName = "Skywalker" Then Console.Writeline "You are the one and only." End If I find both of these helpful.
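A small PHP sketch of the distinction several answers draw between || (short-circuits) and | (bitwise, always evaluates both sides). The slowCheck() function is a made-up stand-in for the expensive second condition from the question:

<?php
function slowCheck() {
    echo "slowCheck ran\n";   // stands in for the expensive second condition
    return true;
}

$fast = true;

if ($fast || slowCheck()) {   // slowCheck() is never called: || short-circuits
    echo "via ||\n";
}

if ($fast | slowCheck()) {    // slowCheck() IS called: | is bitwise, both sides evaluate
    echo "via |\n";
}

Running this prints "via ||" without "slowCheck ran" for the first test, and both lines for the second - so ordering the cheap condition first in an || expression is all that's needed; no nested if is required.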
{ "language": "en", "url": "https://stackoverflow.com/questions/34938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Best practices for debugging linking errors When building projects in C++, I've found debugging linking errors to be tricky, especially when picking up other people's code. What strategies do people use for debugging and fixing linking errors? A: One of the common linking errors I've run into is when a function is used differently from how it's defined. If you see such an error you should make sure that every function you use is properly declared in some .h file. You should also make sure that all the relevant source files are compiled into the same lib file. An error I've run into is when I have two sets of files compiled into two separate libraries, and I cross-call between libraries. Is there a failure you have in mind? A: The C-runtime libraries are often the biggest culprit. Making sure all your projects have the same settings wrt single vs multi-threading and static vs dll. The MSDN documentation is good for pointing out which lib a particular Win32 API call requires if it comes up as missing. Other than that it usually comes down to turning on the verbose flag and wading through the output looking for clues. A: Not sure what your level of expertise is, but here are the basics. Below is a linker error from VS 2005 - yes, it's a giant mess if you're not familiar with it. ByteComparator.obj : error LNK2019: unresolved external symbol "int __cdecl does_not_exist(void)" (?does_not_exist@@YAHXZ) referenced in function "void __cdecl TextScan(struct FileTextStats &,char const *,char const *,bool,bool,__int64)" (?TextScan@@YAXAAUFileTextStats@@PBD1_N2_J@Z) There are a couple of points to focus on: * *"ByteComparator.obj" - Look for a ByteComparator.cpp file, this is the source of the linker problem *"int __cdecl does_not_exist(void)" - This is the symbol it couldn't find, in this case a function named does_not_exist() At this point, in many cases the fastest way to resolution is to search the code base for this function and find where the implementation is. Once you know where the function is implemented you just have to make sure the two places get linked together. If you're using VS2005, you would use the "Project Dependencies..." right-click menu. If you're using gcc, you would look in your makefiles for the executable generation step (gcc called with a bunch of .o files) and add the missing .o file. In a second scenario, you may be missing an "external" dependency, which you don't have code for. The Win32 libraries are often times implemented in static libraries that you have to link to. In this case, go to MSDN or "Microsoft Google" and search for the API. At the bottom of the API description the library name is given. Add this to your project properties "Configuration Properties->Linker->Input->Additional Dependencies" list. For example, the function timeGetTime()'s page on MSDN tells you to use Winmm.lib at the bottom of the page.
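A minimal sketch of the pattern described above - a function that is declared and called but never defined in any object file or library handed to the linker. The file names are made up; the point is that the compiler is satisfied by the declaration alone, and only the link step fails:

// missing.h -- declaration only; nothing defines the function yet.
int does_not_exist();

// main.cpp
#include "missing.h"

int main()
{
    // Compiles fine, but the linker reports an unresolved external symbol
    // (LNK2019 in Visual Studio, "undefined reference" from GNU ld).
    return does_not_exist();
}

// Fix: make sure a translation unit or library that defines the symbol is
// actually part of the link, e.g. add missing.cpp to the project or the
// containing .lib to the linker inputs.
// missing.cpp
int does_not_exist()
{
    return 0;
}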
{ "language": "en", "url": "https://stackoverflow.com/questions/34955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: Tools for finding memory corruption in managed C++ code I have a .NET application that uses an open source C++ compression library for compressing images. We are accessing the C++ library via managed C++. I'm seeing heap corruption during compression. A call to _CrtIsValidHeapPointer is finding an error on a call to free() when cleaning up after compression. Are there tools such as Purify to help diagnose this problem and find what is causing the heap corruption when working in a combination of managed and unmanaged code? I do have the exception caught in the debugger, but it would be nice to have other tools to help find the solution to the problem. A: On *nix, there's a tool called Valgrind that I use for dealing with memory issues, like memory leaks and memory corruption. A: In native code, if the corruption always occurs in the same place in memory, you can use a data breakpoint to break into the debugger when that memory is changed. Unfortunately, you cannot set a data breakpoint in the managed C++ environment, presumably because the GC could move the object in memory. Not sure if this helps, but hopefully it points you in the right direction. A: Rational Purify for Windows supports .NET, so I guess that could be used.
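One more native-side option, assuming a debug build with the Visual C++ debug CRT (the same runtime that raised the _CrtIsValidHeapPointer assert): turn on the debug heap's consistency checks so the failure surfaces near the corrupting write instead of at the later free(). A rough sketch:

#include <crtdbg.h>

int main()
{
    // Debug builds only: verify the heap on every allocation/free so the break
    // happens close to the corrupting write rather than at cleanup time.
    int flags = _CrtSetDbgFlag(_CRTDBG_REPORT_FLAG);
    _CrtSetDbgFlag(flags | _CRTDBG_ALLOC_MEM_DF | _CRTDBG_CHECK_ALWAYS_DF | _CRTDBG_LEAK_CHECK_DF);

    // ... run the managed C++ wrapper / compression call under the debugger here ...

    _CrtCheckMemory();   // explicit spot-check; reports through the debugger if the heap is damaged
    return 0;
}

_CRTDBG_CHECK_ALWAYS_DF makes every allocation noticeably slower, so it is something to switch on only while hunting the corruption.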
{ "language": "en", "url": "https://stackoverflow.com/questions/34973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Branching Strategies The company I work for is starting to have issues with their current branching model and I was wondering what different kinds of branching strategies the community has been exposed to? Are there any good ones for different situations? What does your company use? What are the advantages and disadvantages of them?? A: Our repository looks like: /trunk /branches /sandbox /vendor /ccnet /trunk is your standard, bleeding edge development. We use CI so this must always build and pass tests. /branches this is where we put 'sanctioned' large changes, ie something we KNOW will make it into trunk but may need some work and would break CI. Also where we work on maintenance releases, which have their own CI projects. /sandbox each developer has their own sandbox, plus a shared sandbox. This is for things like "Lets add a LINQ provider to our product" type of tasks that you do when you are not doing your real work. It may eventually go into trunk, it may not, but it is there and under version control. No CI here. /vendor standard vendor branch for projects where we compile but it is not code that we maintain. /ccnet this is our CI tags, only the CI server can write in here. Hindsight would have told us to rename this to something more generic such as CI, BUILDS, etc. A: * *One branch for the active development (/main or master, depending on the jargon) *One branch for each maintenance release -> it will receive only really small fixes, while all major development goes to /main *One branch for each new task: create a new branch to work on every new entry on your Bugzilla/Jira/Rally. Commit often, self document the change using inch pebble checkins, and merge it back to its "parent" branch only when it's finished and well tested. Take a look at this http://codicesoftware.blogspot.com/2010/03/branching-strategies.html for a better explanation A: The first thing: KISS (Keep it simple stupid!) /branches /RB-1.0 (*1) /RB-1.1 (*1) /RB-2.0 (*1) /tags /REL-1.0 (or whatever your version look like e.g. 1.0.0.123 *2) /REL-1.1 /REL-2.0 /trunk current development with cool new features ;-) *1) Keep version maintainable - e.g. Service Packs, Hotfixes, Bugfixes which may be merged to trunk if necessary and/or needed) *2) major.minor.build.revision Rules of the thumb: * *The Tags folder need not to be checked out *Only few coding in release branches (makes merging simpler) - no code cleanup etc. *Never to coding in tags folder *Never put concrete version information into source files. Use Place-holders or 0.0.0.0 which the build mechanism will replace by the version number you're building *Never put third party libraries into your source control (also no one will add STL, MFC etc. libraries to SVN ;-)) *Only commit code that compiles *Prefer using environment variables instead of hard-coded paths (absolute and relative paths) --hfrmobile A: Here is the method I've used in the past with good success: /trunk - bleeding edge. Next major release of the code. May or may not work at any given time. /branches/1.0, 1.1, etc. Stable maintenance branches of the code. Used to fix bugs, stabilize new releases. If a maintenance branch, it should compile (if applicable) and be ready for QA/shipping at any given time. If a stabilization branch, it should compile and be feature complete. No new features should be added, no refactoring, and no code cleanups. You can add a pre- prefix to indicate stabilization branches vs maintenance branches. /branches/cool_feature. 
Used for highly experimental or destructive work that may or may not make it into trunk (or a maintenance branch). No guarantees about code compiling, working, or otherwise behaving sanely. Should last the minimum time as possible before merging into the mainline branch. /tags/1.0.1, 1.0.2, 1.1.3a, etc. Used for tagging a packaged & shipped release. Never EVER changes. Make as many tags as you want, but they're immutable. A: We branch when a release is ready for final QA. If any issues are discovered during the QA process, the bugs are fixed in the branch, validated and then merged to the trunk. Once the branch passes QA we tag it as a release. Any hotfixes for that release are also done to the branch, validated, merged to the trunk and then tagged as a separate release. The folder structure would look like this (1 QA line, 2 hotfix releases, and the trunk): /branches /REL-1.0 /tags /REL-1.0 /REL-1.0.1 /REL-1.0.2 /trunk A: We use the wild, wild, west style of git-branches. We have some branches that have well-known names defined by convention, but in our case, tags are actually more important for us to meet our corporate process policy requirements. I saw below that you use Subversion, so I'm thinking you probably should check out the section on branching in the Subversion Book. Specifically, look at the "repository layout" section in Branch Maintenance and Common Branch Patterns. A: The alternative I'm not seeing here is a "Branch on Change" philosophy. Instead of having your trunk the "Wild West", what if the trunk is the "Current Release"? This works well when there is only one version of the application released at a time - such as a web site. When a new feature or bug fix is necessary a branch is made to hold that change. Often this allows the fixes to be migrated to release individually and prevents your cowboy coders from accidentally adding a feature to release that you didn't intend. (Often it's a backdoor - "Just for development/testing") The pointers from Ben Collins are quite useful in determining what style would work well for your situation. A: Gnat has written this excellent break down on the various bits of advice your can find on branching strategies. There's not one branching strategy, it's what works for: * *Your team size *Your product and the lifecycle periods *The technology you're using (web, embedded, windows apps) *Your source control, e.g. Git, TFS, Hg Jeff Atwood's post breaks down a lot of possibilities. Another to add is the concept of promotion (from Ryan Duffield's link). In this setup you have a dev branch, test bracnh and release branch. You promote your code up until it reaches the release branch and is deployed. A: We currently have one branch for ongoing maintenance, one branch for "new initiatives" which just means "stuff that will come out sometime in the future; we're not sure when." We have also occasionally had two maintenance branches going on: one to provide fixes for what is currently in production and one that is still in QA. The main advantage we've seen is the ability to react to user requests and emergencies more rapidly. We can do the fix on the branch that is in production and release it without releasing anything extra that may have already been checked in. The main disadvantage is that we end up doing a lot of merging between branches, which increases the chance that something will get missed or merged incorrectly. So far, that hasn't been a problem, but it is definitely something to keep in mind. 
Before we instituted this policy, we generally did all development in the trunk and only branched when we released code. We then did fixes against that branch as needed. It was simpler, but not as flexible. A: The philosophy that we follow at work is to keep the trunk in a state where you can push at any time without drastic harm to the site. This is not to say that the trunk will always be in a perfect state. There will of course be bugs in it. But the point is to never, ever leave it broken drastically. If you have a feature to add, branch. A design change, branch. There have been so many times where I thought, "oh I can just do this in the trunk, it isn't going to take that long", and then 5 hours later, when I can't figure out the bug that is breaking things, I really wished that I had branched. When you keep the trunk clean you allow the opportunity to quickly apply and push out bug fixes. You don't have to worry about the broken code you have that you conveniently branched off. A: For Subversion, I agree with Ryan Duffield's comment. The chapter he refers to provides a good analysis of which system to use. The reason I asked is that Perforce provides a completely different way to create branches from SVN or CVS. Plus, there are all the DVCSs, each with its own philosophy on branching. Your branching strategy would be dictated by which tool(s) you're using. FYI, svnmerge.py is a tool to assist with merging branches in SVN. It works very well as long as you use it frequently (every 10-30 commits), otherwise the tool can get confused. A: No matter which branching pattern is chosen, you should try to keep your branches in a binary tree form like this:

trunk - tags
  |
 next
 /   \    \
bugfix  f1  f2
       /   /  \
     f11 f21  f22

* *Child nodes should only merge with the direct parent. *Try your best to merge only the whole branch with the parent branch. Never merge subfolders within a branch. *You may cherry-pick commits when needed, as long as you only merge and pick from the whole branch. *The next branch in the figure above is only for illustration; you may not need it.
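Since most of the answers above assume Subversion, here is a rough sketch of the commands that go with the branch-per-release layout they describe. The repository URL, branch names and revision number are placeholders, not anything taken from the answers.

# create a release branch from trunk
svn copy http://svn.example.com/repo/trunk \
         http://svn.example.com/repo/branches/RB-1.0 \
         -m "Create release branch 1.0"

# tag a tested build of the branch as a release (tags are never modified afterwards)
svn copy http://svn.example.com/repo/branches/RB-1.0 \
         http://svn.example.com/repo/tags/REL-1.0.0 \
         -m "Tag release 1.0.0"

# merge a bug fix made on the release branch back to trunk (Subversion 1.5+ syntax)
cd trunk-working-copy
svn merge -c 1234 http://svn.example.com/repo/branches/RB-1.0
svn commit -m "Merge fix r1234 from RB-1.0 to trunk"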
{ "language": "en", "url": "https://stackoverflow.com/questions/34975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "73" }
Q: Byte level length description I have a protocol that requires a length field up to 32-bits, and it must be generated at runtime to describe how many bytes are in a given packet. The code below is kind of ugly but I am wondering if this can be refactored to be slightly more efficient or easily understandable. The problem is that the code will only generate enough bytes to describe the length of the packet, so less than 255 bytes = 1 byte of length, less than 65535 = 2 bytes of length etc... { extern char byte_stream[]; int bytes = offset_in_packet; int n = length_of_packet; /* Under 4 billion, so this can be represented in 32 bits. */ int t; /* 32-bit number used for temporary storage. */ /* These are the bytes we will break up n into. */ unsigned char first, second, third, fourth; t = n & 0xFF000000; /* We have used AND to "mask out" the first byte of the number. */ /* The only bits which can be on in t are the first 8 bits. */ first = t >> 24; if (t) { printf("byte 1: 0x%02x\n",first ); byte_stream[bytes] = first; bytes++; write_zeros = 1; } /* Now we shift t so that it is between 0 and 255. This is the first, highest byte of n. */ t = n & 0x00FF0000; second = t >> 16; if (t || write_zeros) { printf("byte 2: 0x%02x\n", second ); byte_stream[bytes] = second; bytes++; write_zeros = 1; } t = n & 0x0000FF00; third = t >> 8; if ( t || write_zeros) { printf("byte 3: 0x%02x\n", third ); byte_stream[bytes] = third; bytes++; write_zeros = 1; } t = n & 0x000000FF; fourth = t; if (t || write_zeros) { printf("byte 4: 0x%02x\n", fourth); byte_stream[bytes] = fourth; bytes++; } } A: You should really use a fixed-width field for your length. * *When the program on the receiving end has to read the length field of your packet, how does it know where the length stops? *If the length of a packet can potentially reach 4 GB, does a 1-3 byte overhead really matter? *Do you see how complex your code has already become? A: Try this loop: { extern char byte_stream[]; int bytes = offset_in_packet; int n = length_of_packet; /* Under 4 billion, so this can be represented in 32 bits. */ int t; /* 32-bit number used for temporary storage. */ int i; unsigned char curByte; for (i = 0; i < 4; i++) { t = n & (0xFF000000 >> (i * 16)); curByte = t >> (24 - (i * 8)); if (t || write_zeros) { printf("byte %d: 0x%02x\n", i, curByte ); byte_stream[bytes] = curByte; bytes++; write_zeros = 1; } } } A: I'm not sure I understand your question. What exactly are you trying to count? If I understand correctly you're trying to find the Most Significant non-zero byte. You're probably better off using a loop like this: int i; int write_zeros = 0; for (i = 3; i >=0 ; --i) { t = (n >> (8 * i)) & 0xff; if (t || write_zeros) { write_zeros = 1; printf ("byte %d : 0x%02x\n", 4-i, t); byte_stream[bytes++] = t; } } A: Really you're only doing four calculations, so readability seems way more important here than efficiency. My approach to make something like this more readable is to * *Extract common code to a function *Put similar calculations together to make the patterns more obvious *Get rid of the intermediate variable print_zeroes and be explicit about the cases in which you output bytes even if they're zero (i.e. the preceding byte was non-zero) I've changed the random code block into a function and changed a few variables (underscores are giving me trouble in the markdown preview screen). I've also assumed that bytes is being passed in, and that whoever is passing it in will pass us a pointer so we can modify it. 
Here's the code: /* append byte b to stream, increment index */ /* really needs to check length of stream before appending */ void output( int i, unsigned char b, char stream[], int *index ) { printf("byte %d: 0x%02x\n", i, b); stream[(*index)++] = b; } void answer( char bytestream[], unsigned int *bytes, unsigned int n) { /* mask out four bytes from word n */ first = (n & 0xFF000000) >> 24; second = (n & 0x00FF0000) >> 16; third = (n & 0x0000FF00) >> 8; fourth = (n & 0x000000FF) >> 0; /* conditionally output each byte starting with the */ /* first non-zero byte */ if (first) output( 1, first, bytestream, bytes); if (first || second) output( 2, second, bytestream, bytes); if (first || second || third) output( 3, third, bytestream, bytes); if (first || second || third || fourth) output( 4, fourth, bytestream, bytes); } Ever so slightly more efficient, and maybe easier to understand would be this modification to the last four if statements: if (n>0x00FFFFFF) output( 1, first, bytestream, bytes); if (n>0x0000FFFF) output( 2, second, bytestream, bytes); if (n>0x000000FF) output( 3, third, bytestream, bytes); if (1) output( 4, fourth, bytestream, bytes); I agree, however, that compressing this field makes the receiving state machine overly complicated. But if you can't change the protocol, this code is much easier to read.
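If the protocol can still be changed, the fixed-width field recommended in the first answer would look something like the sketch below. It is only an illustration; the byte_stream/offset names are reused from the question, and the choice of big-endian (network) byte order is an assumption.

/* Write the length as a fixed 4-byte big-endian field. */
void write_length32(unsigned char *byte_stream, int offset, unsigned long n)
{
    byte_stream[offset + 0] = (unsigned char)((n >> 24) & 0xFF);
    byte_stream[offset + 1] = (unsigned char)((n >> 16) & 0xFF);
    byte_stream[offset + 2] = (unsigned char)((n >> 8)  & 0xFF);
    byte_stream[offset + 3] = (unsigned char)( n        & 0xFF);
}

/* The receiver then always reads exactly four bytes, with no state machine needed. */
unsigned long read_length32(const unsigned char *byte_stream, int offset)
{
    return ((unsigned long)byte_stream[offset]     << 24)
         | ((unsigned long)byte_stream[offset + 1] << 16)
         | ((unsigned long)byte_stream[offset + 2] << 8)
         |  (unsigned long)byte_stream[offset + 3];
}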
{ "language": "en", "url": "https://stackoverflow.com/questions/34977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Scrum: Resistance is (not) futile I'm the second dev and a recent hire here at a PHP/MySQL shop. I was hired mostly due to my experience in wrangling some sort of process out of a chaotic mess. At least, that's what I did at my last company. ;) Since I've been here (a few months now), I've brought on board my boss, my product manager and several other key figures (But mostly chickens, if you pardon the Scrum-based stereotyping). I've also helped bring in some visibility to the development cycle of a major product that has been lagging for over a year. People are loving it! However, my coworker (the only other dev here for now) is not into it. She prefers to close her door and focus on her work and be left alone. Me? I'm into the whole Agile approach of collaboration, cooperation and openness. Without her input, I started the Scrum practices (daily scrums, burndown charts and other things I've found that worked for me and my previous teams (ala H. Kniberg's cool wall chart). During our daily stand up she slinks by and ignores us as if we actually weren't standing right outside her door (we are actually). It's pretty amazing. I've never seen such resistance. Question... how do I get her onboard? Peer pressure is not working. Thanks from fellow Scrum-borg, beaudetious A: beaudetious, buddy, I would really suggest you read Steve Yegge's blog called "Good Agile, Bad Agile". It's an oldy but a goody, and I think it's a must read for anyone - like myself about 2 months ago - who gets a little let's say "over-eager" to agile-up their workplace. Agile offers a lot of good practices, but you have to take them all with a grain of salt and adopt what you're lacking and skip out on all the other crud that might be unuseful for a particular situation - e.g. the daily scrum. If your co-worker would just like to code in quiet (read Peopleware for why this is a good thing) and she's being a productive team member quit bugging her with your scrumming a let her work in whatever way she likes most. People are usually less "hostile" about these practices if you just approach them and simply say "Do you have a sec? Listen, communication is really a problem right now, I feel like I don't know what you're doing and I really don't want to step on your toes again and spend two days writing something you already did like last week, so let's work on this. I'd like to try X, what do you think?". Be compassionate and don't tolerate "bad apples", that's literally how I agiled up my workplace, and many problems have started evaporating. We're by no means an 100% XP or 100% Scrum compliant place, because we just use whatever works and was needed. A: Simple. Don't talk about scrum. Don't use scrum on her. Instead take the underlying principles of scrum (e.g. the purpose as opposed to the application) and create different approaches that accommodate her way of working but have subtle tints of scrum. All humans are different and a lot of programmers dislike scrum. I wouldn't force it upon them as that would just be counter-productive. I'd suggest identifying the problems in the development process (in a non-scrum fashion), see if you can get her to agree that the issues exist, then ask her what she thinks would be a good solution. Her co-operation and input into the process is essential to her co-operation, if she doesn't have buy-in she wont become a citizen. From there on in you can hopefully create some sort of quasi-hybrid scrum + her approach to the process where you can both agree on the way forward. 
A: I think the key would be to help her understand why you are doing Scrum in the first place. I guess you have your reasons, so why not tell her? You are likely to get resistance towards any change if the people involved don't understand why there is change or what they will benefit from it. If you can explain your reasons for using Scrum, and the following benefits, to her in a way that relates to her everyday work, I think she is more likely to adapt a more positive attitude towards it. If she sees no value in the Scrum process, or doesn't understand how it relates to her, she probably won't care about it. I think one of the most important concepts for someone to understand regarding Scrum is the fact that you are working as a group and commit to your project as a group, not as individuals. For many people, this is the hardest thing to grasp, since they are so used to living in "their own World". A: While Scrum other agile methodologies like it embody a lot of good practices, sometimes giving it a name and making it (as many bloggers have commented on) a "religion" that must be adopted in the workplace is rather offputting to a lot of people, including myself. It depends on what your options and commitments are, but I know I'd be a lot more keen on accepting ideas because they are good ideas, not because they are a bandwagon. Try implementing/drawing her in to the practices one at a time, by showing her how they can improve her life and workflow as well. Programmers love cool things that help them get stuff done. They hate being preached at or being asked to board what they see as a bandwagon. Present it as the former rather than the latter. (It goes without saying, make sure it actually IS the former) Edit: another question I've never actually worked for a place that used a specific agile methodology, though I'm pretty happy where I'm at now in that we incorporate a lot of agile practices without the hype and the dogma (best of both worlds, IMHO). But I was just reading about Scrum and, is a system like that even beneficial for a 2 person team? Scrum does add a certain amount of overhead to a project, it seems, and that might outweigh the benefits when you have a very small team where communication and planning is already easy. A: Without her input, I started the Scrum practices (daily scrums, burndown charts and other things I've found that worked for me and my previous teams (ala H. Kniberg's cool wall chart). During out daily stand up she slinks by and ignores us as if we actually weren't standing right outside her door (we are actually). It's pretty amazing. I've never seen such resistance. Question... how do I get her onboard? Peer pressure is not working. Yikes! Who would ever want to work in such an oppressive environment? If you're lucky, she's sending around her resume and you'll be able to hire someone who is on board with your development process. Assuming you want to hang on to her, I'd turn down (or off) the rhetoric and work on being a friend and co-worker first. If the project is a year late, she can't be feeling good about herself and it sounds like you aren't afraid to trumpet your success. That can be intimidating. I know nothing about Scrum, however. I'm just imagining what it would be like to walk around in your co-worker's shoes. A: I'm not sure Scrum is the central issue here; I'm guessing she feels threatened by the new guy bringing in a lot of new ideas and stirring things up. 
I've been in that situation before as the new person bringing in a new perspective on things, and sometimes it's just difficult to immediately bring those existing people around to a new way of thinking. It often requires a culture shift which doesn't happen overnight. Try to get her input and opinion on things as much as possible, and try to show that you respect that she has been on the team longer than you. If after a while she still doesn't participate, then all you can do is mention it to your Manager and let them take it from there. A: Continue your efforts to involve the other developer. Remember you are the one who wants to make this change. Ask for help with problems you have. Invite them to the daily stand up meeting. I currently do the planning for the daily stand up and I make sure all the pigs and chickens are invited. If you are the lead on the project it is up to you to address the situation and take a risk. Put yourself out there.
{ "language": "en", "url": "https://stackoverflow.com/questions/34981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to declare an array of strings in C++? I am trying to iterate over all the elements of a static array of strings in the best possible way. I want to be able to declare it on one line and easily add/remove elements from it without having to keep track of the number. Sounds really simple, doesn't it? Possible non-solutions: vector<string> v; v.push_back("abc"); b.push_back("xyz"); for(int i = 0; i < v.size(); i++) cout << v[i] << endl; Problems - no way to create the vector on one line with a list of strings Possible non-solution 2: string list[] = {"abc", "xyz"}; Problems - no way to get the number of strings automatically (that I know of). There must be an easy way of doing this. A: Declare an array of strings in C++ like this : char array_of_strings[][] For example : char array_of_strings[200][8192]; will hold 200 strings, each string having the size 8kb or 8192 bytes. use strcpy(line[i],tempBuffer); to put data in the array of strings. A: One possiblity is to use a NULL pointer as a flag value: const char *list[] = {"dog", "cat", NULL}; for (char **iList = list; *iList != NULL; ++iList) { cout << *iList; } A: You can use the begin and end functions from the Boost range library to easily find the ends of a primitive array, and unlike the macro solution, this will give a compile error instead of broken behaviour if you accidentally apply it to a pointer. const char* array[] = { "cat", "dog", "horse" }; vector<string> vec(begin(array), end(array)); A: You can concisely initialize a vector<string> from a statically-created char* array: char* strarray[] = {"hey", "sup", "dogg"}; vector<string> strvector(strarray, strarray + 3); This copies all the strings, by the way, so you use twice the memory. You can use Will Dean's suggestion to replace the magic number 3 here with arraysize(str_array) -- although I remember there being some special case in which that particular version of arraysize might do Something Bad (sorry I can't remember the details immediately). But it very often works correctly. Also, if you're really gung-ho about the one line thingy, you can define a variadic macro so that a single line such as DEFINE_STR_VEC(strvector, "hi", "there", "everyone"); works. A: Here's an example: #include <iostream> #include <string> #include <vector> #include <iterator> int main() { const char* const list[] = {"zip", "zam", "bam"}; const size_t len = sizeof(list) / sizeof(list[0]); for (size_t i = 0; i < len; ++i) std::cout << list[i] << "\n"; const std::vector<string> v(list, list + len); std::copy(v.begin(), v.end(), std::ostream_iterator<string>(std::cout, "\n")); } A: You can use Will Dean's suggestion [#define arraysize(ar) (sizeof(ar) / sizeof(ar[0]))] to replace the magic number 3 here with arraysize(str_array) -- although I remember there being some special case in which that particular version of arraysize might do Something Bad (sorry I can't remember the details immediately). But it very often works correctly. The case where it doesn't work is when the "array" is really just a pointer, not an actual array. Also, because of the way arrays are passed to functions (converted to a pointer to the first element), it doesn't work across function calls even if the signature looks like an array β€” some_function(string parameter[]) is really some_function(string *parameter). A: Problems - no way to get the number of strings automatically (that i know of). 
There is a bog-standard way of doing this, which lots of people (including MS) define macros like arraysize for: #define arraysize(ar) (sizeof(ar) / sizeof(ar[0])) A: Instead of that macro, might I suggest this one: template<typename T, int N> inline size_t array_size(T(&)[N]) { return N; } #define ARRAY_SIZE(X) (sizeof(array_size(X)) ? (sizeof(X) / sizeof((X)[0])) : -1) 1) We want to use a macro to make it a compile-time constant; the function call's result is not a compile-time constant. 2) However, we don't want to use a macro because the macro could be accidentally used on a pointer. The function can only be used on compile-time arrays. So, we use the defined-ness of the function to make the macro "safe"; if the function exists (i.e. it has non-zero size) then we use the macro as above. If the function does not exist we return a bad value. A: #include <boost/foreach.hpp> const char* list[] = {"abc", "xyz"}; BOOST_FOREACH(const char* str, list) { cout << str << endl; } A: C++ 11 added initialization lists to allow the following syntax: std::vector<std::string> v = {"Hello", "World"}; Support for this C++ 11 feature was added in at least GCC 4.4 and only in Visual Studio 2013. A: #include <iostream> #include <string> #include <vector> #include <boost/assign/list_of.hpp> int main() { const std::vector< std::string > v = boost::assign::list_of( "abc" )( "xyz" ); std::copy( v.begin(), v.end(), std::ostream_iterator< std::string >( std::cout, "\n" ) ); } A: You can directly declare an array of strings like string s[100];. Then if you want to access specific elements, you can get it directly like s[2][90]. For iteration purposes, take the size of string using the s[i].size() function.
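Pulling the C++11 pieces mentioned above together, the whole question collapses into a few lines. This sketch assumes a C++11 (or later) compiler; the range-based for loop is not shown in the answers but is the natural companion to the initializer-list syntax.

#include <iostream>
#include <string>
#include <vector>

int main()
{
    const std::vector<std::string> v = {"abc", "xyz"};  // declared on one line, easy to add/remove items
    for (const auto& s : v)                              // no explicit count needed
        std::cout << s << '\n';

    // A built-in array works too, since range-based for knows its size:
    // const std::string list[] = {"abc", "xyz"};
    // for (const auto& s : list) std::cout << s << '\n';
}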
{ "language": "en", "url": "https://stackoverflow.com/questions/34987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "91" }
Q: How to transform a WebService call that is using behaviours? We have some really old code that calls WebServices using behaviours (webservice.htc), and we are having some strange problems... since they were deprecated a long time ago, I want to change the call. What's the correct way of doing it? It's ASP.NET 1.1 A: You should be able to generate a proxy class using wsdl.exe. Then just use the web service as you normally would. A: While I'm not 100% sure what the Web Service behavior does, I recall it allows client-side script to call Web Services, which would make AJAX its contemporary replacement. Since you're using .NET 1.1, how about using Ajax.NET Professional to connect to the web services?
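To make the wsdl.exe suggestion from the first answer concrete, something like the following would generate and use a proxy. The URL, namespace and class names are placeholders; the generated members depend entirely on the WSDL of the service being called (this is .NET 1.1, so no WCF / "Add Service Reference").

wsdl.exe /language:CS /namespace:MyApp.Proxies /out:MyServiceProxy.cs http://server/MyService.asmx?WSDL

// Compile the generated file into the project, then call the service through the proxy:
MyApp.Proxies.MyService svc = new MyApp.Proxies.MyService();
svc.Url = "http://server/MyService.asmx";   // can also come from configuration
string result = svc.SomeWebMethod("argument");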
{ "language": "en", "url": "https://stackoverflow.com/questions/34988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Does C# have a way of giving me an immutable Dictionary? Is there anything built into the core C# libraries that can give me an immutable Dictionary? Something along the lines of Java's: Collections.unmodifiableMap(myMap); And just to clarify, I am not looking to stop the keys / values themselves from being changed, just the structure of the Dictionary. I want something that fails fast and loud if any of IDictionary's mutator methods are called (Add, Remove, Clear). A: No, but a wrapper is rather trivial: public class ReadOnlyDictionary<TKey, TValue> : IDictionary<TKey, TValue> { IDictionary<TKey, TValue> _dict; public ReadOnlyDictionary(IDictionary<TKey, TValue> backingDict) { _dict = backingDict; } public void Add(TKey key, TValue value) { throw new InvalidOperationException(); } public bool ContainsKey(TKey key) { return _dict.ContainsKey(key); } public ICollection<TKey> Keys { get { return _dict.Keys; } } public bool Remove(TKey key) { throw new InvalidOperationException(); } public bool TryGetValue(TKey key, out TValue value) { return _dict.TryGetValue(key, out value); } public ICollection<TValue> Values { get { return _dict.Values; } } public TValue this[TKey key] { get { return _dict[key]; } set { throw new InvalidOperationException(); } } public void Add(KeyValuePair<TKey, TValue> item) { throw new InvalidOperationException(); } public void Clear() { throw new InvalidOperationException(); } public bool Contains(KeyValuePair<TKey, TValue> item) { return _dict.Contains(item); } public void CopyTo(KeyValuePair<TKey, TValue>[] array, int arrayIndex) { _dict.CopyTo(array, arrayIndex); } public int Count { get { return _dict.Count; } } public bool IsReadOnly { get { return true; } } public bool Remove(KeyValuePair<TKey, TValue> item) { throw new InvalidOperationException(); } public IEnumerator<KeyValuePair<TKey, TValue>> GetEnumerator() { return _dict.GetEnumerator(); } System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { return ((System.Collections.IEnumerable)_dict).GetEnumerator(); } } Obviously, you can change the this[] setter above if you want to allow modifying values. A: I know this is a very old question, but I somehow found it in 2020 so I suppose it may be worth noting that there is a way to create immutable dictionary now: https://learn.microsoft.com/en-us/dotnet/api/system.collections.immutable.immutabledictionary.toimmutabledictionary?view=netcore-3.1 Usage: using System.Collections.Immutable; public MyClass { private Dictionary<KeyType, ValueType> myDictionary; public ImmutableDictionary<KeyType, ValueType> GetImmutable() { return myDictionary.ToImmutableDictionary(); } } A: Adding onto dbkk's answer, I wanted to be able to use an object initializer when first creating my ReadOnlyDictionary. I made the following modifications: private readonly int _finalCount; /// <summary> /// Takes a count of how many key-value pairs should be allowed. /// Dictionary can be modified to add up to that many pairs, but no /// pair can be modified or removed after it is added. Intended to be /// used with an object initializer. /// </summary> /// <param name="count"></param> public ReadOnlyDictionary(int count) { _dict = new SortedDictionary<TKey, TValue>(); _finalCount = count; } /// <summary> /// To allow object initializers, this will allow the dictionary to be /// added onto up to a certain number, specifically the count set in /// one of the constructors. 
/// </summary> /// <param name="key"></param> /// <param name="value"></param> public void Add(TKey key, TValue value) { if (_dict.Keys.Count < _finalCount) { _dict.Add(key, value); } else { throw new InvalidOperationException( "Cannot add pair <" + key + ", " + value + "> because " + "maximum final count " + _finalCount + " has been reached" ); } } Now I can use the class like so: ReadOnlyDictionary<string, string> Fields = new ReadOnlyDictionary<string, string>(2) { {"hey", "now"}, {"you", "there"} }; A: The open-source PowerCollections library includes a read-only dictionary wrapper (as well as read-only wrappers for pretty much everything else), accessible via a static ReadOnly() method on the Algorithms class. A: As far as I know, there is not. But maybe you can copy some code (and learn a lot) from these articles: * *Immutability in C# Part One: Kinds of Immutability *Immutability in C# Part Two: A Simple Immutable Stack *Immutability in C# Part Three: A Covariant Immutable Stack *Immutability in C# Part Four: An Immutable Queue *Immutability in C# Part Five: LOLZ *Immutability in C# Part Six: A Simple Binary Tree *Immutability in C# Part Seven: More on Binary Trees *Immutability in C# Part Eight: Even More On Binary Trees *Immutability in C# Part Nine: Academic? Plus my AVL tree implementation *Immutability in C# Part 10: A double-ended queue *Immutability in C# Part Eleven: A working double-ended queue A: I don't think so. There is a way to create a read-only List and read only Collection, but I don't think there's a built in read only Dictionary. System.ServiceModel has a ReadOnlyDictinoary implementation, but its internal. Probably wouldn't be too hard to copy it though, using Reflector, or to simply create your own from scratch. It basically wraps an Dictionary and throws when a mutator is called. A: One workaround might be, throw a new list of KeyValuePair from the Dictionary to keep the original unmodified. var dict = new Dictionary<string, string>(); dict.Add("Hello", "World"); dict.Add("The", "Quick"); dict.Add("Brown", "Fox"); var dictCopy = dict.Select( item => new KeyValuePair<string, string>(item.Key, item.Value)); // returns dictCopy; This way the original dictionary won't get modified. A: With the release of .NET 4.5, there is a new ReadOnlyDictionary class. You simply pass an IDictionary to the constructor to create the immutable dictionary. Here is a helpful extension method which can be used to simplify creating the readonly dictionary. A: "Out of the box" there is not a way to do this. You can create one by deriving your own Dictionary class and implementing the restrictions you need. A: I've found an implementation of an Inmutable (not READONLY) implementation of a AVLTree for C# here. An AVL tree has logarithmic (not constant) cost on each operation, but stills fast. http://csharpfeeds.com/post/7512/Immutability_in_Csharp_Part_Nine_Academic_Plus_my_AVL_tree_implementation.aspx A: You could try something like this: private readonly Dictionary<string, string> _someDictionary; public IEnumerable<KeyValuePair<string, string>> SomeDictionary { get { return _someDictionary; } } This would remove the mutability problem in favour of having your caller have to either convert it to their own dictionary: foo.SomeDictionary.ToDictionary(kvp => kvp.Key); ... 
or use a comparison operation on the key rather than an index lookup, e.g.: foo.SomeDictionary.First(kvp => kvp.Key == "SomeKey"); A: In general it is a much better idea not to pass around any dictionaries in the first place (if you don't HAVE to). Instead, create a domain object with an interface that doesn't offer any methods that modify the dictionary it wraps, only the required lookup method that retrieves an element from the dictionary by key (the bonus is that it is easier to use than a dictionary as well). public interface IMyDomainObjectDictionary { IMyDomainObject GetMyDomainObject(string key); } internal class MyDomainObjectDictionary : IMyDomainObjectDictionary { public IDictionary<string, IMyDomainObject> _myDictionary { get; set; } public IMyDomainObject GetMyDomainObject(string key) {.._myDictionary .TryGetValue..etc...}; } A: Since LINQ, there is a generic interface ILookup. Read more in MSDN. Therefore, to get an immutable dictionary you may simply call: using System.Linq; // (...) var dictionary = new Dictionary<string, object>(); // (...) var read_only = dictionary.ToLookup(kv => kv.Key, kv => kv.Value); A: There's also another alternative, as I have described at http://www.softwarerockstar.com/2010/10/readonlydictionary-tkey-tvalue/ Essentially it's a subclass of ReadOnlyCollection<KeyValuePair<TKey, TValue>>, which gets the work done in a more elegant manner. Elegant in the sense that it has compile-time support for making the dictionary read-only, rather than throwing exceptions from methods that modify the items within it.
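To make the .NET 4.5 answer above concrete, here is a minimal sketch. The names are placeholders. Note that ReadOnlyDictionary<TKey, TValue> is only a wrapper: mutation attempts through the wrapper throw NotSupportedException (the fail-fast behaviour the question asks for), but code that still holds the original inner dictionary can change what the wrapper shows.

using System.Collections.Generic;
using System.Collections.ObjectModel;

// inside some method:
var inner = new Dictionary<string, string>
{
    { "Hello", "World" },
    { "The",   "Quick" }
};

IReadOnlyDictionary<string, string> frozen =
    new ReadOnlyDictionary<string, string>(inner);

string value = frozen["Hello"];   // reads work as usual
// ((IDictionary<string, string>)frozen).Add("Brown", "Fox");  // would throw NotSupportedException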
{ "language": "en", "url": "https://stackoverflow.com/questions/35002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "73" }
Q: How to expose a collection property? Every time I create an object that has a collection property I go back and forth on the best way to do it? * *public property with a getter that returns a reference to private variable *explicit get_ObjList and set_ObjList methods that return and create new or cloned objects every time *explicit get_ObjList that returns an IEnumerator and a set_ObjList that takes IEnumerator Does it make a difference if the collection is an array (i.e., objList.Clone()) versus a List? If returning the actual collection as a reference is so bad because it creates dependencies, then why return any property as a reference? Anytime you expose an child object as a reference the internals of that child can be changed without the parent "knowing" unless the child has a property changed event. Is there a risk for memory leaks? And, don't options 2 and 3 break serialization? Is this a catch 22 or do you have to implement custom serialization anytime you have a collection property? The generic ReadOnlyCollection seems like a nice compromise for general use. It wraps an IList and restricts access to it. Maybe this helps with memory leaks and serialization. However it still has enumeration concerns Maybe it just depends. If you don't care that the collection is modified, then just expose it as a public accessor over a private variable per #1. If you don't want other programs to modify the collection then #2 and/or #3 is better. Implicit in the question is why should one method be used over another and what are the ramifications on security, memory, serialization, etc.? A: How you expose a collection depends entirely on how users are intended to interact with it. 1) If users will be adding and removing items from an object's collection, then a simple get-only collection property is best (option #1 from the original question): private readonly Collection<T> myCollection_ = new ...; public Collection<T> MyCollection { get { return this.myCollection_; } } This strategy is used for the Items collections on the WindowsForms and WPF ItemsControl controls, where users add and remove items they want the control to display. These controls publish the actual collection and use callbacks or event listeners to keep track of items. WPF also exposes some settable collections to allow users to display a collection of items they control, such as the ItemsSource property on ItemsControl (option #3 from the original question). However, this is not a common use case. 2) If users will only be reading data maintained by the object, then you can use a readonly collection, as Quibblesome suggested: private readonly List<T> myPrivateCollection_ = new ...; private ReadOnlyCollection<T> myPrivateCollectionView_; public ReadOnlyCollection<T> MyCollection { get { if( this.myPrivateCollectionView_ == null ) { /* lazily initialize view */ } return this.myPrivateCollectionView_; } } Note that ReadOnlyCollection<T> provides a live view of the underlying collection, so you only need to create the view once. If the internal collection does not implement IList<T>, or if you want to restrict access to more advanced users, you can instead wrap access to the collection through an enumerator: public IEnumerable<T> MyCollection { get { foreach( T item in this.myPrivateCollection_ ) yield return item; } } This approach is simple to implement and also provides access to all the members without exposing the internal collection. 
However, it does require that the collection remain unmodfied, as the BCL collection classes will throw an exception if you try to enumerate a collection after it has been modified. If the underlying collection is likely to change, you can either create a light wrapper that will enumerate the collection safely, or return a copy of the collection. 3) Finally, if you need to expose arrays rather than higher-level collections, then you should return a copy of the array to prevent users from modifying it (option #2 from the orginal question): private T[] myArray_; public T[] GetMyArray( ) { T[] copy = new T[this.myArray_.Length]; this.myArray_.CopyTo( copy, 0 ); return copy; // Note: if you are using LINQ, calling the 'ToArray( )' // extension method will create a copy for you. } You should not expose the underlying array through a property, as you will not be able to tell when users modify it. To allow modifying the array, you can either add a corresponding SetMyArray( T[] array ) method, or use a custom indexer: public T this[int index] { get { return this.myArray_[index]; } set { // TODO: validate new value; raise change event; etc. this.myArray_[index] = value; } } (of course, by implementing a custom indexer, you will be duplicating the work of the BCL classes :) A: I usually go for this, a public getter that returns System.Collections.ObjectModel.ReadOnlyCollection: public ReadOnlyCollection<SomeClass> Collection { get { return new ReadOnlyCollection<SomeClass>(myList); } } And public methods on the object to modify the collection. Clear(); Add(SomeClass class); If the class is supposed to be a repository for other people to mess with then I just expose the private variable as per method #1 as it saves writing your own API, but I tend to shy away from that in production code. A: ReadOnlyCollection still has the disadvantage that the consumer can't be sure that the original collection won't be changed at an inopportune time. Instead you can use Immutable Collections. If you need to do a change then instead changing the original you are being given a modified copy. The way it is implemented it is competitive with the performance of the mutable collections. Or even better if you don't have to copy the original several times to make a number of different (incompatible) changes afterwards to each copy. A: I recommend to use the new IReadOnlyList<T> and IReadOnlyCollection<T> Interfaces to expose a collection (requires .NET 4.5). Example: public class AddressBook { private readonly List<Contact> contacts; public AddressBook() { this.contacts = new List<Contact>(); } public IReadOnlyList<Contact> Contacts { get { return contacts; } } public void AddContact(Contact contact) { contacts.Add(contact); } public void RemoveContact(Contact contact) { contacts.Remove(contact); } } If you need to guarantee that the collection can not be manipulated from outside then consider ReadOnlyCollection<T> or the new Immutable collections. Avoid using the interface IEnumerable<T> to expose a collection. This interface does not define any guarantee that multiple enumerations perform well. If the IEnumerable represents a query then every enumeration execute the query again. Developers that get an instance of IEnumerable do not know if it represents a collection or a query. More about this topic can be read on this Wiki page. 
A: If you're simply looking to expose a collection on your instance, then using a getter/setter to a private member variable seems like the most sensible solution to me (your first proposed option). A: I'm a Java developer, but I think this is the same for C#. I never expose a private collection property, because other parts of the program can change it without the parent noticing; so in the getter method I return an array containing the objects of the collection, and in the setter method I call clearAll() on the collection and then addAll(). A: Why do you suggest that using ReadOnlyCollection<T> is a compromise? If you still need change notifications for changes made to the original wrapped IList, you could also use a ReadOnlyObservableCollection<T> to wrap your collection. Would this be less of a compromise in your scenario?
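A short sketch of the ReadOnlyObservableCollection<T> idea from the last answer; the class and property names are invented for illustration only:

using System.Collections.ObjectModel;

public class Order
{
    private readonly ObservableCollection<string> items_ = new ObservableCollection<string>();
    private readonly ReadOnlyObservableCollection<string> readOnlyItems_;

    public Order()
    {
        readOnlyItems_ = new ReadOnlyObservableCollection<string>(items_);
    }

    // Callers can read and subscribe to CollectionChanged, but cannot modify.
    public ReadOnlyObservableCollection<string> Items
    {
        get { return readOnlyItems_; }
    }

    // Changes go through methods the owning class controls.
    public void AddItem(string item)    { items_.Add(item); }
    public void RemoveItem(string item) { items_.Remove(item); }
}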
{ "language": "en", "url": "https://stackoverflow.com/questions/35007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48" }
Q: Register Multiple Assemblies to the GAC in Vista I've got a whole directory of DLLs I need to register to the GAC. I'd like to avoid registering each file explicitly - but it appears that gacutil has no "register directory" option. Anyone have a fast/simple solution? A: GACUTIL doesn't register DLLs -- not in the "COM" sense. Unlike in COM, GACUTIL copies the file to an opaque directory under %SYSTEMROOT%\assembly and that's where they run from. It wouldn't make sense to ask GACUTIL to "register a folder" (not that you can do that with RegSvr32 either). You can use a batch FOR command such as: FOR %a IN (C:\MyFolderWithAssemblies\*.dll) DO GACUTIL /i %a If you place that in a batch file, you must replace %a with %%a A: Use the gacutil /il YourPathTo_A_TextFile.txt switch if you have DLLs in multiple different folders. Otherwise go with the for ... in loop mentioned by Euro. The text file should contain a list of assembly paths (one path per line) which should be installed. The paths can be in different folders all over the system. Run the command line as an administrator! Here is an example of the YourPathTo_A_TextFile.txt: C:\...Microsoft.Practices.EnterpriseLibrary.Common.dll C:\...Microsoft.Practices.EnterpriseLibrary.Configuration.Design.HostAdapter.dll C:\...Microsoft.Practices.EnterpriseLibrary.Configuration.Design.HostAdapterV5.dll C:\...Microsoft.Practices.EnterpriseLibrary.Configuration.DesignTime.dll C:\...Microsoft.Practices.EnterpriseLibrary.Configuration.EnvironmentalOverrides.dll C:\...Microsoft.Practices.EnterpriseLibrary.Data.dll A: Here is the script you would put into a batch file to register all of the files in the current directory with Gacutil. You don't need to put it in a batch file (you can just copy/paste it to a Command Prompt) to do it. FOR %1 IN (*) DO Gacutil /i %1 Edit: Bah, sorry I was late. I didn't see the previous post when I posted mine. A: Daniel's answer helped me, but I would also add that you need to run the Command Prompt as administrator, and I also used *.dll to avoid trying to add other files: FOR %1 IN (*.dll) DO Gacutil /i %1
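Putting the first answer's loop into a complete batch file (note the doubled %%a inside a .bat, as mentioned above). The folder path is only an example, and the file should be run from an elevated Visual Studio/SDK command prompt on Vista so that gacutil.exe is on the PATH:

@echo off
rem Install every assembly in the folder into the GAC.
cd /d "C:\MyFolderWithAssemblies"
for %%a in (*.dll) do gacutil /nologo /i "%%a"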
{ "language": "en", "url": "https://stackoverflow.com/questions/35011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: File database suggestion with support for multiple concurrent users I need a database that could be stored network drive and would allow multiple users (up to 20) to use it without any server software. I'm considering MS Access or Berkeley DB. Can you share your experience with file databases? Which one did you use, did you have any problems with it? A: I really don't think that file-based databases can scale past half a dozen users. The last time I had an Access database (admittedly this was quite a while ago) I had to work really hard to get it to work for 8-9 people. It is really much easier to install Ubuntu on an old junk computer with PostgreSQL or MySQL. That's what I had to do even when I kept my Access front-end. A: I would suggest SQLite because the entire database is stored in a single file, and it quite safely handles multiple users accessing it at the same time. There are several different libraries that you can use for your client application and there is no server software needed. One of the strengths is that it mimics SQL servers so closely that if you need to convert from using a database file to a full-fledged SQL Server, most of your queries in your client won't need to change. You'll just need to migrate the data over to the new server database (which I wouldn't be surprised if there are programs to convert SQLite databases to MySQL databases, for example.) A: Beware of any file based database, they are all likely to have the same problems. Your situation really calls for a Client/Server solution. From SQLite FAQ A good rule of thumb is that you should avoid using SQLite in situations where the same database will be accessed simultaneously from many computers over a network filesystem. http://www.sqlite.org/whentouse.html A: Access can be a bitch. Ive been in the position where i had to go around and tell 20-50 people to close access so I could go to "design mode" to change the design of the forms and maybe a column. No fun at all. (Old access, and it might just be a bad setup) A: Ayende was recently trying to make a similar decision, and tried a bunch of so-called embedded databases. Hopefully his observations can help you. A: I have been using Access for some time and in a variety of situations, including on-line. I have found that Access works well if it is properly set up according to the guidelines. One advantage of Access is that it includes everything in one package: Forms, Query Building, Reports, Database Management, and VBA. In addition, it works well with all other Office applications. The Access 2007 runtime can be obtained free from here, which makes distribution less expensive. Access is certainly unsuitable for large operations, but it should be quite suitable for twenty users. EDIT: Microsoft puts the number of concurrent users at 255. A: The original question makes no sense to me, in that the options don't belong together. BerkeleyDB is a database engine only, while Access is an application development tool that ships with a default file-based (i.e., non-server) database engine (Jet). By virtue of putting Access with Berkeley, it seems obvious that what is needed is only a database engine, and no application at all, but how end users use Berkeley DB without a front end, I don't know (I've only used it from the command line). Those who cannot run a Jet MDB with 20 simultaneous users are simply not competent to be giving advice on using Jet as a data store. It is completely doable as long as best practices are followed. 
I would recommend in addition to Microsoft's Best Practices web page, Tony Toews's Best Practices, and Tony's Corruption FAQ (i.e., things you want to avoid doing in order to have a stable application). I strongly doubt that the original questioner is building no front end application, but since he doesn't indicate what kind of front end is involved, it's hard to recommend a back end that will go with it. Access has the advantage of giving you both parts of the equation, and when used properly, is perfectly reliable for multiple users. A: Can Access be set up to support 10-20 users? Yes. It, like all file-based databases, uses the file system for locking and concurrency control, however. And Access data files are more susceptible to database corruption than are database servers. And, while you can set it up for this, you MUST, as David Fenton mentions above, follow best practices if you want to end up with a reliable system. Personally, I find that, given the hoops that you need to jump through to ensure that an Access solution is reasonably trouble-free, it is much less trouble to implement an instance of MSDE/SQL Server Express, or PostgreSQL. A: Berkeley DB supports a high degree of concurrency (far more than 20), but it does so primarily by utilizing shared memory and mutexes (possibly even replication) - facilities that do not work well when BDB is deployed as a file stored on a network drive. In order to take advantage of BDB's concurrency capabilities you will have to build an application around it.
{ "language": "en", "url": "https://stackoverflow.com/questions/35017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: SQL Server, convert a named instance to default instance? I need to convert a named instance of SQL Server 2005 to a default instance. Is there a way to do this without a reinstall? The problem is that 2 out of 6 of the developers installed with a named instance, so it's becoming a pain changing connection strings for the other 4 of us. I am looking for the path of least resistance to getting these 2 back onto our team's standard setup. Each has expressed that this is going to be too much trouble and that it will take away from their development time. I assumed that it would take some time to resolve; in the best interest of all involved, I tried combing through the installed configuration apps and didn't see anything, so I figured someone with more knowledge of the inner workings would be here. A: I also wanted to convert a named instance to default - my reason was to access it with just the machine name from various applications. If you want to access a named instance from any connection string without using the instance name, using only the server name and/or IP address, then you can do the following: * *Open SQL Server Configuration Manager *Click SQL Server Network Configuration *Click Protocols for the INSTANCENAME you want to make available (i.e. SQLExpress) *Right-click TCP/IP and click Enabled *Right-click TCP/IP and go to Properties * *Go to the IP Addresses tab *Scroll down to the IPAll section *Clear the field TCP Dynamic Ports (i.e. leave it empty/blank) *Set TCP Port to 1433 *Click Ok *Go to SQL Server Services *Right-click your SQL Server (INSTANCENAME) and click Restart This will make the named instance listen on the default port. Note: You can have only one instance configured like this - no two instances can have the same port in the IPAll section unless the instance is a failover cluster. A: As far as I know, no. One reason is the folder structure on the hard drive; they will have a name like MSSQL10.[instancename] A: The only way to change the instance name is to re-install - uninstall and install as default instance. A: A lot of times I'll use a client alias to point an application at a different SQL Server than the one its connection string is for, which is especially handy when working on DTS or an application with a hard-coded connection string. Have everybody use a commonly named alias, use the alias in the connection string, and point the aliases on each dev box to the different instances. That way you won't have to worry about whether the server is the default instance or not. A: This is why a lot of companies store their applications' connection strings at the machine level instead of the application level. Just take the connection string out of the source code entirely. Then have everyone put their connection string in their machine.config. This has the added benefit of avoiding unnecessary app-specific environment logic, i.e. when you copy your application to the staging server, the staging server already "knows" what database to use. A: You shouldn't ever really need to do this. Most software that claims to require the default instance (like Great Plains or Dynamics) doesn't actually. If you repost with your situation (installed X, then Y, but need to accomplish Z) I bet you'll get some good workarounds. A: I think you can migrate your data from SQL Server without having a default instance installed. You can just specify the port number of your SQL Server instance in Oracle SQL Developer and connect using just the server name, not the server name plus the instance.
Like this: connect to "MYSERVER, 1433"
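For reference, these are the kinds of connection strings involved; MYSERVER, INSTANCENAME and MyDb are placeholders:

Named instance (relies on the SQL Browser service / dynamic ports):
  Server=MYSERVER\INSTANCENAME;Database=MyDb;Integrated Security=SSPI;
Explicit port, works for any instance pinned to a fixed port:
  Server=MYSERVER,1433;Database=MyDb;Integrated Security=SSPI;
Plain machine name, works once the named instance listens on 1433 as described in the steps above:
  Server=MYSERVER;Database=MyDb;Integrated Security=SSPI;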
{ "language": "en", "url": "https://stackoverflow.com/questions/35026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "149" }
Q: How to convert complex filename into HTML link? I have a directory with PDF files that I need to create an index for. It is a PHP page with a list of links: <A HREF="path to file">filename</A> The filenames can be complicated: LVD 2-1133 - Γ„ndring av dumpningslina (1984-11-20).pdf What is the correct way to link to this file on a Linux/Apache server? Is there a PHP function to do this conversion? A: You can use rawurlencode() to convert a string according to the RFC 1738 spec. This function replaces all non-alphanumeric characters by their associated code. The difference with urlencode() is that spaces are encoded as plus signs. You'll probably want to use the last one. This technique is called Percent or URL encoding. See Wikipedia for more details. A: The urlencode() function will convert spaces into plus signs (+), so it won't work. The rawurlencode does the trick. Thanks. Be sure to convert each part of the path separately, otherwise path/file will be converted into path%2Ffile. (which was what I missed) A: urlencode() should probably do what you want. Edit: urlencode() works fine on swedish characters. <?php echo urlencode("Γ₯Àâ"); ?> converts to: %E5%E4%F6 A: URL encoding. I think it's urlencode() in PHP. A: rawurlencode will encode "exotic" characters in a URL.
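Putting the accepted advice together, a rough sketch of the index page loop; the directory name is an example, and htmlspecialchars() is added here because the link also lives inside HTML, which the answers above do not cover:

<?php
$dir = 'pdf';                                    // folder that holds the PDF files
foreach (glob($dir . '/*.pdf') as $path) {
    $name = basename($path);                     // e.g. LVD 2-1133 - Ändring av dumpningslina (1984-11-20).pdf
    $href = $dir . '/' . rawurlencode($name);    // encode only the filename part, not the slashes
    echo '<a href="' . htmlspecialchars($href) . '">'
       . htmlspecialchars($name) . "</a><br />\n";
}
?>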
{ "language": "en", "url": "https://stackoverflow.com/questions/35037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Migrating database changes from development to live Perhaps the biggest risk in pushing new functionality to live lies with the database modifications required by the new code. In Rails, I believe they have 'migrations', in which you can programmatically make changes to your development host, and then make the same changes live along with the code that uses the revised schema. And roll both backs if needs be, in a synchronized fashion. Has anyone come across a similar toolset for PHP/MySQL? Would love to hear about it, or any programmatic or process solutions to help make this less risky... A: I don't trust programmatic migrations. If it's a simple change, such as adding a NULLable column, I'll just add it directly to the live server. If it's more complex or requires data changes, I'll write a pair of SQL migration files and test them against a replica database. When using migrations, always test the rollback migration. It is your emergency "oh shit" button. A: I've never come across a tool that would do the job. Instead I've used individual files, numbered so that I know which order to run them: essentially, a manual version of Rails migrations, but without the rollback. Here's the sort of thing I'm talking about: 000-clean.sql # wipe out everything in the DB 001-schema.sql # create the initial DB objects 002-fk.sql # apply referential integrity (simple if kept separate) 003-reference-pop.sql # populate reference data 004-release-pop.sql # populate release data 005-add-new-table.sql # modification 006-rename-table.sql # another modification... I've never actually run into any problems doing this, but it's not very elegant. It's up to you to track which scripts need to run for a given update (a smarter numbering scheme could help). It also works fine with source control. Dealing with surrogate key values (from autonumber columns) can be a pain, since the production database will likely have different values than the development DB. So, I try never to reference a literal surrogate key value in any of my modification scripts if at all possible. A: I've used this tool before and it worked perfectly. http://www.mysqldiff.org/ It takes as an input either a DB connection or a SQL file, and compares it to the same (either another DB connection or another SQL file). It can spit out the SQL to make the changes or make the changes for you. A: @[yukondude] I'm using Perl myself, and I've gone down the route of Rails-style migrations semi-manually in the same way. What I did was have a single table "version" with a single column "version", containing a single row of one number which is the current schema version. Then it was (quite) trivial to write a script to read that number, look in a certain directory and apply all the numbered migrations to get from there to here (and then updating the number). In my dev/stage environment I frequently (via another script) pull the production data into the staging database, and run the migration script. If you do this before you go live you'll be pretty sure the migrations will work. Obviously you test extensively in your staging environment. I tag up the new code and the required migrations under one version control tag. To deploy to stage or live you just update everything to this tag and run the migration script fairly quick. (You might want to have arranged a short downtime if it's really wacky schema changes.) A: The solution I use (originally developed by a friend of mine) is another addendum to yukondude. 
* *Create a schema directory under version control and then for each db change you make keep a .sql file with the SQL you want executed along with the sql query to update the db_schema table. *Create a database table called "db_schema" with an integer column named version. *In the schema directory create two shell scripts, "current" and "update". Executing current tells you which version of the db schema the database you're connected to is currently at. Running update executes each .sql file numbered greater than the version in the db_schema table sequentially until you're up to the greatest numbered file in your schema dir. Files in the schema dir: 0-init.sql 1-add-name-to-user.sql 2-add-bio.sql What a typical file looks like, note the db_schema update at the end of every .sql file: BEGIN; -- comment about what this is doing ALTER TABLE user ADD COLUMN bio text NULL; UPDATE db_schema SET version = 2; COMMIT; The "current" script (for psql): #!/bin/sh VERSION=`psql -q -t <<EOF \set ON_ERROR_STOP on SELECT version FROM db_schema; EOF ` [ $? -eq 0 ] && { echo $VERSION exit 0 } echo 0 the update script (also psql): #!/bin/sh CURRENT=`./current` LATEST=`ls -vr *.sql |egrep -o "^[0-9]+" |head -n1` echo current is $CURRENT echo latest is $LATEST [[ $CURRENT -gt $LATEST ]] && { echo That seems to be a problem. exit 1 } [[ $CURRENT -eq $LATEST ]] && exit 0 #SCRIPT_SET="-q" SCRIPT_SET="" for (( I = $CURRENT + 1 ; I <= $LATEST ; I++ )); do SCRIPT=`ls $I-*.sql |head -n1` echo "Adding '$SCRIPT'" SCRIPT_SET="$SCRIPT_SET $SCRIPT" done echo "Applying updates..." echo $SCRIPT_SET for S in $SCRIPT_SET ; do psql -v ON_ERROR_STOP=TRUE -f $S || { echo FAIL exit 1 } done echo OK My 0-init.sql has the full initial schema structure along with the initial "UPDATE db_schema SET version = 0;". Shouldn't be too hard to modify these scripts for MySQL. In my case I also have export PGDATABASE="dbname" export PGUSER="mike" in my .bashrc. And it prompts for password with each file that's being executed. A: Symfony has a plugin called sfMigrationsLight that handles basic migrations. CakePHP also has migrations. For whatever reason, migration support has never really been a high priority for most of the PHP frameworks and ORMs out there. A: I use SQLyog to copy the structure, and I ALWAYS, let me repeat ALWAYS make a backup first. A: Pretty much what Lot105 described. Each migration needs an apply and rollback script, and you have some kind of control script which checks which migration(s) need to be applied and applies them in the appropriate order. Each developer then keeps their db in sync using this scheme, and when applied to production the relevant changes are applied. The rollback scripts can be kept to back out a change if that becomes necessary. Some changes can't be done with a simple ALTER script such as a tool like sqldiff would produce; some changes don't require a schema change but a programmatic change to existing data. So you can't really generalise, which is why you need a human-edited script. A: I've always preferred to keep my development site pointing to the same DB as the live site. This may sound risky at first but in reality it solves many problems. If you have two sites on the same server pointing to the same DB, you get a real time and accurate view of what your users will see when it goes live. You will only ever have 1 database and so long as you make it a policy to never delete a column from a table, you know your new code will match up with the database you are using. 
There is also significantly less havoc when migrating. You only need to move over the PHP scripts and they are already tested using the same DB. I also tend to create a symlink to any folder that is a target for user uploads. This means there is no confusion about which user files have been updated. Another side effect is the option of porting over a small group of 'beta-testers' to use the site in everyday use. This can lead to a lot of feedback that you can implement before the public launch. This may not work in all cases but I've started moving all my updates to this model. It has made for much smoother development and launches.
A: In the past I have used LiquiBase, a Java-based tool where you configure your migrations as XML files. You can generate the necessary SQL with it. Today I'd use the Doctrine 2 library, which has migration facilities similar to Ruby's. The Symfony 2 framework also has a nice way to deal with schema changes - its command line tool can analyze the existing schema and generate SQL to match the database to the changed schema definitions.
{ "language": "en", "url": "https://stackoverflow.com/questions/35047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: custom action dll in managed code
How can I call a custom action dll written in managed code (.net) from an installer without using an unmanaged intermediary?
A: The answer to your question depends on how you are authoring your installer. For Visual Studio setup projects, create an installer class in one of your deployed assemblies. This is covered in the MSDN documentation, eg http://msdn.microsoft.com/en-us/library/d9k65z2d(VS.80).aspx For Wix projects, you can use DTF to build managed custom actions which have complete access to the contents of the MSI. Wix is available at http://wix.sourceforge.net/.
A: There is support for exactly this in .NET/Windows Installer. Create an assembly using VS.NET. Add an installer class to the project - select 'Add - New Item' and choose the installer class. This class derives from System.Configuration.Install.Installer. It has a number of virtual methods such as Install(), which will be called by the Windows Installer engine at install time. This assembly can then be added to your Windows Installer project as a custom action. The method used to declare the custom action as a .NET installer class depends on the tool you are using to create the installation. Sorry this is a little brief.
A: I wish I knew more about your specific situation, but as a general piece of advice, you might look into C++/CLI for issues involving unmanaged/managed interoperability: C++: The Most Powerful Language for .NET Framework Programming. I know from experience that I can "lift" unmanaged code into a C++/CLI project where I can use it from any C#/.NET/managed code, but it sounds like you want to do the opposite: lift managed code into a C++/CLI project, and then link that with some unmanaged code and expose that as a traditional unmanaged DLL (or installer binary). I'm not sure if this is possible in C++/CLI.
A: Jared - Can you provide some information on how you accomplished this? I am stuck in a problem with this issue right now. I have a project that works in standard MFC VC but I am trying to take it into the C++/CLI world. The main issue I am having is that I am failing to load the interface on the CoCreateInstance() call.
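Going back to the installer-class answer above, here is a minimal sketch of what such a class looks like. The class name and the comment bodies are made up for illustration; the Installer base class, the RunInstaller attribute and the override signatures are the standard System.Configuration.Install ones. In a Visual Studio setup project you would then add the built assembly as a custom action (if I remember correctly, its InstallerClass property has to stay set to true so the engine knows to invoke the installer class).
using System.Collections;
using System.ComponentModel;
using System.Configuration.Install;

// Minimal installer-class custom action. The installer engine discovers it
// through the RunInstaller(true) attribute.
[RunInstaller(true)]
public class MyCustomAction : Installer
{
    // Called by the installer engine at install time.
    public override void Install(IDictionary stateSaver)
    {
        base.Install(stateSaver);
        // Custom install logic goes here (e.g. writing a config file).
    }

    // Called if the installation is rolled back.
    public override void Rollback(IDictionary savedState)
    {
        base.Rollback(savedState);
    }

    // Called when the product is uninstalled.
    public override void Uninstall(IDictionary savedState)
    {
        base.Uninstall(savedState);
    }
}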
{ "language": "en", "url": "https://stackoverflow.com/questions/35049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Comparison of Javascript libraries After the suggestion to use a library for my ajax needs I am going to use one, the problem is that there are so many and I've no idea how to even begin telling them apart. Thus, can anybody A) Give a rundown of the differences or B) Point me (and others like me) somewhere that has such a list. Failing that plan C is to go with whichever gets mentioned the most here. A: jQuery, easy to learn, easy to use, small footprint, active plugin developer community. Can't go wrong with jQuery. A: For what its worth jQuery's website redesign launched this morning (Friday August 29, 2008). Good fun fact. And of course +1 to its mention. A: Wikipedia? A: Stackoverflow uses jquery I think, and I hear that jquery is all the rage A: I have to put in another vote for jQuery. It is dead-simple to use and makes your javascript much cleaner. As an example, if you want to add an onclick event to all the divs inside an element with id "clickdivs", you just do this: function clickedme(event) { alert('Someone clicked me!'); } $('#clickdivs div').click(clickedme); Your HTML would look like this: <div id="clickdivs"> <div>Click Here</div> <div>And Here</div> <p>Not here</p> <div>Click Here Too</div> </div> Viola! A: Related thread here, with some good contributions: What JavaScript library would you choose for a new project and why? A: We are starting to use jQuery where I work. I'm not big on JavaScript, but everyone else likes it a lot. I don't know if that helps at all... A: To answer B: Comparison of JavaScript frameworks EDIT: Although everyone and their mom is apparently riding the jQuery bandwagon (I use MochiKit), there are many libraries which provide the same functionality - the problem set which most libraries solve (async client-server communication, DOM manipulation, etc.) is the same, and there are few that don't have what you will need to get the job done. The important thing to determine for yourself is whether or not a library will fit your particular style and sensibilities. Wide-spread ignorance about how JavaScript, the language, actually works, coupled with the negative press resulting thereby, coupled with the now-immense popularity of jQuery leads most people down that road. Thankfully, it isn't a bad road to be on as there are a lot of travellers to keep you company when the abstractions leak and you need help. You probably can't go wrong choosing jQuery. A: I've been using Prototype + Scriptaculous. They have good API documentation and work great for me! The biggest benefits are: * *Cleans up messy javascript code *Cross browser compatibility *Simplifies AJAX handling *Smooth UI effects A: I suggest restricting yourself to a library which can be pulled from a free CDN such as Google's AJAX CDN or Microsoft's AJAX CDN. Availability on a CDN indicates a certain minimum level of popularity and using one will allow you to load your web pages faster. jQuery is my preferred library and is available on both the Google and MS CDNs. A: jQuery is the best framework I have seen out there... it is embracing modern methods of coding such as simple, clean and fast code. Go for jQuery...
{ "language": "en", "url": "https://stackoverflow.com/questions/35050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: programmatically merge .reg file into win32 registry
What's the best way to programmatically merge a .reg file into the registry? This is for unit testing; the .reg file is a test artifact which will be added then removed at the start and end of testing. Or, if there's a better way to unit test against the registry...
A: It is possible to remove registry keys using a .reg file, although I'm not sure how well it's documented. Here's how:
REGEDIT4

[-HKEY_CURRENT_USER\Software\<otherpath>]
The - in front of the key name tells Regedit that you want to remove the key. To run this silently, type:
regedit /s "myfile.reg"
A: If you're shelling out, I'd use the reg command (details below). If you can tell us what language you're working with, we could provide language specific code.
C:\>reg /?

REG Operation [Parameter List]

  Operation  [ QUERY | ADD | DELETE | COPY |
               SAVE | LOAD | UNLOAD | RESTORE |
               COMPARE | EXPORT | IMPORT | FLAGS ]

Return Code: (Except for REG COMPARE)
  0 - Successful
  1 - Failed

For help on a specific operation type:
  REG ADD /?
  REG DELETE /?
[snipped]
A: I looked into it by checking out my file associations. It seems that a .reg file is just passed as the first parameter to the regedit.exe executable on Windows. So you can just say regedit.exe "mytest.reg". What I'm not sure of is how to get rid of the dialog box that pops up asking for your confirmation.
A: Use the Win32 API function ShellExecute() or ShellExecuteEx(). If the command is 'open' it should merge the .reg file. I haven't tested it, but it should work.
A: One of the most frustrating things about writing unit tests is dealing with dependencies. One of the greatest things about Test-Driven Development is that it produces code that is decoupled from its dependencies. Cool, huh? When I find myself asking questions like this one, I look for ways to decouple the code I'm writing from the dependency. Separate out the reading of the registry from the complexity that you'd like to test.
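If the tests themselves are written in .NET, the silent regedit /s approach above can be wrapped in a pair of helper methods along these lines. This is only a sketch: the file names are placeholders, and it assumes you have prepared a second .reg file whose key names are prefixed with '-' (as shown in the first answer) for the teardown step.
using System.Diagnostics;

public static class RegistryTestSetup
{
    // Import the test keys silently, e.g. in a test fixture's setup.
    public static void ImportTestKeys(string regFilePath)
    {
        RunAndWait("regedit.exe", "/s \"" + regFilePath + "\"");
    }

    // Remove them again in teardown, using a .reg file with '-' prefixed keys.
    public static void RemoveTestKeys(string removalRegFilePath)
    {
        RunAndWait("regedit.exe", "/s \"" + removalRegFilePath + "\"");
    }

    private static void RunAndWait(string fileName, string arguments)
    {
        using (Process process = Process.Start(fileName, arguments))
        {
            process.WaitForExit();
        }
    }
}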
{ "language": "en", "url": "https://stackoverflow.com/questions/35070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Strategy for identifying unused tables in SQL Server 2000? I'm working with a SQL Server 2000 database that likely has a few dozen tables that are no longer accessed. I'd like to clear out the data that we no longer need to be maintaining, but I'm not sure how to identify which tables to remove. The database is shared by several different applications, so I can't be 100% confident that reviewing these will give me a complete list of the objects that are used. What I'd like to do, if it's possible, is to get a list of tables that haven't been accessed at all for some period of time. No reads, no writes. How should I approach this? A: MSSQL2000 won't give you that kind of information. But a way you can identify what tables ARE used (and then deduce which ones are not) is to use the SQL Profiler, to save all the queries that go to a certain database. Configure the profiler to record the results to a new table, and then check the queries saved there to find all the tables (and views, sps, etc) that are used by your applications. Another way I think you might check if there's any "writes" is to add a new timestamp column to every table, and a trigger that updates that column every time there's an update or an insert. But keep in mind that if your apps do queries of the type select * from ... then they will receive a new column and that might cause you some problems. A: Another suggestion for tracking tables that have been written to is to use Red Gate SQL Log Rescue (free). This tool dives into the log of the database and will show you all inserts, updates and deletes. The list is fully searchable, too. It doesn't meet your criteria for researching reads into the database, but I think the SQL Profiler technique will get you a fair idea as far as that goes. A: If you have lastupdate columns you can check for the writes, there is really no easy way to check for reads. You could run profiler, save the trace to a table and check in there What I usually do is rename the table by prefixing it with an underscrore, when people start to scream I just rename it back A: If by not used, you mean your application has no more references to the tables in question and you are using dynamic sql, you could do a search for the table names in your app, if they don't exist blow them away. I've also outputted all sprocs, functions, etc. to a text file and done a search for the table names. If not found, or found in procedures that will need to be deleted too, blow them away. A: It looks like using the Profiler is going to work. Once I've let it run for a while, I should have a good list of used tables. Anyone who doesn't use their tables every day can probably wait for them to be restored from backup. Thanks, folks. A: Probably too late to help mogrify, but for anybody doing a search; I would search for all objects using this object in my code, then in SQL Server by running this : select distinct '[' + object_name(id) + ']' from syscomments where text like '%MY_TABLE_NAME%'
{ "language": "en", "url": "https://stackoverflow.com/questions/35076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the role of the buried-buffer-list frame parameter in Emacs
In emacs, I've read the following code snippet in simple.el:
(frame-parameter frame 'buried-buffer-list)
What is the exact meaning of the 'buried-buffer-list parameter? What is it used for?
A: The result of M-x describe-function RET frame-parameter is:
frame-parameter is a built-in function.
(frame-parameter FRAME PARAMETER)
Return FRAME's value for parameter PARAMETER. If FRAME is nil, describe the currently selected frame.
Also, have a look in the Elisp info manual for the node called "Frame/Frame Parameters". There isn't a specific reference to 'buried-buffer-list that I could find. You might be able to get the value of it by evaluating:
(cdr (frame-parameter FRAME 'buffer-list))
since a "buried buffer" is just a buffer that's been pushed to the back of the list of buffers for a particular frame. See the documentation for bury-buffer:
bury-buffer is an interactive compiled Lisp function in `window.el'.
(bury-buffer &optional BUFFER-OR-NAME)
Put BUFFER-OR-NAME at the end of the list of all buffers. There it is the least likely candidate for `other-buffer' to return; thus, the least likely buffer for C-x b to select by default. You can specify a buffer name as BUFFER-OR-NAME, or an actual buffer object. If BUFFER-OR-NAME is nil or omitted, bury the current buffer. Also, if BUFFER-OR-NAME is nil or omitted, remove the current buffer from the selected window if it is displayed there.
A: A quick look at http://www.update.uu.se/~ams/slask/emacs/src/frame.h returns:
List of buffers that were viewed, then buried in this frame. The most recently buried buffer is first.
So in theory you can use cdr to obtain the same list, as Ben Collins said.
{ "language": "en", "url": "https://stackoverflow.com/questions/35102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I store information in my executable in .Net I'd like to bind a configuration file to my executable. I'd like to do this by storing an MD5 hash of the file inside the executable. This should keep anyone but the executable from modifying the file. Essentially if someone modifies this file outside of the program the program should fail to load it again. EDIT: The program processes credit card information so being able to change the configuration in any way could be a potential security risk. This software will be distributed to a large number of clients. Ideally client should have a configuration that is tied directly to the executable. This will hopefully keep a hacker from being able to get a fake configuration into place. The configuration still needs to be editable though so compiling an individual copy for each customer is not an option. It's important that this be dynamic. So that I can tie the hash to the configuration file as the configuration changes. A: A better solution is to store the MD5 in the configuration file. But instead of the MD5 being just of the configuration file, also include some secret "key" value, like a fixed guid, in the MD5. write(MD5(SecretKey + ConfigFileText)); Then you simply remove that MD5 and rehash the file (including your secret key). If the MD5's are the same, then no-one modified it. This prevents someone from modifying it and re-applying the MD5 since they don't know your secret key. Keep in mind this is a fairly weak solution (as is the one you are suggesting) as they could easily track into your program to find the key or where the MD5 is stored. A better solution would be to use a public key system and sign the configuration file. Again that is weak since that would require the private key to be stored on their local machine. Pretty much anything that is contained on their local PC can be bypassed with enough effort. If you REALLY want to store the information in your executable (which I would discourage) then you can just try appending it at the end of the EXE. That is usually safe. Modifying executable programs is virus like behavior and most operating system security will try to stop you too. If your program is in the Program Files directory, and your configuration file is in the Application Data directory, and the user is logged in as a non-administrator (in XP or Vista), then you will be unable to update the EXE. Update: I don't care if you are using Asymmetric encryption, RSA or Quantum cryptography, if you are storing your keys on the user's computer (which you must do unless you route it all through a web service) then the user can find your keys, even if it means inspecting the registers on the CPU at run time! You are only buying yourself a moderate level of security, so stick with something that is simple. To prevent modification the solution I suggested is the best. To prevent reading then encrypt it, and if you are storing your key locally then use AES Rijndael. Update: The FixedGUID / SecretKey could alternatively be generated at install time and stored somewhere "secret" in the registry. Or you could generate it every time you use it from hardware configuration. Then you are getting more complicated. How you want to do this to allow for moderate levels of hardware changes would be to take 6 different signatures, and hash your configuration file 6 times - once with each. Combine each one with a 2nd secret value, like the GUID mentioned above (either global or generated at install). 
Then when you check, you verify each hash separately. As long as they have 3 out of 6 (or whatever your tolerance is) then you accept it. Next time you write it you hash it with the new hardware configuration. This allows them to slowly swap out hardware over time and get a whole new system... Maybe that is a weakness. It all comes down to your tolerance. There are variations based on tighter tolerances.
UPDATE: For a Credit Card system you might want to consider some real security. You should retain the services of a security and cryptography consultant. More information needs to be exchanged. They need to analyze your specific needs and risks. Also, if you want security with .NET you need to first start with a really good .NET obfuscator (just Google it). A .NET assembly is way too easy to disassemble and get at the source code and read all your secrets. Not to sound like a broken record, but anything that depends on the security of your user's system is fundamentally flawed from the beginning.
A: Out of pure curiosity, what's your reasoning for never wanting to load the file if it's been changed? Why not just keep all of the configuration information compiled in the executable? Why bother with an external file at all?
Edit: I just read your edit about this being a credit card info program. That poses a very interesting challenge. I would think, for that level of security, some sort of pretty major encryption would be necessary, but I don't know anything about handling that sort of thing in such a way that the cryptographic secrets can't just be extracted from the executable. Is authenticating against some sort of online source a possibility?
A: I'd suggest you use asymmetric key encryption to encrypt your configuration file, wherever it is stored, inside the executable or not. If I remember correctly, RSA is one of the variants. For an explanation of it, see Public-key cryptography on Wikipedia. Store the "reading" key in your executable and keep the "writing" key to yourself, so no one but you can modify the configuration. This has the advantages of:
* No-one can modify the configuration unless they have the "writing" key, because any modification will corrupt it entirely; even if they know the "reading" key it would take ages to compute the other key.
* Modification guarantee.
* It's not hard - there are plenty of libraries available these days. There are also a lot of key-generation programs that can generate really, really long keys. Do some research on how to properly implement them though.
A: just make a const string that holds the md5 hash and compile it into your app ... your app can then just refer to this const string when validating the configuration file
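As a concrete illustration of the "MD5(SecretKey + ConfigFileText)" idea from the first answer, here is a minimal C# sketch. The secret value is a placeholder (it would be generated at install time or hidden elsewhere), and, as the answers stress, this only detects casual tampering; it is not real protection for card-handling software, since the key ultimately lives on the client machine.
using System;
using System.Security.Cryptography;
using System.Text;

public static class ConfigSignature
{
    // Placeholder secret; a real deployment would generate or hide this.
    private const string SecretKey = "00000000-0000-0000-0000-000000000000";

    // Hash the secret concatenated with the configuration text.
    public static string Compute(string configFileText)
    {
        using (MD5 md5 = MD5.Create())
        {
            byte[] data = Encoding.UTF8.GetBytes(SecretKey + configFileText);
            return Convert.ToBase64String(md5.ComputeHash(data));
        }
    }

    // Compare the stored hash against a freshly computed one before loading.
    public static bool IsUnmodified(string configFileText, string storedHash)
    {
        return Compute(configFileText) == storedHash;
    }
}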
{ "language": "en", "url": "https://stackoverflow.com/questions/35103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Is there a built in way in .Net AJAX to manually serialize an object to a JSON string? I've found ScriptingJsonSerializationSection but I'm not sure how to use it. I could write a function to convert the object to a JSON string manually, but since .Net can do it on the fly with the <System.Web.Services.WebMethod()> and <System.Web.Script.Services.ScriptMethod()> attributes so there must be a built-in way that I'm missing. PS: using Asp.Net 2.0 and VB.Net - I put this in the tags but I think people missed it. A: I think what you're looking for is this class: System.ServiceModel.Web.DataContractJsonSerializer Here's an example from Rick Strahl: DataContractJsonSerializer in .NET 3.5 A: Since the JavaScriptSerializer class is technically being deprecated, I believe DataContractJsonSerializer is the preferable way to go if you're using 3.0+. A: Well, I am currently using the following extension methods to serialize and deserialize objects: using System.Web.Script.Serialization; public static string ToJSON(this object objectToSerialize) { JavaScriptSerializer jss = new JavaScriptSerializer(); return jss.Serialize(objectToSerialize); } /// <typeparam name="T">The type we are deserializing the JSON to.</typeparam> public static T FromJSON<T>(this string json) { JavaScriptSerializer jss = new JavaScriptSerializer(); return jss.Deserialize<T>(json); } I use this quite a bit - be forewarned, this implementation is a bit naive (i.e. there are some potential problems with it, depending on what you are serializing and how you use it on the client, particularly with DateTimes). A: In the System.Web.Extensions assembly, version 3.5.0.0, there's a JavaScriptSerializer class that should handle what you want. A: This should do the trick Dim jsonSerialiser As New System.Web.Script.Serialization.JavaScriptSerializer Dim jsonString as String = jsonSerialiser.Serialize(yourObject) A: Try System.Web.Script.Serialization.JavaScriptSerializer or Check out JSON.org there is a whole list of libraries written to do exactly what you want.
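For the DataContractJsonSerializer route mentioned in a couple of the answers, a small sketch of what serializing and deserializing looks like is below (shown in C# rather than VB for brevity; the Person type and helper class are made up for the example, while the serializer, WriteObject and ReadObject are the standard .NET 3.5 API).
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Json;
using System.Text;

[DataContract]
public class Person
{
    [DataMember] public string Name { get; set; }
    [DataMember] public int Age { get; set; }
}

public static class JsonHelper
{
    // Serialize a Person to a JSON string.
    public static string ToJson(Person value)
    {
        DataContractJsonSerializer serializer =
            new DataContractJsonSerializer(typeof(Person));
        using (MemoryStream stream = new MemoryStream())
        {
            serializer.WriteObject(stream, value);
            return Encoding.UTF8.GetString(stream.ToArray());
        }
    }

    // Deserialize a JSON string back into a Person.
    public static Person FromJson(string json)
    {
        DataContractJsonSerializer serializer =
            new DataContractJsonSerializer(typeof(Person));
        using (MemoryStream stream = new MemoryStream(Encoding.UTF8.GetBytes(json)))
        {
            return (Person)serializer.ReadObject(stream);
        }
    }
}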
{ "language": "en", "url": "https://stackoverflow.com/questions/35106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Image processing in Silverlight 2
Is it possible to do image processing in Silverlight 2.0? What I want to do is take an image, crop it, and then send the new cropped image up to the server. I know I can fake it by clipping the image, but that only affects the rendering of the image. I want to create a new image. After further research I have answered my own question. Answer: No. Since all the APIs would be in System.Windows.Media.Imaging, and that namespace does not have the appropriate classes in Silverlight, I'm going to use fjcore. http://code.google.com/p/fjcore/ Thanks Jonas
A: Well, you can actually do local image processing in Silverlight 2... but there are no built-in classes to help you. You can load any image into a byte array and start manipulating it, or implement your own image encoder. Joe Stegman has lots of great information about "editable images" in Silverlight over at http://blogs.msdn.com/jstegman/. He does things like applying filters to images, generating Mandelbrots and more. This blog discusses a JPEG Silverlight encoder (FJCore) you can use to resize and recompress photos client side: http://fluxcapacity.net/2008/07/14/fjcore-to-the-rescue/ Another tool is "Fluxify" which lets you resize and upload photos using Silverlight 2. It can be found over at http://fluxtools.net/ So yes, client side image processing can definitely be done in Silverlight 2. Happy hacking!
A: I know this doesn't directly answer your question, but what if you do all of the clipping on the client side to crop the image, then send the server the original image and the coordinates for clipping? Then on the server side, which is probably more suited for image manipulation like this (e.g. in PHP it's very easy), you do the actual cropping of the image and store the cropped version.
A: There is first-class support for bitmap surfaces in Silverlight 3: http://blogs.msdn.com/kaevans/archive/2009/03/20/some-silverlight-3-goodness-using-writeablebitmap.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/35120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Prevent SWT ScrolledComposite from eating part of its children
What did I do wrong? Here is an excerpt from my code:
public void createPartControl(Composite parent) {
    parent.setLayout(new FillLayout());
    ScrolledComposite scrollBox = new ScrolledComposite(parent, SWT.V_SCROLL);
    scrollBox.setExpandHorizontal(true);
    mParent = new Composite(scrollBox, SWT.NONE);
    scrollBox.setContent(mParent);
    FormLayout layout = new FormLayout();
    mParent.setLayout(layout);
    // Adds a bunch of controls here
    mParent.layout();
    mParent.setSize(mParent.computeSize(SWT.DEFAULT, SWT.DEFAULT, true));
}
...but it clips the last button:
bigbrother82: That didn't work.
SCdF: I tried your suggestion, and now the scrollbars are gone. I need to work some more on that.
A: If I am not mistaken you need to swap the mParent.layout(); and mParent.setSize(mParent.computeSize(SWT.DEFAULT, SWT.DEFAULT, true)); so that you have:
public void createPartControl(Composite parent) {
    parent.setLayout(new FillLayout());
    ScrolledComposite scrollBox = new ScrolledComposite(parent, SWT.V_SCROLL);
    scrollBox.setExpandHorizontal(true);
    mParent = new Composite(scrollBox, SWT.NONE);
    scrollBox.setContent(mParent);
    FormLayout layout = new FormLayout();
    mParent.setLayout(layout);
    // Adds a bunch of controls here
    mParent.setSize(mParent.computeSize(SWT.DEFAULT, SWT.DEFAULT, true));
    mParent.layout();
}
A: This is a common hurdle when using ScrolledComposite. When it gets so small that the scroll bar must be shown, the client control has to shrink horizontally to make room for the scroll bar. This has the side effect of making some labels wrap lines, which moves the following controls farther down, which increases the minimum height needed by the content composite. You need to listen for width changes on the content composite (mParent), compute the minimum height again given the new content width, and call setMinHeight() on the scrolled composite with the new height.
public void createPartControl(Composite parent) {
    parent.setLayout(new FillLayout());
    final ScrolledComposite scrollBox = new ScrolledComposite(parent, SWT.V_SCROLL);
    scrollBox.setExpandHorizontal(true);
    scrollBox.setExpandVertical(true);
    // Using 0 here ensures the horizontal scroll bar will never appear. If
    // you want the horizontal bar to appear at some threshold (say 100
    // pixels) then send that value instead.
    scrollBox.setMinWidth(0);

    mParent = new Composite(scrollBox, SWT.NONE);
    FormLayout layout = new FormLayout();
    mParent.setLayout(layout);
    // Adds a bunch of controls here

    mParent.addListener(SWT.Resize, new Listener() {
        int width = -1;
        public void handleEvent(Event e) {
            int newWidth = mParent.getSize().x;
            if (newWidth != width) {
                scrollBox.setMinHeight(mParent.computeSize(newWidth, SWT.DEFAULT).y);
                width = newWidth;
            }
        }
    });

    // Wait until here to set content pane. This way the resize listener will
    // fire when the scrolled composite first resizes mParent, which in turn
    // computes the minimum height and calls setMinHeight()
    scrollBox.setContent(mParent);
}
In listening for size changes, note that we ignore any resize events where the width stays the same. This is because changes in the height of the content do not affect the minimum height of the content, as long as the width is the same.
A: Don't you need to recompute the size of the scrollBox after the layout?
A: Try setting .setMinWidth and .setMinHeight on the ScrolledComposite once the layout has been done, passing it the size of the main composite.
{ "language": "en", "url": "https://stackoverflow.com/questions/35123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Is there a way to perform a circular bit shift in C#?
I know that the following is true
int i = 17; //binary 10001
int j = i << 1; //decimal 34, binary 100010
But, if you shift too far, the bits fall off the end. Where this happens is a matter of the size of integer you are working with. Is there a way to perform a shift so that the bits rotate around to the other side? I'm looking for a single operation, not a for loop.
A: Since .NET Core 3.0 and up there's BitOperations.RotateLeft() and BitOperations.RotateRight(), so you can just use something like
BitOperations.RotateRight(12, 3);
BitOperations.RotateLeft(34L, 5);
In previous versions you can use BitRotator.RotateLeft() and BitRotator.RotateRight() in Microsoft.VisualStudio.Utilities.
A: If you know the size of the type, you could do something like:
uint i = 17;
uint j = i << 1 | i >> 31;
... which would perform a circular shift of a 32 bit value. As a generalization to a circular shift left by n bits, on a b bit variable:
/*some unsigned numeric type*/ input = 17;
var result = input << n | input >> (b - n);
Regarding the comment, it appears that C# does treat the high bit of signed values differently. I found some info on this here. I also changed the example to use a uint.
A: Just as a reference on how to do it, these two functions work perfectly for rotating the bits of a half-word:
static public uint ShiftRight(uint z_value, int z_shift)
{
    return ((z_value >> z_shift) | (z_value << (16 - z_shift))) & 0x0000FFFF;
}

static public uint ShiftLeft(uint z_value, int z_shift)
{
    return ((z_value << z_shift) | (z_value >> (16 - z_shift))) & 0x0000FFFF;
}
It would be easy to extend it for any given size.
A: A year ago I had to implement MD4 for my undergraduate thesis. Here is my implementation of a circular bit shift using a UInt32.
private UInt32 RotateLeft(UInt32 x, Byte n)
{
    return (UInt32)((x << n) | (x >> (32 - n)));
}
A: The extension methods for rotating the bits of a uint (32 bits):
public static uint ROR(this uint x, int nbitsShift)
    => (x >> nbitsShift) | (x << (32 - nbitsShift));

public static uint ROL(this uint x, int nbitsShift)
    => (x << nbitsShift) | (x >> (32 - nbitsShift));
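To see the wrap-around in action, here is a small, self-contained usage sketch of the extension-method approach from the last answer (the class name and the Main harness are just for illustration):
public static class BitRotationDemo
{
    public static uint ROL(this uint x, int n) => (x << n) | (x >> (32 - n));
    public static uint ROR(this uint x, int n) => (x >> n) | (x << (32 - n));

    public static void Main()
    {
        uint value = 0x80000001u;
        // The high bit wraps around to the low end: prints 00000003
        System.Console.WriteLine(value.ROL(1).ToString("X8"));
        // The low bit wraps around to the high end: prints C0000000
        System.Console.WriteLine(value.ROR(1).ToString("X8"));
    }
}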
{ "language": "en", "url": "https://stackoverflow.com/questions/35167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: Verilog automatic task What does it mean if a task is declared with the automatic keyword in Verilog? task automatic do_things; input [31:0] number_of_things; reg [31:0] tmp_thing; begin // ... end endtask; Note: This question is mostly because I'm curious if there are any hardware programmers on the site. :) A: "automatic" does in fact mean "re-entrant". The term itself is stolen from software languages -- for example, C has the "auto" keyword for declaring variables as being allocated on the stack when the scope it's in is executed, and deallocated afterwards, so that multiple invocations of the same scope do not see persistent values of that variable. The reason you may not have heard of this keyword in C is that it is the default storage class for all types :-) The alternatives are "static", which means "allocate this variable statically (to a single global location in memory), and refer to this same memory location throughout the execution of the program, regardless of how many times the function is invoked", and "volatile", which means "this is a register elsewhere on my SoC or something on another device which I have no control over; compiler, please don't optimize reads to me away, even when you think you know my value from previous reads with no intermediate writes in the code". "automatic" is intended for recursive functions, but also for running the same function in different threads of execution concurrently. For instance, if you "fork" off N different blocks (using Verilog's fork->join statement), and have them all call the same function at the same time, the same problems arise as a function calling itself recursively. In many cases, your code will be just fine without declaring the task or function as "automatic", but it's good practice to put it in there unless you specifically need it to be otherwise. A: The "automatic" keyword also allows you to write recursive functions (since verilog 2001). I believe they should be synthesisable if they bottom out, but I'm not sure if they have tool support. I too, do verilog! A: It means that the task is re-entrant - items declared within the task are dynamically allocated rather than shared between different invocations of the task. You see - some of us do Verilog... (ugh) A: As Will and Marty say, the automatic was intended for recursive functions. If a normal (i.e. not automatic) function is called with different values and processed by the simulator in the same time slice, the returned value is indeterminate. That can be quite a tricky bug to spot! This is only a simulation issue, when synthesised the logic will be correct. Making the function automatic fixes this. A: In computing, a computer program or subroutine is called re-entrant if multiple invocations can safely run concurrently (Wikipedia). In simple words, the keyword automatic makes it safe, when multiple instances of a task run at a same time. :D A: Automatic is just opposite to static in usual programming. So is the case with Verilog. Think of static variables, they cannot be re-initialized. See the Verilog description below: for (int i = 0; i < 3; i++) begin static int f = 0; f = f + 1; end Result of the above program will be f = 3. Also, see the program below: for (int i = 0; i < 3; i++) begin int f = 0; f = f + 1; end The result of above program is f = 1. What makes a difference is static keyword. Conclusion is tasks in Verilog should be automatic because they are invoked (called) so many times. 
If they were static (if not declared explicitly, they are static), they could have used the result from the previous call which often we do not want.
{ "language": "en", "url": "https://stackoverflow.com/questions/35170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Regex to replace Boolean with bool I am working on a C++ code base that was recently moved from X/Motif to Qt. I am trying to write a Perl script that will replace all occurrences of Boolean (from X) with bool. The script just does a simple replacement. s/\bBoolean\b/bool/g There are a few conditions. 1) We have CORBA in our code and \b matches CORBA::Boolean which should not be changed. 2) It should not match if it was found as a string (i.e. "Boolean") Updated: For #1, I used lookbehind s/(?<!:)\bBoolean\b/bool/g; For #2, I used lookahead. s/(?<!:)\bBoolean\b(?!")/bool/g</pre> This will most likely work for my situation but how about the following improvements? 3) Do not match if in the middle of a string (thanks nohat). 4) Do not match if in a comment. (// or /**/) A: s/[^:]\bBoolean\b(?!")/bool/g This does not match strings where Boolean is at that the beginning of the line becuase [^:] is "match a character that is not :". A: Watch out with that quote-matching lookahead assertion. That'll only match if Boolean is the last part of a string, but not in the middle of the string. You'll need to match an even number of quote marks preceding the match if you want to be sure you're not in a string (assuming no multi-line strings and no escaped embedded quote marks). A: s/[^:]\bBoolean\b[^"]/bool/g Edit: Rats, beaten again. +1 for beating me, good sir. A: #define Boolean bool Let the preprocesser take care of this. Every time you see a Boolean you can either manually fix it or hope a regex doesn't make a mistake. Depending on how many macros you use you can you could dump the out of cpp. A: To fix condition 1 try: s/[^:]\bBoolean\b(?!")/bool/g The [^:] says to match any character other than ":". A: 3) Do not match if in the middle of a string (thanks nohat). You can perhaps write a reg ex to check ".*Boolean.*". But what if you have quote(") inside the string? So, you have more work to not exclude (\") pattern. 4) Do not match if in a comment. (// or /* */) For '//', you can have a regex to exclude //.* But, better could be to first put a regex to compare the whole line for the // comments ((.*)(//.*)) and then apply replacement only on $1 (first matching pattern). For /* */, it is more complex as this is multiline pattern. One approach can be to first run whole of you code to match multiline comments and then take out only the parts not matching ... something like ... (.*)(/*.**/)(.*). But, the actual regex would be even more complex as you would have not one but more of multi-line comments. Now, what if you have /* or */ inside // block? (I dont know why would you have it.. but Murphy's law says that you can have it). There is obviously some way out but my idea is to emphasize how bad-looking the regex will become. My suggestion here would be to use some lexical tool for C++ and replace the token Boolean with bool. Your thoughts? A: In order to avoid writing a full C parser in perl, you're trying to strike a balance. Depending on how much needs changing, I would be inclined to do something like a very restrictive s/// and then anything that still matches /Boolean/ gets written to an exception file for human decision making. That way you're not trying to parse the C middle strings, multi-line comment, conditional compiled out text, etc. that could be present. A: * *… *… *Do not match if in the middle of a string (thanks nohat). *Do not match if in a comment. (// or /**/) No can do with a simple regex. 
For that, you need to actually look at every single character left-to-right and decide what kind of thing it is, at least well enough to tell apart comments from multi-line comments from strings from other stuff, and then you need to see if the "other stuff" part contains things you want to change. Now, I don't know the exact syntactical rules for comments and strings in C++ so the following is going to be imprecise and completely undebugged, but it'll give you an idea of the complexity you're up against.
my $line_comment      = qr! (?> // .* \n? ) !x;
my $multiline_comment = qr! (?> /\* [^*]* (?: \* (?: [^/*] [^*]* )? )* \*/ ) !x;
my $string            = qr! (?> " [^"\\]* (?: \\ . [^"\\]* )* " ) !x;
my $boolean_type      = qr! (?<!:) \b Boolean \b !x;
$code =~ s{ \G ( $line_comment | $multiline_comment | $string | ( $boolean_type ) | . ) }{ defined $2 ? 'bool' : $1 }gex;
Please don't ask me to explain this in all its intricacies, it would take me a day and another. Just buy and read Jeff Friedl's Mastering Regular Expressions if you want to understand exactly what is going on here.
A: The "'Boolean' in the middle of a string" part sounds a bit unlikely, I'd check first if there is any occurrence of it in the code with something like m/"[^"]*Boolean[^"]*"/ and if there is none or a few, just ignore that case.
{ "language": "en", "url": "https://stackoverflow.com/questions/35178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Finding a single number in a list
What would be the best algorithm for finding a number that occurs only once in a list which has all other numbers occurring exactly twice? So, in the list of integers (let's take it as an array) each integer repeats exactly twice, except one. To find that one, what is the best algorithm?
A: Kyle's solution would obviously not catch situations where the data set does not follow the rules. If all numbers were in pairs the algorithm would give a result of zero, the exact same value as if zero were the only value with a single occurrence. If there were multiple single-occurrence values or triples, the result would be erroneous as well. Testing the data set might well end up with a more costly algorithm, either in memory or time. Csmba's solution does show some erroneous data (none or more than one single-occurrence value), but not others (quadruples). Regarding his solution, depending on the implementation of HT, either memory and/or time is more than O(n). If we cannot be sure about the correctness of the input set, sorting and counting or using a hashtable counting occurrences with the integer itself being the hash key would both be feasible.
A: By the way, you can expand on this idea to very quickly find two unique numbers among a list of duplicates. Let's call the unique numbers a and b. First take the XOR of everything, as Kyle suggested. What we get is a^b. We know a^b != 0, since a != b. Choose any 1 bit of a^b, and use that as a mask -- in more detail: choose x as a power of 2 so that x & (a^b) is nonzero. Now split the list into two sublists -- one sublist contains all numbers y with y&x == 0, and the rest go in the other sublist. By the way we chose x, we know that a and b are in different buckets. We also know that each pair of duplicates is still in the same bucket. So we can now apply ye olde "XOR-em-all" trick to each bucket independently, and discover what a and b are completely. Bam.
A: The fastest (O(n)) and most memory efficient (O(1)) way is with the XOR operation. In C:
int arr[] = {3, 2, 5, 2, 1, 5, 3};
int num = 0, i;
for (i = 0; i < 7; i++)
    num ^= arr[i];
printf("%i\n", num);
This prints "1", which is the only one that occurs once. This works because the first time you hit a number it marks the num variable with itself, and the second time it unmarks num with itself (more or less). The only one that remains unmarked is your non-duplicate.
A: O(N) time, O(N) memory. HT = Hash Table.
HT.clear()
go over the list in order
    for each item you see:
        if (HT.Contains(item)) -> HT.Remove(item)
        else HT.Add(item)
At the end, the item in the HT is the item you are looking for. Note (credit @Jared Updike): this system will find all odd instances of items. Comment: I don't see how people can vote up solutions that give you NLogN performance. In which universe is that "better"? I am even more shocked you marked the accepted answer as the NLogN solution... I do agree however that if memory is required to be constant, then NLogN would be (so far) the best solution.
A: I would say that using a sorting algorithm and then going through the sorted list to find the number is a good way to do it. And now the problem is finding "the best" sorting algorithm. There are a lot of sorting algorithms, each of them with its strong and weak points, so this is quite a complicated question. The Wikipedia entry seems like a nice source of info on that.
A: Implementation in Ruby:
a = [1,2,3,4,123,1,2,.........]
t = a.length - 1
for i in 0..t
    s = a.index(a[i]) + 1
    b = a[s..t]
    w = b.include? a[i]
    if w == false
        puts a[i]
    end
end
A: You need to specify what you mean by "best" - to some, speed is all that matters and would qualify an answer as "best" - for others, they might forgive a few hundred milliseconds if the solution was more readable. "Best" is subjective unless you are more specific. That said: Iterate through the numbers, for each number search the list for that number and when you reach the number that returns only a 1 for the number of search results, you are done.
A: Seems like the best you could do is to iterate through the list, for every item add it to a list of "seen" items or else remove it from the "seen" if it's already there, and at the end your list of "seen" items will include the singular element. This is O(n) in regards to time and n in regards to space (in the worst case, it will be much better if the list is sorted). The fact that they're integers doesn't really factor in, since there's nothing special you can do with adding them up... is there? Question I don't understand why the selected answer is "best" by any standard. O(N*lgN) > O(N), and it changes the list (or else creates a copy of it, which is still more expensive in space and time). Am I missing something?
A: Depends on how large/small/diverse the numbers are though. A radix sort might be applicable which would reduce the sorting time of the O(N log N) solution by a large degree.
A: The sorting method and the XOR method have the same time complexity. The XOR method is only O(n) if you assume that bitwise XOR of two strings is a constant time operation. This is equivalent to saying that the size of the integers in the array is bounded by a constant. In that case you can use Radix sort to sort the array in O(n). If the numbers are not bounded, then bitwise XOR takes time O(k) where k is the length of the bit string, and the XOR method takes O(nk). Now again Radix sort will sort the array in time O(nk).
A: You could simply put the elements in the set into a hash until you find a collision. In ruby, this is a one-liner.
def find_dupe(array)
    h = {}
    array.detect { |e| h[e] || (h[e] = true; false) }
end
So, find_dupe([1,2,3,4,5,1]) would return 1. This is actually a common "trick" interview question though. It is normally about a list of consecutive integers with one duplicate. In this case the interviewer is often looking for you to use the Gaussian sum of n-integers trick e.g. n*(n+1)/2 subtracted from the actual sum. The textbook answer is something like this.
def find_dupe_for_consecutive_integers(array)
    n = array.size - 1 # subtract one from array.size because of the dupe
    array.sum - n*(n+1)/2
end
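As a worked illustration of the "two unique numbers" variant described above (XOR everything, pick a set bit of a^b, then partition the list on that bit and XOR each bucket), here is a small sketch. The method name and the sample data are made up; the technique itself is the one the answer describes.
using System;

public static class SingleNumbers
{
    // Returns the two values that appear exactly once; every other value
    // is assumed to appear exactly twice.
    public static (uint, uint) FindTwoUnique(uint[] numbers)
    {
        uint xorAll = 0;
        foreach (uint n in numbers)
            xorAll ^= n;                       // xorAll == a ^ b

        uint mask = xorAll & (~xorAll + 1);    // lowest set bit of a ^ b

        uint a = 0, b = 0;
        foreach (uint n in numbers)
        {
            if ((n & mask) == 0) a ^= n;       // bucket where the chosen bit is 0
            else b ^= n;                       // bucket where the chosen bit is 1
        }
        return (a, b);
    }

    public static void Main()
    {
        var result = FindTwoUnique(new uint[] { 3, 2, 5, 2, 7, 5, 3, 9 });
        Console.WriteLine(result); // prints the pair containing 7 and 9
    }
}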
{ "language": "en", "url": "https://stackoverflow.com/questions/35185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: How do I fix a NoSuchMethodError? I'm getting a NoSuchMethodError error when running my Java program. What's wrong and how do I fix it? A: Note that in the case of reflection, you get an NoSuchMethodException, while with non-reflective code, you get NoSuchMethodError. I tend to go looking in very different places when confronted with one versus the other. A: I had the same error: Exception in thread "main" java.lang.NoSuchMethodError: com.fasterxml.jackson.core.JsonGenerator.writeStartObject(Ljava/lang/Object;)V at com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:151) at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:292) at com.fasterxml.jackson.databind.ObjectMapper._configAndWriteValue(ObjectMapper.java:3681) at com.fasterxml.jackson.databind.ObjectMapper.writeValueAsString(ObjectMapper.java:3057) To solve it I checked, firstly, Module Dependency Diagram (click in your POM the combination -> Ctrl+Alt+Shift+U or right click in your POM -> Maven -> Show dependencies) to understand where exactly was the conflict between libraries (Intelij IDEA). In my particular case, I had different versions of Jackson dependencies. 1) So, I added directly in my POM of the project explicitly the highest version - 2.8.7 of these two. In properties: <jackson.version>2.8.7</jackson.version> And as dependency: <dependency> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-databind</artifactId> <version>${jackson.version}</version> </dependency> 2) But also it can be solved using Dependency Exclusions. By the same principle as below in example: <dependency> <groupId>group-a</groupId> <artifactId>artifact-a</artifactId> <version>1.0</version> <exclusions> <exclusion> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-databind</artifactId> </exclusion> </exclusions> </dependency> Dependency with unwanted version will be excluded from your project. A: If you have access to change the JVM parameters, adding verbose output should allow you to see what classes are being loaded from which JAR files. java -verbose:class <other args> When your program is run, the JVM should dump to standard out information such as: ... [Loaded junit.framework.Assert from file:/C:/Program%20Files/junit3.8.2/junit.jar] ... A: This can also be the result of using reflection. If you have code that reflects on a class and extracts a method by name (eg: with Class.getDeclaredMethod("someMethodName", .....)) then any time that method name changes, such as during a refactor, you will need to remember to update the parameters to the reflection method to match the new method signature, or the getDeclaredMethod call will throw a NoSuchMethodException. If this is the reason, then the stack trace should show the point that the reflection method is invoked, and you'll just need to update the parameters to match the actual method signature. In my experience, this comes up occasionally when unit testing private methods/fields, and using a TestUtilities class to extract fields for test verification. (Generally with legacy code that wasn't designed with unit testing in mind.) A: If you are writing a webapp, ensure that you don't have conflicting versions of a jar in your container's global library directory and also in your app. You may not necessarily know which jar is being used by the classloader. e.g. 
* *tomcat/common/lib *mywebapp/WEB-INF/lib A: For me it happened because I changed argument type in function, from Object a, to String a. I could resolve it with clean and build again A: In my case I had a multi module project and scenario was like com.xyz.TestClass was in module A and as well as in module B and module A was dependent on module B. So while creating a assembly jar I think only one version of class was retained if that doesn't have the invoked method then I was getting NoSuchMethodError runtime exception, but compilation was fine. Related : https://reflectoring.io/nosuchmethod/ A: Why anybody doesn't mention dependency conflicts? This common problem can be related to included dependency jars with different versions. Detailed explanation and solution: https://dzone.com/articles/solving-dependency-conflicts-in-maven Short answer; Add this maven dependency; <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-enforcer-plugin</artifactId> <version>3.0.0-M3</version> <configuration> <rules> <dependencyConvergence /> </rules> </configuration> </plugin> Then run this command; mvn enforcer:enforce Maybe this is the cause your the issue you faced. A: It means the respective method is not present in the class: * *If you are using jar then decompile and check if the respective version of jar have proper class. *Check if you have compiled proper class from your source. A: I have just solved this error by restarting my Eclipse and run the applcation. The reason for my case may because I replace my source files without closing my project or Eclipse. Which caused different version of classes I was using. A: Try this way: remove all .class files under your project directories (and, of course, all subdirectories). Rebuild. Sometimes mvn clean (if you are using maven) does not clean .class files manually created by javac. And those old files contain old signatures, leading to NoSuchMethodError. A: Just adding to existing answers. I was facing this issue with tomcat in eclipse. I had changed one class and did following steps, * *Cleaned and built the project in eclpise *mvn clean install *Restarted tomcat Still I was facing same error. Then I cleaned tomcat, cleaned tomcat working directory and restarted server and my issue is gone. Hope this helps someone A: Without any more information it is difficult to pinpoint the problem, but the root cause is that you most likely have compiled a class against a different version of the class that is missing a method, than the one you are using when running it. Look at the stack trace ... If the exception appears when calling a method on an object in a library, you are most likely using separate versions of the library when compiling and running. Make sure you have the right version both places. If the exception appears when calling a method on objects instantiated by classes you made, then your build process seems to be faulty. Make sure the class files that you are actually running are updated when you compile. A: If using Maven or another framework, and you get this error almost randomly, try a clean install like... clean install This is especially likely to work if you wrote the object and you know it has the method. A: To answer the original question. According to java docs here: "NoSuchMethodError" Thrown if an application tries to call a specified method of a class (either static or instance), and that class no longer has a definition of that method. 
Normally, this error is caught by the compiler; this error can only occur at run time if the definition of a class has incompatibly changed. * *If it happens in the run time, check the class containing the method is in class path. *Check if you have added new version of JAR and the method is compatible. A: These problems are caused by the use of the same object at the same two classes. Objects used does not contain new method has been added that the new object class contains. ex: filenotnull=/DayMoreConfig.conf 16-07-2015 05:02:10:ussdgw-1: Open TCP/IP connection to SMSC: 10.149.96.66 at 2775 16-07-2015 05:02:10:ussdgw-1: Bind request: (bindreq: (pdu: 0 9 0 [1]) 900 900 GEN 52 (addrrang: 0 0 2000) ) Exception in thread "main" java.lang.NoSuchMethodError: gateway.smpp.PDUEventListener.<init>(Lgateway/smpp/USSDClient;)V at gateway.smpp.USSDClient.bind(USSDClient.java:139) at gateway.USSDGW.initSmppConnection(USSDGW.java:274) at gateway.USSDGW.<init>(USSDGW.java:184) at com.vinaphone.app.ttn.USSDDayMore.main(USSDDayMore.java:40) -bash-3.00$ These problems are caused by the concomitant 02 similar class (1 in src, 1 in jar file here is gateway.jar) A: I fixed this problem in Eclipse by renaming a Junit test file. In my Eclipse work space I have an App project and a Test project. The Test project has the App project as a required project on the build path. Started getting the NoSuchMethodError. Then I realized the class in the Test project had the same name as the class in the App project. App/ src/ com.example/ Projection.java Test/ src/ com.example/ Projection.java After renaming the Test to the correct name "ProjectionTest.java" the exception went away. A: NoSuchMethodError : I have spend couple of hours fixing this issue, finally fixed it by just renaming package name , clean and build ... Try clean build first if it doesn't works try renaming the class name or package name and clean build...it should be fixed. Good luck. A: This is usually caused when using a build system like Apache Ant that only compiles java files when the java file is newer than the class file. If a method signature changes and classes were using the old version things may not be compiled correctly. The usual fix is to do a full rebuild (usually "ant clean" then "ant"). Sometimes this can also be caused when compiling against one version of a library but running against a different version. A: I was having your problem, and this is how I fixed it. The following steps are a working way to add a library. I had done the first two steps right, but I hadn't done the last one by dragging the ".jar" file direct from the file system into the "lib" folder on my eclipse project. Additionally, I had to remove the previous version of the library from both the build path and the "lib" folder. Step 1 - Add .jar to build path Step 2 - Associate sources and javadocs (optional) Step 3 - Actually drag .jar file into "lib" folder (not optional) A: I ran into a similar problem when I was changing method signatures in my application. Cleaning and rebuilding my project resolved the "NoSuchMethodError". A: I've had the same problem. This is also caused when there is an ambiguity in classes. My program was trying to invoke a method which was present in two JAR files present in the same location / class path. Delete one JAR file or execute your code such that only one JAR file is used. Check that you are not using same JAR or different versions of the same JAR that contain the same class. 
DISP_E_EXCEPTION [step] [] [Z-JAVA-105 Java exception java.lang.NoSuchMethodError(com.example.yourmethod)] A: Above answer explains very well ..just to add one thing If you are using using eclipse use ctrl+shift+T and enter package structure of class (e.g. : gateway.smpp.PDUEventListener ), you will find all jars/projects where it's present. Remove unnecessary jars from classpath or add above in class path. Now it will pick up correct one. A: I ran into similar issue. Caused by: java.lang.NoSuchMethodError: com.abc.Employee.getEmpId()I Finally I identified the root cause was changing the data type of variable. * *Employee.java --> Contains the variable (EmpId) whose Data Type has been changed from int to String. *ReportGeneration.java --> Retrieves the value using the getter, getEmpId(). We are supposed to rebundle the jar by including only the modified classes. As there was no change in ReportGeneration.java I was only including the Employee.class in Jar file. I had to include the ReportGeneration.class file in the jar to solve the issue. A: Most of the times java.lang.NoSuchMethodError is caught be compiler but sometimes it can occur at runtime. If this error occurs at runtime then the only reason could be the change in the class structure that made it incompatible. Best Explanation: https://www.journaldev.com/14538/java-lang-nosuchmethoderror A: I've encountered this error too. My problem was that I've changed a method's signature, something like void invest(Currency money){...} into void invest(Euro money){...} This method was invoked from a context similar to public static void main(String args[]) { Bank myBank = new Bank(); Euro capital = new Euro(); myBank.invest(capital); } The compiler was silent with regard to warnings/ errors, as capital is both Currency as well as Euro. The problem appeared due to the fact that I only compiled the class in which the method was defined - Bank, but not the class from which the method is being called from, which contains the main() method. This issue is not something you might encounter too often, as most frequently the project is rebuilt mannually or a Build action is triggered automatically, instead of just compiling the one modified class. My usecase was that I generated a .jar file which was to be used as a hotfix, that did not contain the App.class as this was not modified. It made sense to me not to include it as I kept the initial argument's base class trough inheritance. The thing is, when you compile a class, the resulting bytecode is kind of static, in other words, it's a hard-reference. The original disassembled bytecode (generated with the javap tool) looks like this: #7 = Methodref #2.#22 // Bank.invest:(LCurrency;)V After the ClassLoader loads the new compiled Bank.class, it will not find such a method, it appears as if it was removed and not changed, thus the named error. Hope this helps. A: The problem in my case was having two versions of the same library in the build path. The older version of the library didn't have the function, and newer one did. A: I had a similar problem with my Gradle Project using Intelij. I solved it by deleting the .gradle (see screenshot below) Package and rebuilding the Project. .gradle Package A: I had faced the same issue. I changed the return type of one method and ran the test code of that one class. That is when I faced this NoSuchMethodError. As a solution, I ran the maven builds on the entire repository once, before running the test code again. The issue got resolved in the next single test run. 
A: One instance where this error occurs: I had made the silly mistake of accessing private static member variables in a non-static method. Changing the method to static solved the problem. A: For me, none of the workarounds mentioned here worked. Updating mockito-core from 3.3.3 to 3.4.3 fixed the problem. I think the cause is that the MockitoAnnotations.initMocks() method is deprecated and replaced by MockitoAnnotations.openMocks() as of Mockito 3.4. It may also be worth checking the local Maven repository and deleting unnecessary jars that may cause a conflict. When applying this step, be careful not to delete manually installed ones (or take a backup before the operation). A: In my case, I had to check the other referenced methods. I needed to change the method signature everywhere to match the newly updated signature - for example, changing the return type of a particular method from a Collection to an ArrayList. A: If your file name is different from the name of the class that contains the main method, that can also cause this error.
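To see the failure mode described in several of these answers in isolation, here is a minimal, hypothetical two-class sketch (the class and method names are made up). If the signature of greet is changed and only Greeter.class is recompiled, leaving an old Main.class that was compiled against the original signature on the classpath, running Main fails at call time with java.lang.NoSuchMethodError instead of a compile error:
// Greeter.java -- current version; the method used to be greet(String name)
public class Greeter {
    public String greet(Object name) { return "Hello " + name; }
}
// Main.java -- still compiled against the OLD Greeter, so its bytecode holds
// a hard reference to greet(Ljava/lang/String;)Ljava/lang/String;
public class Main {
    public static void main(String[] args) {
        System.out.println(new Greeter().greet("world"));
        // at runtime: java.lang.NoSuchMethodError: Greeter.greet(Ljava/lang/String;)Ljava/lang/String;
    }
}
Rebuilding both classes together makes the error disappear, which is why the full-rebuild advice above works.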
{ "language": "en", "url": "https://stackoverflow.com/questions/35186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "228" }
Q: Error using Team Foundation Server merge function When merging two code branches in Team Foundation Server I get the following error: The given key was not present in the dictionary. Some files are checked out and show up in "Pending Changes", but no changes are actually made. I have a workaround: * *Attempt to merge (fails with error) *Get latest from trunk *Undo all pending changes with "merge, edit" or "merge" *Merge Again (works this time) Any ideas on what's causing this error? Edit after answer: Seems like a bug. And it's extremely repeatable. Every single merge does it. I'll send a bug report to MS and see what happens. A: Sounds like a bug. If you can replicate this, I recommend you contact Microsoft Support or use the Microsoft Connect bug reporting web site. I did not find any mention of this in a preliminary search.
{ "language": "en", "url": "https://stackoverflow.com/questions/35191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Working in Visual Studio (2005 or 2008) on a networked drive Have you guys had any experiences (positive or negative) with placing your source code/solution on a network drive for Visual Studio 2005 or 2008? Please note I am not referring to placing your actual source control system on that drive, but rather your working folder. Thanks A: It works just fine. I have worked with source code from my "home" folder on many different systems (NFS, Samba, AD) and never had any problems. The only drawback is that you might experience somewhat longer compile times if your network is slow or there is heavy traffic on the network. Under normal circumstances this is not an issue though, since source code files are usually small and will be cached by the operating system anyway. A: Some folks in our company do that with their external dependencies, and they get occasional build errors, usually because a library or header can't be retrieved. When they rebuild, it all works. Of course the speed and traffic level of your network has a major effect on this.
{ "language": "en", "url": "https://stackoverflow.com/questions/35194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: requiredfield validator is preventing another form from submitting I have a page with many forms in panels and usercontrols, and a requiredfield validator I just added to one form is preventing all of my other forms from submitting. What's the rule that I'm not following? A: Are you using ValidationGroups? Try assigning a validation group to each control as well as to the validator that you want to use. Something like: <asp:TextBox ID="txt1" ValidationGroup="Group1" runat="server" /> <asp:RequiredFieldValidator ID="rfv1" ... ValidationGroup="Group1" /> Note: if a button doesn't specify a validation group, it will validate all controls that aren't assigned to a validation group. A: You should set the ValidationGroup property to a different value for each group of elements. Your validator's ValidationGroup must match the one on the control that submits its form.
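As a rough illustration of the ValidationGroup suggestion above (the control IDs and group names here are made up, not taken from the original page), each submit button only triggers the validators in its own group, so the two forms no longer interfere:
<asp:TextBox ID="txt1" ValidationGroup="Group1" runat="server" />
<asp:RequiredFieldValidator ID="rfv1" ControlToValidate="txt1" ValidationGroup="Group1" ErrorMessage="Required" runat="server" />
<asp:Button ID="btnSubmit1" Text="Submit form 1" ValidationGroup="Group1" runat="server" />

<asp:TextBox ID="txt2" ValidationGroup="Group2" runat="server" />
<asp:RequiredFieldValidator ID="rfv2" ControlToValidate="txt2" ValidationGroup="Group2" ErrorMessage="Required" runat="server" />
<asp:Button ID="btnSubmit2" Text="Submit form 2" ValidationGroup="Group2" runat="server" />
Clicking btnSubmit1 validates only rfv1, so an empty txt2 no longer blocks the first form's postback (and vice versa); a button with CausesValidation="false" skips validation entirely.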
{ "language": "en", "url": "https://stackoverflow.com/questions/35208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Identify an event via a Linq Expression tree The compiler usually chokes when an event doesn't appear beside a += or a -=, so I'm not sure if this is possible. I want to be able to identify an event by using an Expression tree, so I can create an event watcher for a test. The syntax would look something like this: using(var foo = new EventWatcher(target, x => x.MyEventToWatch) { // act here } // throws on Dispose() if MyEventToWatch hasn't fired My questions are twofold: * *Will the compiler choke? And if so, any suggestions on how to prevent this? *How can I parse the Expression object from the constructor in order to attach to the MyEventToWatch event of target? A: Edit: As Curt has pointed out, my implementation is rather flawed in that it can only be used from within the class that declares the event :) Instead of "x => x.MyEvent" returning the event, it was returning the backing field, which is only accessble by the class. Since expressions cannot contain assignment statements, a modified expression like "( x, h ) => x.MyEvent += h" cannot be used to retrieve the event, so reflection would need to be used instead. A correct implementation would need to use reflection to retrieve the EventInfo for the event (which, unfortunatley, will not be strongly typed). Otherwise, the only updates that need to be made are to store the reflected EventInfo, and use the AddEventHandler/RemoveEventHandler methods to register the listener (instead of the manual Delegate Combine/Remove calls and field sets). The rest of the implementation should not need to be changed. Good luck :) Note: This is demonstration-quality code that makes several assumptions about the format of the accessor. Proper error checking, handling of static events, etc, is left as an exercise to the reader ;) public sealed class EventWatcher : IDisposable { private readonly object target_; private readonly string eventName_; private readonly FieldInfo eventField_; private readonly Delegate listener_; private bool eventWasRaised_; public static EventWatcher Create<T>( T target, Expression<Func<T,Delegate>> accessor ) { return new EventWatcher( target, accessor ); } private EventWatcher( object target, LambdaExpression accessor ) { this.target_ = target; // Retrieve event definition from expression. var eventAccessor = accessor.Body as MemberExpression; this.eventField_ = eventAccessor.Member as FieldInfo; this.eventName_ = this.eventField_.Name; // Create our event listener and add it to the declaring object's event field. this.listener_ = CreateEventListenerDelegate( this.eventField_.FieldType ); var currentEventList = this.eventField_.GetValue( this.target_ ) as Delegate; var newEventList = Delegate.Combine( currentEventList, this.listener_ ); this.eventField_.SetValue( this.target_, newEventList ); } public void SetEventWasRaised( ) { this.eventWasRaised_ = true; } private Delegate CreateEventListenerDelegate( Type eventType ) { // Create the event listener's body, setting the 'eventWasRaised_' field. var setMethod = typeof( EventWatcher ).GetMethod( "SetEventWasRaised" ); var body = Expression.Call( Expression.Constant( this ), setMethod ); // Get the event delegate's parameters from its 'Invoke' method. var invokeMethod = eventType.GetMethod( "Invoke" ); var parameters = invokeMethod.GetParameters( ) .Select( ( p ) => Expression.Parameter( p.ParameterType, p.Name ) ); // Create the listener. 
var listener = Expression.Lambda( eventType, body, parameters ); return listener.Compile( ); } void IDisposable.Dispose( ) { // Remove the event listener. var currentEventList = this.eventField_.GetValue( this.target_ ) as Delegate; var newEventList = Delegate.Remove( currentEventList, this.listener_ ); this.eventField_.SetValue( this.target_, newEventList ); // Ensure event was raised. if( !this.eventWasRaised_ ) throw new InvalidOperationException( "Event was not raised: " + this.eventName_ ); } } Usage is slightly different from that suggested, in order to take advantage of type inference: try { using( EventWatcher.Create( o, x => x.MyEvent ) ) { //o.RaiseEvent( ); // Uncomment for test to succeed. } Console.WriteLine( "Event raised successfully" ); } catch( InvalidOperationException ex ) { Console.WriteLine( ex.Message ); } A: I too wanted to do this, and I have come up with a pretty cool way that does something like Emperor XLII idea. It doesn't use Expression trees though, as mentioned this can't be done as Expression trees do not allow the use of += or -=. We can however use a neat trick where we use .NET Remoting Proxy (or any other Proxy such as LinFu or Castle DP) to intercept a call to Add/Remove handler on a very short lived proxy object. The role of this proxy object is to simply have some method called on it, and to allow its method calls to be intercepted, at which point we can find out the name of the event. This sounds weird but here is the code (which by the way ONLY works if you have a MarshalByRefObject or an interface for the proxied object) Assume we have the following interface and class public interface ISomeClassWithEvent { event EventHandler<EventArgs> Changed; } public class SomeClassWithEvent : ISomeClassWithEvent { public event EventHandler<EventArgs> Changed; protected virtual void OnChanged(EventArgs e) { if (Changed != null) Changed(this, e); } } Then we can have a very simply class that expects an Action<T> delegate that will get passed some instance of T. Here is the code public class EventWatcher<T> { public void WatchEvent(Action<T> eventToWatch) { CustomProxy<T> proxy = new CustomProxy<T>(InvocationType.Event); T tester = (T) proxy.GetTransparentProxy(); eventToWatch(tester); Console.WriteLine(string.Format("Event to watch = {0}", proxy.Invocations.First())); } } The trick is to pass the proxied object to the Action<T> delegate provided. 
Where we have the following CustomProxy<T> code, who intercepts the call to += and -= on the proxied object public enum InvocationType { Event } public class CustomProxy<T> : RealProxy { private List<string> invocations = new List<string>(); private InvocationType invocationType; public CustomProxy(InvocationType invocationType) : base(typeof(T)) { this.invocations = new List<string>(); this.invocationType = invocationType; } public List<string> Invocations { get { return invocations; } } [SecurityPermission(SecurityAction.LinkDemand, Flags = SecurityPermissionFlag.Infrastructure)] [DebuggerStepThrough] public override IMessage Invoke(IMessage msg) { String methodName = (String) msg.Properties["__MethodName"]; Type[] parameterTypes = (Type[]) msg.Properties["__MethodSignature"]; MethodBase method = typeof(T).GetMethod(methodName, parameterTypes); switch (invocationType) { case InvocationType.Event: invocations.Add(ReplaceAddRemovePrefixes(method.Name)); break; // You could deal with other cases here if needed } IMethodCallMessage message = msg as IMethodCallMessage; Object response = null; ReturnMessage responseMessage = new ReturnMessage(response, null, 0, null, message); return responseMessage; } private string ReplaceAddRemovePrefixes(string method) { if (method.Contains("add_")) return method.Replace("add_",""); if (method.Contains("remove_")) return method.Replace("remove_",""); return method; } } And then we all that's left is to use this as follows class Program { static void Main(string[] args) { EventWatcher<ISomeClassWithEvent> eventWatcher = new EventWatcher<ISomeClassWithEvent>(); eventWatcher.WatchEvent(x => x.Changed += null); eventWatcher.WatchEvent(x => x.Changed -= null); Console.ReadLine(); } } Doing this I will see this output: Event to watch = Changed Event to watch = Changed A: A .NET event isn't actually an object, it's an endpoint represented by two functions -- one for adding and one for removing a handler. That's why the compiler won't let you do anything other than += (which represents the add) or -= (which represents the remove). The only way to refer to an event for metaprogramming purposes is as a System.Reflection.EventInfo, and reflection is probably the best way (if not the only way) to get ahold of one. EDIT: Emperor XLII has written some beautiful code which should work for your own events, provided you've declared them from C# simply as public event DelegateType EventName; That's because C# creates two things for you from that declaration: * *A private delegate field to serve as the backing storage for the event *The actual event along with implementation code that makes use of the delegate. Conveniently, both of these have the same name. That's why the sample code will work for your own events. However, you can't rely on this to be the case when using events implemented by other libraries. In particular, the events in Windows Forms and in WPF don't have their own backing storage, so the sample code will not work for them. A: While Emperor XLII already gave the answer for this, I thought it was worth while to share my rewrite of this. Sadly, no ability to get the Event via Expression Tree, I'm using the name of the Event. 
public sealed class EventWatcher : IDisposable { private readonly object _target; private readonly EventInfo _eventInfo; private readonly Delegate _listener; private bool _eventWasRaised; public static EventWatcher Create<T>(T target, string eventName) { EventInfo eventInfo = typeof(T).GetEvent(eventName); if (eventInfo == null) throw new ArgumentException("Event was not found.", eventName); return new EventWatcher(target, eventInfo); } private EventWatcher(object target, EventInfo eventInfo) { _target = target; _eventInfo = eventInfo; _listener = CreateEventDelegateForType(_eventInfo.EventHandlerType); _eventInfo.AddEventHandler(_target, _listener); } // SetEventWasRaised() // CreateEventDelegateForType void IDisposable.Dispose() { _eventInfo.RemoveEventHandler(_target, _listener); if (!_eventWasRaised) throw new InvalidOperationException("event was not raised."); } } And usage is: using(EventWatcher.Create(o, "MyEvent")) { o.RaiseEvent(); }
{ "language": "en", "url": "https://stackoverflow.com/questions/35211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Optimizing/Customizing Sharepoint Search Crawling With SharePoint Server 2007, there is also a Search feature and a Crawler. However, the Crawler is somewhat limited in that it only supports Basic Auth when crawling external sites and that there is no way to tell it to ignore no-index,no-follow attributes. Now, there is a site I'd like to index; unfortunately this site uses its own authentication system, and it uses no-index,no-follow on the pages. As I control that site, I can remove the attributes, but it's a PITA to do so. Also, it does not solve the authentication issue. So I just wonder if it's possible to extend SharePoint's Crawler somehow? A: The limitation of MOSS crawling sites with different forms authentication should have been addressed in MOSS SP1: http://www.microsoft.com/downloads/details.aspx?FamilyID=ad59175c-ad6a-4027-8c2f-db25322f791b&displaylang=en Here's a link to a post which describes how to get the hotfix for pre-SP1 MOSS to enable the crawling of sites with forms authentication: http://blogs.microsoft.co.il/blogs/adir_ron/archive/2007/10/11/moss-search-for-sso-form-based-authentication-sites.aspx Hope that helps!
{ "language": "en", "url": "https://stackoverflow.com/questions/35219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Flex ComboBox, default value and dataproviders I have a Flex ComboBox that gets populated by a dataprovider, and all is well... I would now like to add a default " -- select an item --" option at the 0 index; how can I do this and still use a dataprovider? I have not seen any examples of such, but I can't imagine this being hard... A: I came across this problem today and wanted to share my solution. I have a ComboBox that has an ArrayCollection containing Objects as its dataprovider. When the application runs, it uses a RemoteObject to go out and get the ArrayCollection/Objects. In my event handler for that call I just have it append another object to the beginning of the ArrayCollection and select it: var defaultOption:Object = {MyLabelField: "Select One"}; myDataProvider.addItemAt(defaultOption, 0); myComboBox.selectedIndex = 0; This is what my ComboBox looks like for reference: <mx:ComboBox id="myComboBox" dataProvider="{myDataProvider}" labelField="MyLabelField" /> A: If you don't need the default item to be selectable, you can use the prompt property of ComboBox and set the selectedIndex to -1. That will show the string you set prompt to as the selected value until the user chooses another. It will not appear in the list of options, however. A: The way I've dealt with this in the past is to create a new collection to serve as the data provider for the combobox, and then I listen for changes to the original source (using an mx.BindingUtils.ChangeWatcher). When I get such a notification, I recreate my custom data provider. I wish I knew a better way to approach this; I'll monitor this question just in case. A: The following code can be used to select a default value in the ComboBox: var index:String = "foo"; for(var objIndex:int = 0; objIndex < comboBox.dataProvider.length; objIndex++) { if(comboBox.dataProvider[objIndex].label == index) { comboBox.selectedIndex = objIndex; break; } } <mx:ComboBox id="comboBox" dataProvider="{_pageIndexArray}" />
{ "language": "en", "url": "https://stackoverflow.com/questions/35224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Would building an application using a Sql Server Database File (mdf) be a terrible idea? I'm working on a side project that would be a simple web application to maintain a list of classes and their upcoming schedules. I would really like to use Linq to SQL for this project, but unfortunately the server environment I'm developing for only has MySql available. I've dabbled briefly with Subsonic but it just doesn't get the job done. The database requirements for this application aren't that great, though, so I'm curious if using an MDF file in App_Data would be a workable solution. Otherwise, it appears I'm going to have to hand-code sql queries which I want to avoid at all costs. A: To the best of my knowledge, you can attach directly to the MDF (called a "user instance", rather than attaching the MDF to a "server instance") only if SQL Server Express is installed on that machine. So your machine that has MySql on it would also have to run SQL Server Express. A: I've long since completed the project which prompted this question, but recently I've had another project come along with very minor data requirements, so I spent some more time experimenting with this. I had assumed that Sql Server Express required licensing fees to deploy, but this is not in fact the case. According to Microsoft's website, you are free to use it with certain restrictions: * *Maximum database size: 4 GB *Maximum memory used: 1 GB *Maximum CPUs used: 1 (complete procs, not cores) Sql Server Compact is a bad idea for web applications because it requires a hack to make it work, and it isn't built for the concurrent access you'd need for the web. But if your application can fit within the modest limitations of Sql Server Express, it works pretty well. And since it speaks regular T-SQL like its larger siblings, you can use Linq to SQL with it. I hear that Linq to Sql support is now in the Mono trunk for the 2.6 release, so L2S' tight-coupling to Sql Server will likely be a moot point in the near future. I will either end up porting my code to use Mono's superior Linq to Sql implementation on the db of my choice, or go another route entirely (SubSonic has improved by leaps and bounds since I last tried it). But for the time being, Sql Server Express is a valid choice for very small database-driven apps. A: Take a look at Microsoft SQL Server Compact Edition. I believe you can work with MDF files without having to run a server. All code runs in process. I believe it has some limitations but it may work for you and I think it's free. A: More likely you'd put an Access database in App_Data. If you're using a MSSQL MDF file, you'll definitely still need either MSSQL or MSSQL-Express. Your question is confusing, however. You seem to interchanging data access, ORM and the actual database. You can use SubSonic with MySQL, but you cannot use LINQ to SQL with non-MS databases or MS Access. A: One of the few differences between SQL Server Express and the "full" SQL Server is the ability to automatically attach to MDF files - what Microsoft call "xcopy deployment". SQL Server Express is free (as in beer) so unless you have no administrator rights on the box for installation, this should work fine. A: +1 for SQL Server Compact. It's free and there's no 'engine' in the sense of a full-time service, but you do have to deploy a runtime (it's just two .dll files). A: I don't understand... what do you mean by "having an MDF file in App_Data"? You need a proper SQL Server installation for that to work. 
You can always use the free SQL Server Express for developing the application, and then move the database to the proper SQL Server once you are done. Check here. A: It appears that I was misunderstanding how mdf files are accessed through .net. There is no MS SQL Server available on the server, so it looks like I'm screwed. A: +1 for SQL Server Compact. It's free and there's no 'engine' in the sense of a full-time service, but you do have to deploy a runtime (it's just two .dll files). Does linq to sql work with that though? A: you can't use SQL Server Compact with asp.net or web development
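For reference, the user-instance scenario the SQL Server Express answers describe is driven by the connection string; a typical one looks something like the following (the .mdf file name is a placeholder, and User Instance=True only works against SQL Server Express):
Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\MyApp.mdf;Integrated Security=True;User Instance=True
The |DataDirectory| token resolves to the App_Data folder in an ASP.NET application, which is what makes the xcopy-deployment style mentioned above possible.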
{ "language": "en", "url": "https://stackoverflow.com/questions/35232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Rewrite or repair? I'm sure you have all been there, you take on a project where there is a creaky old code base which is barely fit for purpose and you have to make the decision to either re-write it from scratch or repair what already exists. Conventional wisdom tends to suggest that you should never attempt a re-write from scratch as the risk of failure is very high. So what did you do when faced with this problem, how did you make the decision and how did it turn out? A: See Joel Spolsky's essay Things You Should Never Do. In summary, when you rewrite you lose all the lessons you learned to make your current code work the way it needs to work. See also: Big Ball of Mud A: It is rare for a re-write of anything complex to succeed. It's tempting, but a low percentage strategy. Get legacy code under unit tests and refactor it, and/or completely replace small portions of it incrementally when opportune. A: Refactor unless it is very bad indeed. Joel has a lot to say on this... At the very least, rewrite the code with the old code in front of you and don't just start over from scratch. The old code may be terrible, but it is the way it is for a reason and if you ignore it you'll end up seeing the same bugs that were probably fixed years ago in the old code. A: One reason for rewriting at one of my previous jobs was an inability to find developers with enough experience to work on the original code base. The decision was made to first clean up the underlying database structure, then rewrite in something that would make it easier to find full-time employees and/or contractors. I haven't heard yet how it worked out :) I think people have a tendency to go for rewrites because it seems more fun on the surface. We get to rebuild from scratch! We'll do it right this time! etc. A: There is a new book coming out, Brownfield Application Development in .NET by Baley and Belcham. The first chapter is free, and talks about these issues from a mostly platform agnostic perspective. A: Repair, or more importantly, refactor. Both because Joel said so and also because, if it's your code, you've probably learned a ton more stuff since you touched this code last. If you wrote it in .NET 1.1, you can upgrade it to 3.5 SP1. You get to go in and purge all the old commented out code. You're 100x better as a developer now than when you first wrote this code. The one exception I think is when the code uses really antiquated technologies - in which case you might be better served by writing a new version. If you're looking at some VB6 app with 10,000 lines of code with an Access database backend obviously set up by someone who didn't know much about how databases work (which could very well be you eight years ago) then you can probably pull off a quicker, C#/SQL-based solution in a fraction of the time and code. A: Just clean up the code a little bit every time you work with it. If there isn't one already, setup a unit testing framework. All new code should get tests written. Any old code you fix as a result of bugs, try to slide in tests too. As the cleanups progress, you'll be able to sweep more and more of the nasty code into encapsulated bins. Then you can pick those off one by one in the future. A tool like javadoc or doxygen, if not already in use, can also help improve code documentation and comprehensibility. The arguments against a complete rewrite a pretty strong. Those tons of "little bugs" and behaviors that were coded in over the time frame of the original project will sneak right back in again. 
A: It really depends on how bad it is. If it's a small system, and you fully understand it, then a rewrite is not crazy. On the other hand, if it's a giant legacy monster with ten million lines of undocumented mystery code, then you're really going to have a hard time with a full rewrite. Points to consider: * *If it looks good to the user, they won't care what kind of spaghetti mess it is for you. On the other hand, if it's bad for them too, then it's easier to get agreement (and patience). *If you do rewrite, try to do it one part at a time. A messy, disorganized codebase may make this difficult (i.e, replacing just one part requires a rewrite of large icebergs of dependency code), but if possible, this makes it a lot easier to gradually do the rewrite and get feedback from users along the way. I would really hesitate to take on a giant rewrite project for a large system without being able to release the new edition one part at a time. A: It's not so black and white... it really depends on a lot of factors (the more important being "what does the person paying you want you to do") Where I work we re-wrote a development framework, and on the other hand, we keep modifying some old systems that cannot be migrated (because of the client's technology and time restrictions). In this case, we try to mantain the coding style and sometimes you have to implement a lot of workarounds because of the way it was built A: Depending on your situation, you might have another option: in-license third-party code. I've consulted at a couple of companies where that would be the sensible choice, although seemingly "throwing away IP" can be a big barrier for management. At my current company, we seriously considered the viable option of using third-party code to replace our core framework, but that idea was ultimately rejected more for business reasons than technical reasons. To directly answer your question, we finally chose to rewrite the legacy framework - a decision we didn't take lightly! 14 months on, we don't regret this choice at all. Just considering the time spent fixing bugs, our new framework has nearly paid for itself. On the negative side, it is not quite feature-complete yet so we are in the unenviable position of maintaining two separate frameworks in parallel until we can port the last of our "front-end" applications. A: I highly recommend reading "Working Effectively with Legacy Code" by Michael Feathers. It's coaching advice on how to refactor your code so that it is unit testable.
{ "language": "en", "url": "https://stackoverflow.com/questions/35233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Retrieving HTTP status code from loaded iframe with Javascript I used the jQuery Form plugin for asynchronous form submission. For forms that contain files, it copies the form to a hidden iframe, submits it, and copies back the iframe's contents. The problem is that I can't figure out how to find what HTTP status code was returned by the server. For example, if the server returns 404, the data from the iframe will be copied as normal and treated as a regular response. I've tried poking around in the iframe objects looking for some sort of status_code attribute, but haven't been able to find anything like that. The $.ajax() function can't be used, because it does not support uploading files. The only way to asynchronously upload files that I know of is using the hidden iframe method. A: You can't get page headers by JS, but you can distinguish error from success: Try something like this: <script type="text/javascript"> var uploadStarted = false; function OnUploadStart(){ uploadStarted = true; } function OnUploadComplete(state,message){ if(state == 1) alert("Success: "+message); else if(state == 0 && uploadStarted) alert("Error:"+( message ? message : "unknown" )); } </script> <iframe id="uploader" name="uploader" onload="OnUploadComplete(0)" style="width:0px;height:0px;border:none;"></iframe> <form id="sender" action="/upload.php" method="post" target="uploader" enctype="multipart/form-data" onsubmit="OnUploadStart()"> <input type="file" name="files[upload]"/> <input type="submit" value="Upload"/> </form> On the server side: /* file: upload.php */ <?php // do some stuff with the file print '<script type="text/javascript">'; if(success) print 'window.parent.OnUploadComplete(1,"File uploaded!");'; else print 'window.parent.OnUploadComplete(0, "File too large!");'; print '</script>'; ?> A: You can't retrieve the HTTP status code from a loaded "iframe" directly. But when an HTTP error occurs, the server returns nothing to the "iframe", so the iframe has no content. You can check the iframe body: when the body of the iframe is blank, use Ajax with the same URL to get the response from the server. Then you can retrieve the HTTP status code from that response.
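A rough sketch of that fallback idea, assuming jQuery is available on the page (the element ID and the uploadUrl variable are made up): when the hidden iframe loads with an empty body, re-request the same URL just to read a status code. Note that this second request does not replay the original POST, so it only approximates the original outcome.
$('#uploader').on('load', function () {
    var body = $(this).contents().find('body').html();
    if (!body) {
        // the iframe came back empty, so ask the server again only for the status code
        $.ajax({
            url: uploadUrl,   // same URL the form posted to
            type: 'HEAD',
            complete: function (xhr) {
                console.log('Upload failed with HTTP status ' + xhr.status);
            }
        });
    }
});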
{ "language": "en", "url": "https://stackoverflow.com/questions/35240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: Large File Download Internet Explorer has a file download limit of 4GB (2 GB on IE6). Firefox does not have this problem (haven't tested Safari yet). (More info here: http://support.microsoft.com/kb/298618) I am working on a site that will allow the user to download very large files (up to and exceeding 100GB). What is the best way to do this without using FTP? The end user must be able to download the file from their browser using HTTP. I don't think Flash or Silverlight can save files to the client, so as far as I know they won't cut it. I'm guessing we will need an ActiveX or Java applet to pull this off. Something like the download manager that MSDN uses. Does anyone know of a commercial (or free) component that will do that? We do not want the user to have to install a "browser wide" download manager (like GetRight); we want it to only work with downloading on our site. Update: Here is some additional info to help clarify what I'm trying to do. Most of the files above the 4GB limit would be large HD video files (it's for a video editing company). These will be downloaded by users across the internet; this isn't going to be people on a local network. We want the files to be available via HTTP (some users are going to be behind firewalls that aren't going to allow FTP, BitTorrent, etc.). There will be a library of files the end user could download, so we aren't talking about a one-time large download. They will be downloading different large files on a semi-regular basis. So far Vault, which @Edmund-Tay suggested, is the closest solution. The only problem is that it doesn't work for files larger than 4GB (it instantly fails before starting the download; they are probably using a 32-bit integer somewhere which is exceeded/overflowed by the content length of the file). The best solution would be a Java applet or ActiveX component, since the problem only exists in IE, that would work like the article @spoulson linked to. However, so far I haven't had any luck finding a solution that does anything like that (multipart downloads, resume, etc.). It looks like we might have to write our own. Another option would be to write a .Net application (maybe ClickOnce) that is associated with an extension or mime type. Then the user would actually be downloading a small file from the web server that opens in the exe/ClickOnce app and tells the application what file to download. That is how the MSDN downloader works. The end user would then only have to download/install an EXE once. That would be better than downloading an exe every time they wanted to download a large file. A: @levand: My actual preference, as a user, in these situations is to download a lightweight .exe file that downloads the file for you. That's a dealbreaker for many, many sites. Users either are or should be extremely reluctant to download .exe files from websites and run them willy-nilly. Even if they're not always that cautious, incautious behaviour is not something we should encourage as responsible developers. If you're working on something along the lines of a company intranet, a .exe is potentially an okay solution, but for the public web? No way. @TonyB: What is the best way to do this without using FTP? I'm sorry, but I have to ask why the requirement. Your question reads to me along the lines of "what's the best way to cook a steak without any meat or heat source?" FTP was designed for this sort of thing. A: bittorrent? 
There have been a few web-based versions already (bitlet, w3btorrent), and Azureus was built using java, so it's definitely possible. Edit: @TonyB is it limited to port 80? A: Please don't use ActiveX... I am so sick of sites that are only viewable in IE. My actual preference, as a user, in these situations is to download a lightweight .exe file that downloads the file for you. A: Can you split the files into pieces and then rejoin them after the download? A: If you don't want to write java code in-house, there are commercial applet solutions available: * *Vault *MyDownloder Both of them have eval versions that you can download and test. A: A few ideas: * *Blizzard use a light-weight .exe BitTorrent wrapper for their patches. I'm not entirely sure how it is done, but it looks like a branded version of the official BitTorrent client. *Upload to Amazon S3, provide the torrent link of the file (all S3 files are automatically BitTorrent-enabled), plus the full HTTP download link as alternative. See S3 documentation A: What about saying "We recommend that you install Free Download Manager to download this file. You will have the added benefit of being able to resume the file and accelerate the download." Personally I never download anything using the built in browser download tool unless I have to (e.g. Gmail attachments) A: @travis Unfortunately It has to be over HTTP inside the users browser. I'll update the question to be more clear about that. A: @levand The problem only exist in IE (it works in Firefox) so while ActiveX would only work on IE, IE is the only one we need the work around for. @travis - interesting idea. Not sure if it will work for what I need but I'll keep it in mind. I'm hoping to find something to integrate with the existing site instead of having to go out to a third party. It would also require me to setup a bittorrent tracker which wouldn't be as easy as it sounds for this application because different users will have different access to different files. A: @jjnguy I'm looking for a java applet or ActiveX component that will do that for me. These are non-technical users so we really just want to have them click download and the full file ends up in the specified location A: @ceejayoz I totally agree but its part of the requirement for our client. There will be FTP access but each user will have the option of downloading via HTTP or FTP. There are some users that will be behind corporate firewalls that don't permit FTP I have seen other sites do this in the past (MSDN, Adobe) so I was hoping there is something out there already instead of having to make one in house (and learning java and/or ActiveX) A: I say click-once installed download manager, similar to msdn. But becoming a CDN without a more optimized protocol for the job is no easy task. I can't imagine a business model that can be worthwhile enough to have such large file downloads as a core competency unless you are doing something like msdn. If you create a thick client, you at least get the chance to get some more face time with the users, for advertising or some other revenue model, since you will probably be paying in the hundreds of thousands of dollars to host such a service. A: The problem with the applet approach mentioned is that unless you have the end user modify their java security properties your applet will not have permission to save to the hard drive. It may be possible using Java Web Start (aka JNLP). I think that if it is a signed app it can get the extra permission to write to the hard drive. 
This is not too different from the download-an-exe approach. The problem with this is that the user has to have the correct version of Java installed and has to have the Java Web Start capability set up correctly. I would recommend the exe approach as it will be the easiest for the non-technical users to use. A: There are some users that will be behind corporate firewalls that don't permit FTP... Are users with restrictive firewalls like that likely to be permitted to install and run a .exe file from your website? A: Take a look at cURL. This article describes how to do a multi-part simultaneous download via HTTP. I've used cURL in the past to manage FTP downloads of files over 300GB. Another tip: You can boost download times even more if you increase the size of the TCP window in the client's NIC configuration. Set it as high as the OS allows and you should see up to a 2x improvement depending on your physical network. This worked for me on Windows 2000 and 2003 when FTPing over a WAN. The downside is it may increase overhead for all other network traffic that wants only a few KB for a network packet, but is now forced to send/recv in 64KB packets. Your mileage may vary. Edit: What exactly is it you're trying to accomplish? Who is the audience? I assumed for a bit that you're looking to do this over your own network, but you seem to imply the client side is someone on the internet. I think we need clearer requirements. A: * *Create a folder of files to be downloaded on the server where the document service is running (either using Linux commands or using Java to execute shell commands). *Write the file to be downloaded to this folder (using a Linux command or a Java shell command is OK); for efficiency of execution, the wget command is used here. *Package the downloaded folder as a zip file (using a shell command), configure an nginx agent, return the nginx access path for the file to the front end, and then download it from the front end.
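To make the cURL suggestion above concrete, a large file can be pulled down in ranged pieces and stitched back together; a rough sketch with a made-up URL and offsets (the server must honour HTTP range requests):
curl -o part1 --range 0-1073741823 http://example.com/files/big-video.mov
curl -o part2 --range 1073741824-2147483647 http://example.com/files/big-video.mov
curl -o part3 --range 2147483648- http://example.com/files/big-video.mov
cat part1 part2 part3 > big-video.mov   # on Windows: copy /b part1+part2+part3 big-video.mov
Running the per-range commands in parallel is what gives the simultaneous multi-part download, and curl -C - can resume a piece that was interrupted.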
{ "language": "en", "url": "https://stackoverflow.com/questions/35248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Managing multiple identical databases efficiently? How, if you have a database per client of a web application instead of one database used by all clients, do you go about providing updates and enhancements to all databases efficiently? How do you roll out changes to schema and code in such a scenario? A: It's kinda difficult for us. We have a custom program that writes a lot of the SQL code for the different databases for us. Essentially it writes the code once and then copies it over and over again, along with placing the change-database commands, etc. It also makes sure that the primary key identities etc. are in sync when they need to be. Beyond that I would look at Red Gate's products. They have saved us more than once here. With them you can easily compare the dbs and see what is different. A must when dealing with multiple copies. A: Use a code generator / scripting language to implement the original schema and updates to it over time. A: I've used Red Gate's SQL Packager for this in the past. The beauty of this tool is that it creates a C# project for you that actually does the work, so if you need to, you can extend the functionality of the default package to do other things, like insert default values into new columns that have been added to the db. In the end you have a nice tool that you can hand to a technician, and all they have to do to upgrade multiple DBs is point it to the database and click a button. Red Gate also has a product called SQL Multi Script that allows you to run scripts against multiple servers/dbs at the same time. I've never used this tool, but I imagine if you're looking for something to use internally that doesn't need to be packaged up, you'd want to look at that.
{ "language": "en", "url": "https://stackoverflow.com/questions/35256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I restore files to previous states in git? Given the following interaction: $ git add foo $ git commit -m "Initial import of 'foo'" $ rm foo # This could be any destructive action on foo, like editing it. How do I restore 'foo' in my working copy? I'm looking for something like: $ git <magic> foo Restored foo to revision <blah>.
{ "language": "en", "url": "https://stackoverflow.com/questions/35284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "83" }
Q: DOS filename escaping for use with *nix commands I want to escape a DOS filename so I can use it with sed. I have a DOS batch file something like this: set FILENAME=%~f1 sed 's/Some Pattern/%FILENAME%/' inputfile (Note: %~f1 - expands %1 to a fully qualified path name - C:\utils\MyFile.txt) I found that the backslashes in %FILENAME% are just escaping the next letter. How can I double them up so that they are escaped? (I have cygwin installed so feel free to use any other *nix commands) Solution Combining Jeremy and Alexandru Nedelcu's suggestions, and using | for the delimiter in the sed command, I have set FILENAME=%~f1 cygpath "s|Some Pattern|%FILENAME%|" >sedcmd.tmp sed -f sedcmd.tmp inputfile del /q sedcmd.tmp A: This will work. It's messy because in BAT files you can't use set var=`cmd` like you can in unix. The fact that echo doesn't understand quotes is also messy, and could lead to trouble if Some Pattern contains shell metacharacters. set FILENAME=%~f1 echo s/Some Pattern/%FILENAME%/ | sed -e "s/\\/\\\\/g" >sedcmd.tmp sed -f sedcmd.tmp inputfile del /q sedcmd.tmp [Edited]: I am surprised that it didn't work for you. I just tested it, and it worked on my machine. I am using sed from http://sourceforge.net/projects/unxutils and using cmd.exe to run those commands in a bat file. A: You could try as an alternative (from the command prompt) ... > cygpath -m c:\some\path c:/some/path As you can guess, it converts backslashes to slashes. A: @Alexandru & Jeremy, Thanks for your help. You both get upvotes @Jeremy Using your method I got the following error: sed: -e expression #1, char 8: unterminated `s' command If you can edit your answer to make it work I'd accept it. (pasting my solution doesn't count) Update: Ok, I tried it with UnixUtils and it worked. (For reference, the UnixUtils I downloaded was dated March 1, 2007, and uses GNU sed version 3.02, my Cygwin install has GNU sed version 4.1.5)
{ "language": "en", "url": "https://stackoverflow.com/questions/35286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Log files in massively distributed systems I do a lot of work in the grid and HPC space, and one of the biggest challenges we have with a system distributed across hundreds (or in some cases thousands) of servers is analysing the log files. Currently log files are written locally to the disk on each blade, but we could also consider publishing logging information using, for example, a UDP Appender and collecting it centrally. Given that the objective is to be able to identify problems in as close to real time as possible, what should we do? A: First, synchronize all clocks in the system using NTP. Second, if you are collecting the logs in a single location (like the UDP appender you mention), make sure the logs have enough information to actually help. I would include at least the server that generated the log, the time it happened, and the message. If there is any sort of transaction id or job id type concept, include that also. Since you mentioned a UDP Appender, I am guessing you are using log4j (or one of its siblings). Log4j has an MDC class that allows extra information to be passed along through a processing thread. It can help collect some of the extra information and pass it along. A: Are you using Apache? If so, you could have a look at mod_log_spread, though you may have too big an infrastructure to make it maintainable. The other option is to look at "broadcasting" or "multicasting" your log messages and having dedicated logging servers subscribing to those feeds and collating them.
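A small sketch of the MDC idea mentioned above (the field names host and jobId are just examples): each processing thread stamps its context once, and the appender's pattern pulls those fields into every line, so centrally collected logs can be correlated across blades.
import java.net.InetAddress;
import org.apache.log4j.Logger;
import org.apache.log4j.MDC;

public class WorkerLogging {
    private static final Logger log = Logger.getLogger(WorkerLogging.class);

    public static void runStep(String jobId) throws Exception {
        MDC.put("host", InetAddress.getLocalHost().getHostName()); // which blade produced the line
        MDC.put("jobId", jobId);                                   // correlate lines across servers
        try {
            log.info("step started");
            // ... do the actual work ...
            log.info("step finished");
        } finally {
            MDC.remove("host");
            MDC.remove("jobId");
        }
    }
}
With a PatternLayout such as %d{ISO8601} %X{host} %X{jobId} %-5p %c - %m%n, every line shipped over the UDP appender carries the server name and job id.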
{ "language": "en", "url": "https://stackoverflow.com/questions/35292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: What is the difference between the | and || or operators? I have always used || (two pipes) in OR expressions, both in C# and PHP. Occasionally I see a single pipe used: |. What is the difference between those two usages? Are there any caveats when using one over the other or are they interchangeable? A: || is the logical OR operator. It sounds like you basically know what that is. It's used in conditional statements such as if, while, etc. condition1 || condition2 Evaluates to true if either condition1 OR condition2 is true. | is the bitwise OR operator. It's used to operate on two numbers. You look at each bit of each number individually and, if one of the bits is 1 in at least one of the numbers, then the resulting bit will be 1 also. Here are a few examples: A = 01010101 B = 10101010 A | B = 11111111 A = 00000001 B = 00010000 A | B = 00010001 A = 10001011 B = 00101100 A | B = 10101111 Hopefully that makes sense. So to answer the last two questions, I wouldn't say there are any caveats besides "know the difference between the two operators." They're not interchangeable because they do two completely different things. A: Just as with & and &&, the doubled operator (||) is the "short-circuit" operator. For example: if(condition1 || condition2 || condition3) If condition1 is true, conditions 2 and 3 will NOT be checked. if(condition1 | condition2 | condition3) This will check conditions 2 and 3, even if 1 is already true. As your conditions can be quite expensive functions, you can get a good performance boost by using the short-circuit forms. There is one big caveat: NullReferences or similar problems. For example: if(class != null && class.someVar < 20) If class is null, the if-statement will stop after class != null evaluates to false. If you only use &, it will try to check class.someVar and you get a nice NullReferenceException. With the OR operator that may not be that much of a trap, as it's unlikely that you trigger something bad, but it's something to keep in mind. No one ever uses the single & or | operators though, unless you have a design where each condition is a function that HAS to be executed. Sounds like a design smell, but sometimes (rarely) it's a clean way to do stuff. The & operator does "run these 3 functions, and if one of them returns false, execute the else block", while | does "only run the else block if none of them return true" - that can be useful, but as said, often it's a design smell. There is a second use of the | and & operators though: bitwise operations. A: Simple example in Java: public class Driver { static int x; static int y; public static void main(String[] args) throws Exception { System.out.println("using double pipe"); if(setX() || setY()) {System.out.println("x = "+x); System.out.println("y = "+y); } System.out.println("using single pipe"); if(setX() | setY()) {System.out.println("x = "+x); System.out.println("y = "+y); } } static boolean setX(){ x=5; return true; } static boolean setY(){ y=5; return true; } } output : using double pipe x = 5 y = 0 using single pipe x = 5 y = 5 A: One is a "bitwise or". 10011b | 01000b => 11011b The other is a logical or. true or false => true A: Good question. These two operators work the same in PHP and C#. | is a bitwise OR. It will compare two values by their bits. E.g. 1101 | 0010 = 1111. This is extremely useful when using bit options. E.g. Read = 01 (0X01) Write = 10 (0X02) Read-Write = 11 (0X03). One useful example would be opening files. 
A simple example would be: File.Open(FileAccess.Read | FileAccess.Write); //Gives read/write access to the file || is a logical OR. This is the way most people think of OR and compares two values based on their truth. E.g. I am going to the store or I will go to the mall. This is the one used most often in code. For example: if(Name == "Admin" || Name == "Developer") { //allow access } //checks if Name equals Admin OR Name equals Developer PHP Resource: http://us3.php.net/language.operators.bitwise C# Resources: http://msdn.microsoft.com/en-us/library/kxszd0kx(VS.71).aspx http://msdn.microsoft.com/en-us/library/6373h346(VS.71).aspx A: & - (Condition 1 & Condition 2): checks both cases even if the first one is false. && - (Condition 1 && Condition 2): doesn't bother to check the second case if the first one is false. The && operator will make your code run faster; in professional code the bare & is rarely used. | - (Condition 1 | Condition 2): checks both cases even if case 1 is true. || - (Condition 1 || Condition 2): doesn't bother to check the second case if the first one is true. The || operator will make your code run faster; in professional code the bare | is rarely used. A: By their mathematical definition, OR and AND are binary operators; they verify the LHS and RHS conditions regardless, similarly to | and &. || and && alter the properties of the OR and AND operators by stopping the evaluation as soon as the LHS alone determines the outcome. A: The | operator performs a bitwise OR of its two operands (meaning both sides must evaluate to false for it to return false) while the || operator will only evaluate the second operand if it needs to. http://msdn.microsoft.com/en-us/library/kxszd0kx(VS.71).aspx http://msdn.microsoft.com/en-us/library/6373h346(VS.71).aspx A: The single pipe "|" is the "bitwise" or and should only be used when you know what you're doing. The double pipe "||" is a logical or, and can be used in logical statements, like "x == 0 || x == 1". Here's an example of what the bitwise or does: if a=0101 and b=0011, then a|b=0111. If you're dealing with a logic system that treats any non-zero value as true, then the bitwise or will act in the same way as the logical or, but its counterpart (bitwise and, "&") will NOT. Also, the bitwise or does not perform short-circuit evaluation. A: A single pipe (|) is the bitwise OR operator. Two pipes (||) is the logical OR operator. They are not interchangeable. A: The single pipe, |, is one of the bitwise operators. From Wikipedia: In the C programming language family, the bitwise OR operator is "|" (pipe). Again, this operator must not be confused with its Boolean "logical or" counterpart, which treats its operands as Boolean values, and is written "||" (two pipes). A: For bitwise | versus logical ||, and bitwise & versus logical &&: in if( a>b | a==0 ), the left side a>b is evaluated first, then a==0 is evaluated, and then the | operation is performed; with ||, if a>b is true the if won't check the RHS at all. Similarly for & and &&: in if(A>0 & B>0) it will evaluate the LHS and then the RHS and then apply the bitwise &, but in if(A>0 && B>0), if A>0 (the LHS) is false it will directly return false.
{ "language": "en", "url": "https://stackoverflow.com/questions/35301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "368" }
Q: FlexUnit component testing patterns: use addAsync or manually initialize? We've been using Flex for about 6 months here at work, and I found that my first batches of FlexUnit tests involving custom components would tend to follow this sort of pattern: import mx.core.Application; import mx.events.FlexEvent; import flexunit.framework.TestCase; public class CustomComponentTest extends TestCase { private var component:CustomComponent; public function testSomeAspect() : void { component = new CustomComponent(); // set some properties... component.addEventListener(FlexEvent.CREATION_COMPLETE, addAsync(verifySomeAspect, 5000)); component.height = 0; component.width = 0; Application.application.addChild(component); } public function verifySomeAspect(event:FlexEvent) : void { // Assert some things about component... } override public function tearDown() : void { try { if (component) { Application.application.removeChild(component); component = null; } } catch (e:Error) { // ok to ignore } } Basically, you need to make sure the component has been fully initialized before you can reliably verify anything about it, and in Flex this happens asynchronously after it has been added to the display list. So you need to setup a callback (using FlexUnit's addAsync function) to be notified when that's happened. Lately i've been just manually calling the methods that the runtime would call for you in the necessary places, so now my tests tend to look more like this: import flexunit.framework.TestCase; public class CustomComponentTest extends TestCase { public function testSomeAspect() : void { var component:CustomComponent = new CustomComponent(); component.initialize(); // set some properties... component.validateProperties(); // Assert some things about component... } This is much easier to follow, but it kinda feels like I'm cheating a little either way. The first case is slamming it into the current Application (which would be the unit test runner shell app), and the latter isn't a "real" environment. I was wondering how other people would handle this sort of situation? A: I see nothing wrong with using the async version. I can agree that the second version is shorter, but I'm not sure that I think it's easier to follow. The test does a lot of things that you wouldn't normally do, whereas the first example is more true to how you would use the component outside the test environment. Also, in the second form you have to make sure that you do exactly what the framework would do, miss one step and your test isn't relevant, and each test must repeat this code. Seems to me it's better to test it in a situation that is as close to the real thing as possible. 
You could have a look at dpUint's sequences, they made component testing a little more declarative: public function testLogin():void { var passThroughData:Object = new Object(); passThroughData.username = "myuser1"; passThroughData.password = "somepsswd"; var sequence:SequenceRunner = new SequenceRunner(this); sequence.addStep(new SequenceSetter(form.usernameTI, {text:passThroughData.username})); sequence.addStep(new SequenceWaiter(form.usernameTI, FlexEvent.VALUE_COMMIT, 100)); sequence.addStep(new SequenceSetter(form.passwordTI, {text:passThroughData.password})); sequence.addStep(new SequenceWaiter(form.passwordTI, FlexEvent.VALUE_COMMIT, 100)); sequence.addStep(new SequenceEventDispatcher(form.loginBtn, new MouseEvent("click", true, false))); sequence.addStep(new SequenceWaiter(form, "loginRequested", 100)); sequence.addAssertHandler(handleLoginEvent, passThroughData); sequence.run(); } (example from the dpUint wiki, see here for more info).
{ "language": "en", "url": "https://stackoverflow.com/questions/35304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you combine multiple result sets in SSRS? What's the best way to combine results sets from disparate data sources in SSRS? In my particular example, I need to write a report that pulls data from SQL Server and combines it with another set of data that comes from a DB2 database. In the end, I need to join these separate data sets together so I have one combined dataset with data from both sources combined on to the same rows. (Like an inner join if both tables were coming from the same SQL DB). I know that you can't do this "out of the box" in SSRS 2005. I'm not excited about having to pull the data into a temporary table on my SQL box because users need to be able to run this report on demand and it seems like having to use SSIS to get the data into the table on demand will be slow and hard to manage with multiple users trying to get at the report simultaneously. Are there any other, more elegant solutions out there? I know that the linked server solution mentioned below would technically work, however, for some reason our DBAs will simply not allow us to use linked servers. I know that you can add two different data sets to a report, however, I need to be able to join them together. Anybody have any ideas on how to best accomplish this? A: We had to do something similar (i.e. inner join 2 data sources from different servers). I believe the best way is to write your own custom Data Extension. It's not very difficult and it would give you the ability to do this and more. A: You could add the DB2 database as a linked server in sql server and just join the two tables in a view/sproc in sql. I've done it, it's not hard and you'll get data in realtime. A: You could create a linked server that would access the database directly or if you didn't want to strain the database during business hours, you could create a job to copy the data you need overnight. A: SSRS 2005 allows you to have multiple datasets for a report. Each dataset can refer to a different datasource, one can come from a SQL DB another can be a ODBC source etc. In the report designer view in Visual Studio go to the "Data" tab and add new data sources pointing to your different databases. Once you are done, when designing the report for each element you have to explicitly specify which dataset the data is coming from. If the above does not work, you can write managed code, refer to http://msdn.microsoft.com/en-us/msdntv/cc540036.aspx for more helpful information and videos. A: You could attach both the MSSQL tables and the DB2 tables to a Jet database and bind your report to the Jet database. I don't know the implications of the single threaded nature of Jet, or how much work would be delegated to the backing stores.
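If the DBAs ever relent on linked servers, the join itself is straightforward. A rough T-SQL sketch, assuming a linked server named DB2LINK has already been configured and using made-up table and column names:
SELECT s.OrderID, s.CustomerID, r.LEGACY_STATUS
FROM dbo.Orders AS s
INNER JOIN DB2LINK.REMOTEDB.MYSCHEMA.ORDER_STATUS AS r
    ON s.OrderID = r.ORDER_ID;

-- or push the remote half down to DB2 and join the result:
SELECT s.OrderID, s.CustomerID, r.LEGACY_STATUS
FROM dbo.Orders AS s
INNER JOIN OPENQUERY(DB2LINK, 'SELECT ORDER_ID, LEGACY_STATUS FROM MYSCHEMA.ORDER_STATUS') AS r
    ON s.OrderID = r.ORDER_ID;
The report then binds to a single dataset (a view or stored procedure wrapping a query like this) instead of trying to join two datasets inside SSRS.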
{ "language": "en", "url": "https://stackoverflow.com/questions/35315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Why does Vista complain about a dead process when I use Cygwin X11 ssh and how do I get it to shut up? When I log into a remote machine using ssh X11 forwarding, Vista pops up a box complaining about a process that died unexpectedly. Once I dismiss the box, everything is fine. So I really don't care if some process died. How do I get Vista to shut up about it? Specifically, the message reads: sh.exe has stopped working So it's not ssh itself that died, but some sub-process. The problem details textbox reads: Problem signature: Problem Event Name: APPCRASH Application Name: sh.exe Application Version: 0.0.0.0 Application Timestamp: 48a031a1 Fault Module Name: comctl32.dll_unloaded Fault Module Version: 0.0.0.0 Fault Module Timestamp: 4549bcb0 Exception Code: c0000005 Exception Offset: 73dc5b17 OS Version: 6.0.6000.2.0.0.768.3 Locale ID: 1033 Additional Information 1: fc4d Additional Information 2: d203a7335117760e7b4d2cf9dc2925f9 Additional Information 3: 1bc1 Additional Information 4: 7bc0b00964c4a1bd48f87b2415df3372 Read our privacy statement: http://go.microsoft.com/fwlink/?linkid=50163&clcid=0x0409 I notice the problem occurs when I use the -Y option to enable X11 forwarding in an X terminal under Vista. The dialog box that pops up doesn't automatically gain focus, so pressing Enter serves no purpose. I have to wait for the box to appear, grab it with the mouse, and dismiss it. Even forcing the error to receive focus would be a step in the right direction. Per DrPizza I have sent an email to the Cygwin mailing list. The trimmed down subject line represents my repeated attempts to bypass an over-aggressive spam filter and highlights the need for something like StackOverflow. A: The problem is, the process didn't just die, it died unexpectedly. Sounds like there's a bug in your SSH client that Vista is pointing out. A: I know this is going to be heresy for a cygwin user, but you could just use PuTTY instead. A: Well, I don't know what the original problem was, but when I update Cygwin recently the error message stopped popping up. My guess it that rebasing was necessary. A: What does unexpectedly mean in this context? Does it mean it core dumped or just exited non-zero? It means it died with an unhandled exception, i.e. it crashed. A: Fault Module Name: comctl32.dll_unloaded Exception Code: c0000005 Something had triggered loading of comctl32.dll, but it was later unloaded. c0000005 means 'access violation'. Probably something tried calling a function in the unloaded dll. I agree with one of the cygwin commentators that it's possibly a bug in some antivirus program or "desktop enhancement" software. Video card companies like to inject their stuff into every process, too. It's easy to use comctl32.dll for things without realizing it, however. Try downloading and installing WinDbg from Microsoft. http://www.microsoft.com/whdc/devtools/debugging/installx86.mspx Set it as the default JIT debugger "windbg.exe -I". Next time this happens you should get the nice debugger window pop up. Type "kv100" to get a stack trace. Look at the dlls listed in the calling path, there's a good chance one of them is the culprit. If you see a dll that's not from Microsoft or Cygwin there, uninstall that application and see if the problem goes away. Otherwise, the Cygwin list might be interested in the stack trace.
{ "language": "en", "url": "https://stackoverflow.com/questions/35317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do you get the last record generated in a recursive CTE? In the code below I am using a recursive CTE(Common Table Expression) in SQL Server 2005 to try and find the top level parent of a basic hierarchical structure. The rule of this hierarchy is that every CustID has a ParentID and if the CustID has no parent then the ParentID = CustID and it is the highest level. DECLARE @LookupID int --Our test value SET @LookupID = 1 WITH cteLevelOne (ParentID, CustID) AS ( SELECT a.ParentID, a.CustID FROM tblCustomer AS a WHERE a.CustID = @LookupID UNION ALL SELECT a.ParentID, a.CustID FROM tblCustomer AS a INNER JOIN cteLevelOne AS c ON a.CustID = c.ParentID WHERE c.CustID <> a.CustomerID ) So if tblCustomer looks like this: ParentID CustID 5 5 1 8 5 4 4 1 The result I get from the code above is: ParentID CustID 4 1 5 4 5 5 What I want is just the last row of that result: ParentID CustID 5 5 How do I just return the last record generated in the CTE (which would be highest level CustID)? Also note that there are multiple unrelated CustID hierarchies in this table so I can't just do a SELECT * FROM tblCustomer WHERE ParentID = CustID. I can't order by ParentID or CustID because the ID number is not related to where it is in the hierarchy. A: If you just want want the highest recursion depth couldn't you do something like this?Then, when you actually query the CTE just look for the row with max(Depth)? Like so: DECLARE @LookupID int --Our test value SET @LookupID = 1; WITH cteLevelOne (ParentID, CustID, Depth) AS ( SELECT a.ParentID, a.CustID, 1 FROM tblCustomer AS a WHERE a.CustID = @LookupID UNION ALL SELECT a.ParentID, a.CustID, c.Depth + 1 FROM tblCustomer AS a INNER JOIN cteLevelOne AS c ON a.CustID = c.ParentID WHERE c.CustID <> a.CustID ) select * from CTELevelone where Depth = (select max(Depth) from CTELevelone) or, adapting what trevor suggests, this could be used with the same CTE: select top 1 * from CTELevelone order by Depth desc I don't think CustomerID was necessarily what you wanted to order by in the case you described, but I wasn't perfectly clear on the question either. A: I'm not certain I fully understand the problem, but just to hack & slash at it you could try: SELECT TOP 1 FROM cteLevelOne ORDER BY CustID DESC That assumes that the CustID is also in order as in the example, and not something like a GUID. A: First the cte will not be finished if any of the parent child are same. As it is a recursive CTE it has to be terminated. Having Parent and cust id same , the loop will not end. Msg 530, Level 16, State 1, Line 15 The statement terminated. The maximum recursion 100 has been exhausted before statement completion.
{ "language": "en", "url": "https://stackoverflow.com/questions/35320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Bypass Forms Authentication auto redirect to login, How to? I'm writing an app using asp.net-mvc deploying to iis6. I'm using forms authentication. Usually when a user tries to access a resource without proper authorization I want them to be redirected to a login page. FormsAuth does this for me easy enough. Problem: Now I have an action being accessed by a console app. Whats the quickest way to have this action respond w/ status 401 instead of redirecting the request to the login page? I want the console app to be able to react to this 401 StatusCode instead of it being transparent. I'd also like to keep the default, redirect unauthorized requests to login page behavior. Note: As a test I added this to my global.asax and it didn't bypass forms auth: protected void Application_AuthenticateRequest(object sender, EventArgs e) { HttpContext.Current.SkipAuthorization = true; } @Dale and Andy I'm using the AuthorizeAttributeFilter provided in MVC preview 4. This is returning an HttpUnauthorizedResult. This result is correctly setting the statusCode to 401. The problem, as i understand it, is that asp.net is intercepting the response (since its taged as a 401) and redirecting to the login page instead of just letting it go through. I want to bypass this interception for certain urls. A: Ok, I worked around this. I made a custom ActionResult (HttpForbiddenResult) and custom ActionFilter (NoFallBackAuthorize). To avoid redirection, HttpForbiddenResult marks responses with status code 403. FormsAuthentication doesn't catch responses with this code so the login redirection is effectively skipped. The NoFallBackAuthorize filter checks to see if the user is authorized much like the, included, Authorize filter. It differs in that it returns HttpForbiddenResult when access is denied. The HttpForbiddenResult is pretty trivial: public class HttpForbiddenResult : ActionResult { public override void ExecuteResult(ControllerContext context) { if (context == null) { throw new ArgumentNullException("context"); } context.HttpContext.Response.StatusCode = 0x193; // 403 } } It doesn't appear to be possible to skip the login page redirection in the FormsAuthenticationModule. A: Might be a kludge (and may not even work) but on your Login page see if Request.QueryString["ReturnUrl"] != null and if so set Response.StatusCode = 401. Bear in mind that you'll still need to get your console app to authenticate somehow. You don't get HTTP basic auth for free: you have to roll your own, but there are plenty of implementations about. A: Did you write your own FormsAuth attribute for the action? If so, in the OnActionExecuting method, you get passed the FilterExecutingContext. You can use this to pass back the 401 code. public class FormsAuth : ActionFilterAttribute { public override void OnActionExecuting(FilterExecutingContext filterContext) { filterContext.HttpContext.Response.StatusCode = 401; filterContext.Cancel = true; } } This should work. I am not sure if you wrote the FormsAuth attribute or if you got it from somewhere else. A: I haven't used the AuthorizeAttribute that comes in Preview 4 yet. I rolled my own, because I have been using the MVC framework since the first CTP. I took a quick look at the attribute in reflector and it is doing what I mentioned above internally, except they use the hex equivalent of 401. I will need to look further up the call, to see where the exception is caught, because more than likely that is where they are doing the redirect. This is the functionality you will need to override. 
I am not sure if you can do it yet, but I will post back when I find it and give you a work around, unless Haacked sees this and posts it himself. A: I did some googling and this is what I came up with: HttpContext.Current.Response.StatusCode = 401; Not sure if it works or not, I haven't tested it. Either way, it's worth a try, right? :)
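For completeness, a minimal sketch of the NoFallBackAuthorize filter described in the accepted workaround, written against the preview-era API used elsewhere in this thread (FilterExecutingContext and the Cancel flag); the member names are assumptions and changed in later MVC releases, so treat this as an illustration rather than the accepted answer's actual code.

// Hypothetical sketch: authorize much like [Authorize], but answer with 403 so
// FormsAuthentication never intercepts the response and redirects to login.
public class NoFallBackAuthorizeAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(FilterExecutingContext filterContext)
    {
        if (!filterContext.HttpContext.User.Identity.IsAuthenticated)
        {
            filterContext.HttpContext.Response.StatusCode = 0x193; // 403 Forbidden
            filterContext.Cancel = true;
        }
    }
}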
{ "language": "en", "url": "https://stackoverflow.com/questions/35322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Compiler Error C2143 when using a struct I'm compiling a simple .c in visual c++ with Compile as C Code (/TC) and i get this compiler error error C2143: syntax error : missing ';' before 'type' on a line that calls for a simple struct struct foo test; same goes for using the typedef of the struct. error C2275: 'FOO' : illegal use of this type as an expression A: I forgot that in C you have to declare all your variables before any code. A: Did you accidentally omit a semicolon on a previous line? If the previous line is an #include, you might have to look elsewhere for the missing semicolon. Edit: If the rest of your code is valid C++, then there probably isn't enough information to determine what the problem is. Perhaps you could post your code to a pastebin so we can see the whole thing. Ideally, in the process of making it smaller to post, it will suddenly start working and you'll then have discovered the problem! A: Because you've already made a typedef for the struct (because you used the 's1' version), you should write: foo test; rather than struct foo test; That will work in both C and C++ A: How is your structure type defined? There are two ways to do it: // This will define a typedef for S1, in both C and in C++ typedef struct { int data; int text; } S1; // This will define a typedef for S2 ONLY in C++, will create error in C. struct S2 { int data; int text; }; A: C2143 basically says that the compiler got a token that it thinks is illegal in the current context. One of the implications of this error is that the actual problem may exist before the line that triggers the compiler error. As Greg said I think we need to see more of your code to diagnose this problem. I'm also not sure why you think the fact that this is valid C++ code is helpful when attempting to figure out why it doesn't compile as C? C++ is (largely) a superset of C so there's any number of reasons why valid C++ code might not be syntactically correct C code, not least that C++ treats structs as classes!
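To make the accepted answer concrete, here is a minimal, hypothetical translation unit showing the C89 rule that /TC enforces: every declaration in a block must come before the first statement, otherwise the struct declaration triggers C2143/C2275.

#include <stdio.h>

struct foo { int data; };

int main(void)
{
    /* Under /TC, moving this declaration below the first printf would
       produce C2143 ("missing ';' before 'type'") or C2275. */
    struct foo test;

    printf("declare first, then execute statements\n");
    test.data = 1;
    printf("%d\n", test.data);
    return 0;
}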
{ "language": "en", "url": "https://stackoverflow.com/questions/35333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Russell's Paradox Let X be the set of all sets that do not contain themselves. Is X a member of X? A: The question is ill-posed in the standard ZFC (Zermelo-Fraenkel + axiom of Choice) set theory because the object thus defined is not a set. Since (again, assuming standard ZFC) your class {x : x\not\in x} is not a set, the answer becomes no, it's not an element of itself (even as a class) since only sets can be elements of classes or sets. By the way, as soon as you agree to the axiom of foundation, no set can be an element of itself. Of course the nice thing about math is you can choose whichever axioms you want :) but believing in paradoxes is just weird. A: The most elegant proof I've ever seen resembles Russell's paradox closely. Theorem (Cantor, I suppose). Let X be a set, and 2^X the set of its subsets. Then card(X) < card(2^X). Proof. Surely card(X) <= card(2^X), since there is a trivial bijection between X and the singletons in 2^X. We must prove that card(X) != card(2^X). Suppose there is a bijection between X and 2^X. Then each xk in X is mapped to a set Ak in 2^X. * *x1 ---> A1 *x2 ---> A2 *... *xk ---> Ak *... For each xk the chances are: either xk belongs to Ak, or it does not. Let M be the set of all those xk that do not belong to their corresponding set Ak. M is a subset of X, thus there must exist an element m of X which is mapped to M by the bijection. Does m belong to M? If it does, then it does not, for M is the set of those x that do not belong to the set they're mapped to. If it does not, then it does, for M contains all such x's. This contradiction stems from the assumption that a bijection exists. Thus a bijection cannot exist, the two cardinalities are different, and the theorem is proved. A: In ZFC, either the axiom of foundation [as mentioned] or the axiom (scheme) of comprehension will prohibit this. The first, for obvious reasons; the second, since it basically says that for given z and first-order property P, you can construct { x ∈ z : P(x) }, but to generate the Russell set, you would need z = V (the class of all sets), which is not a set (i.e. cannot be generated from any of the given axioms). In New Foundations (NF), "x βˆ‰ x" is not a stratified formula, and so again we cannot define the Russell set. Somewhat amusingly, however, V is a set in NF. In von Neumann--Bernays--GΓΆdel set theory (NBG), the class R = { x : x is a set and x βˆ‰ x } is definable. We then ask whether R ∈ R; if so, then also R βˆ‰ R, giving a contradiction. Thus we must have R βˆ‰ R. But there is no contradiction here, since for any given class A, A βˆ‰ R implies either A ∈ A or A is a proper class. Since R βˆ‰ R, we must simply have that R is a proper class. Of course, the class R = { x : x βˆ‰ x }, without the restriction, is simply not definable in NBG. Also of note is that the above procedure is formally constructable as a proof in NBG, whereas in ZFC one has to resort to meta-reasoning.
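For readers who prefer symbols to prose, the two diagonal arguments discussed above compress to a couple of lines of standard set-builder notation:

R = \{\, x : x \notin x \,\}, \qquad R \in R \iff R \notin R

M = \{\, x \in X : x \notin f(x) \,\} \text{ for a supposed surjection } f : X \to 2^X, \qquad m \in M \iff m \notin f(m) = M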
{ "language": "en", "url": "https://stackoverflow.com/questions/35339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-8" }
Q: Debug visualizer - the visualizer dll can't be loaded I am learning to write a debug visualizer in vs2008 C#. But keep getting the error saying that MyDebugVisualizer dll can't be loaded when I am in debug mode and click the magnifying glass icon. My app project is referencing the visualizer project. Before the type definition I have: [DebuggerVisualizer(typeof(MyVisualizer))] [Serializable] I tried putting the visualizer dll in: C:\Program Files\Microsoft Visual Studio 9.0\Common7\Packages\Debugger\Visualizers or C:\Documents and Settings\zlee\My Documents\Visual Studio 2008\Visualizers Is there something else missing? A: Is it signed? Also are you trying to use the debug visualizer in the same host process as the application you are trying to debug? Try compiling the visualizer and then just reference it by it's library and file location not the project. A: Have you tried using the fusion log tool to determine why the DLL is not loading? Fusion Log View
{ "language": "en", "url": "https://stackoverflow.com/questions/35355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Minimalistic Database Administration I am a developer. An architect on good days. Somehow I find myself also being the DBA for my small company. My background is fair in the DB arts but I have never been a full-fledged DBA. My question is what do I have to do to ensure a reliable and reasonably functional database environment with as little actual effort as possible? I am sure that I need to make sure that backups are being performed and that is being done. That is an easy one. What else should I be doing on a consistent basis? A: I've been there. I used to have a job where I wrote code, did all the infrastructure stuff, wore the DBA hat, did user support, fixed the electric stapler when it jammed, and whatever else came up that might be remotely associated with IT. It was great! I learned a little about everything. As far as the care and feeding of your database box, I'd recommend that you do the following: * *Perform regular full backups. *Perform regular transaction log backups. *Monitor your backup jobs. There's a bunch of utilities out on the market that are relatively cheap that can automate this for you. In a small shop you're often too busy to remember to check on them daily. *Test your backups. Do a drill. Restore an old copy of your most important databases. Prove to yourself that your backups are working and that you know how to restore them properly. You'd be surprised how many people only think about this during their first real disaster. *Store backups off-site. With all the online backup providers out there today, there's not much excuse for not having an offsite backup. *Limit sa access to your boxes. *If your database platform supports it, use only role based security. Resist the temptation to have one-off user specific security. The basic idea here is that if you restrict who has access to the box, you'll have fewer problems. Secondly, if your backups are solid, there are few things that come up that you won't be able to deal with effectively. A: Who else is involved in the database? Are you the only person making schema changes (creating new objects, releasing new stored procedures, permissioning new users)? * *Make sure that the number of users doing anything that could impact performance is reduced to as close to zero as possible, ideally including you. *Make sure that you're testing your backups - ideally run a DEV box that is recreating the production environment periodically, 1. a DEV box is a good idea, 2. a backup is only useful if you can restore from it. *Create groups for the various apps that connect to your database, so when a new user comes along you don't guess what permissions they need, just add them to the group, meanwhile permission the database objects to only the groups that need them *Use indices, primary keys, foreign keys, constraints, stats and whatever other tools your database supports. Normalise. *Optimise the most common code against your box - bad stored procedures/data access code will kill you. A: I would suggest: * *A script to quickly restore the latest backup of a database, in case it gets corrupted *What kind of backups are you doing? Full backups each day, or incremental every hour, etc? *Some scripts to create new users and grant them basic access. However, the number one suggestion is to limit as much as possible the power other users have, this will greatly reduce the chance of stuff getting badly messed up. Servers that have everyone as an sa tend to get screwed up quicker than servers that are locked down.
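Since nearly every answer starts with backups, here is a hedged T-SQL sketch of the full/log backup routine plus the restore drill; the database name, logical file names and paths are all made up, and in practice you would schedule the backups through SQL Server Agent or one of the inexpensive tools mentioned above.

-- Hypothetical names and paths; adjust for your environment.
BACKUP DATABASE MyAppDb
TO DISK = N'D:\Backups\MyAppDb_full.bak'
WITH INIT, CHECKSUM;

BACKUP LOG MyAppDb
TO DISK = N'D:\Backups\MyAppDb_log.trn'
WITH CHECKSUM;

-- The "prove you can actually restore it" drill, onto a copy so production is untouched.
RESTORE DATABASE MyAppDb_DrillCopy
FROM DISK = N'D:\Backups\MyAppDb_full.bak'
WITH MOVE 'MyAppDb' TO N'D:\Drill\MyAppDb_DrillCopy.mdf',
     MOVE 'MyAppDb_log' TO N'D:\Drill\MyAppDb_DrillCopy.ldf',
     RECOVERY;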
{ "language": "en", "url": "https://stackoverflow.com/questions/35357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What are the main performance differences between varchar and nvarchar SQL Server data types? I'm working on a database for a small web app at my school using SQL Server 2005. I see a couple of schools of thought on the issue of varchar vs nvarchar: * *Use varchar unless you deal with a lot of internationalized data, then use nvarchar. *Just use nvarchar for everything. I'm beginning to see the merits of view 2. I know that nvarchar does take up twice as much space, but that isn't necessarily a huge deal since this is only going to store data for a few hundred students. To me it seems like it would be easiest not to worry about it and just allow everything to use nvarchar. Or is there something I'm missing? A: Generally speaking; Start out with the most expensive datatype that has the least constraints. Put it in production. If performance starts to be an issue, find out what's actually being stored in those nvarchar columns. Is there any characters in there that wouldn't fit into varchar? If not, switch to varchar. Don't try to pre-optimize before you know where the pain is. My guess is that the choice between nvarchar/varchar is not what's going to slow down your application in the foreseable future. There will be other parts of the application where performance tuning will give you much more bang for the bucks. A: For that last few years all of our projects have used NVARCHAR for everything, since all of these projects are multilingual. Imported data from external sources (e.g. an ASCII file, etc.) is up-converted to Unicode before being inserted into the database. I've yet to encounter any performance-related issues from the larger indexes, etc. The indexes do use more memory, but memory is cheap. Whether you use stored procedures or construct SQL on the fly ensure that all string constants are prefixed with N (e.g. SET @foo = N'Hello world.';) so the constant is also Unicode. This avoids any string type conversion at runtime. YMMV. A: I can speak from experience on this, beware of nvarchar. Unless you absolutely require it this data field type destroys performance on larger database. I inherited a database that was hurting in terms of performance and space. We were able to reduce a 30GB database in size by 70%! There were some other modifications made to help with performance but I'm sure the varchar's helped out significantly with that as well. If your database has the potential for growing tables to a million + records stay away from nvarchar at all costs. A: Be consistent! JOIN-ing a VARCHAR to NVARCHAR has a big performance hit. A: nvarchar is going to have significant overhead in memory, storage, working set and indexing, so if the specs dictate that it really will never be necessary, don't bother. I would not have a hard and fast "always nvarchar" rule because it can be a complete waste in many situations - particularly ETL from ASCII/EBCDIC or identifiers and code columns which are often keys and foreign keys. On the other hand, there are plenty of cases of columns, where I would be sure to ask this question early and if I didn't get a hard and fast answer immediately, I would make the column nvarchar. A: I hesitate to add yet another answer here as there are already quite a few, but a few points need to be made that have either not been made or not been made clearly. First: Do not always use NVARCHAR. That is a very dangerous, and often costly, attitude / approach. 
And it is no better to say "Never use cursors" since they are sometimes the most efficient means of solving a particular problem, and the common work-around of doing a WHILE loop will almost always be slower than a properly done Cursor. The only time you should use the term "always" is when advising to "always do what is best for the situation". Granted that is often difficult to determine, especially when trying to balance short-term gains in development time (manager: "we need this feature -- that you didn't know about until just now -- a week ago!") with long-term maintenance costs (manager who initially pressured team to complete a 3-month project in a 3-week sprint: "why are we having these performance problems? How could we have possibly done X which has no flexibility? We can't afford a sprint or two to fix this. What can we get done in a week so we can get back to our priority items? And we definitely need to spend more time in design so this doesn't keep happening!"). Second: @gbn's answer touches on some very important points to consider when making certain data modeling decisions when the path isn't 100% clear. But there is even more to consider: * *size of transaction log files *time it takes to replicate (if using replication) *time it takes to ETL (if ETLing) *time it takes to ship logs to a remote system and restore (if using Log Shipping) *size of backups *length of time it takes to complete the backup *length of time it takes to do a restore (this might be important some day ;-) *size needed for tempdb *performance of triggers (for inserted and deleted tables that are stored in tempdb) *performance of row versioning (if using SNAPSHOT ISOLATION, since the version store is in tempdb) *ability to get new disk space when the CFO says that they just spent $1 million on a SAN last year and so they will not authorize another $250k for additional storage *length of time it takes to do INSERT and UPDATE operations *length of time it takes to do index maintenance *etc, etc, etc. Wasting space has a huge cascade effect on the entire system. I wrote an article going into explicit detail on this topic: Disk Is Cheap! ORLY? (free registration required; sorry I don't control that policy). Third: While some answers are incorrectly focusing on the "this is a small app" aspect, and some are correctly suggesting to "use what is appropriate", none of the answers have provided real guidance to the O.P. An important detail mentioned in the Question is that this is a web page for their school. Great! So we can suggest that: * *Fields for Student and/or Faculty names should probably be NVARCHAR since, over time, it is only getting more likely that names from other cultures will be showing up in those places. *But for street address and city names? The purpose of the app was not stated (it would have been helpful) but assuming the address records, if any, pertain to just to a particular geographical region (i.e. a single language / culture), then use VARCHAR with the appropriate Code Page (which is determined from the Collation of the field). *If storing State and/or Country ISO codes (no need to store INT / TINYINT since ISO codes are fixed length, human readable, and well, standard :) use CHAR(2) for two letter codes and CHAR(3) if using 3 letter codes. And consider using a binary Collation such as Latin1_General_100_BIN2. *If storing postal codes (i.e. zip codes), use VARCHAR since it is an international standard to never use any letter outside of A-Z. 
And yes, still use VARCHAR even if only storing US zip codes and not INT since zip codes are not numbers, they are strings, and some of them have a leading "0". And consider using a binary Collation such as Latin1_General_100_BIN2. *If storing email addresses and/or URLs, use NVARCHAR since both of those can now contain Unicode characters. *and so on.... Fourth: Now that you have NVARCHAR data taking up twice as much space than it needs to for data that fits nicely into VARCHAR ("fits nicely" = doesn't turn into "?") and somehow, as if by magic, the application did grow and now there are millions of records in at least one of these fields where most rows are standard ASCII but some contain Unicode characters so you have to keep NVARCHAR, consider the following: * *If you are using SQL Server 2008 - 2016 RTM and are on Enterprise Edition, OR if using SQL Server 2016 SP1 (which made Data Compression available in all editions) or newer, then you can enable Data Compression. Data Compression can (but won't "always") compress Unicode data in NCHAR and NVARCHAR fields. The determining factors are: *NCHAR(1 - 4000) and NVARCHAR(1 - 4000) use the Standard Compression Scheme for Unicode, but only starting in SQL Server 2008 R2, AND only for IN ROW data, not OVERFLOW! This appears to be better than the regular ROW / PAGE compression algorithm. *NVARCHAR(MAX) and XML (and I guess also VARBINARY(MAX), TEXT, and NTEXT) data that is IN ROW (not off row in LOB or OVERFLOW pages) can at least be PAGE compressed, but not ROW compressed. Of course, PAGE compression depends on size of the in-row value: I tested with VARCHAR(MAX) and saw that 6000 character/byte rows would not compress, but 4000 character/byte rows did. *Any OFF ROW data, LOB or OVERLOW = No Compression For You! *If using SQL Server 2005, or 2008 - 2016 RTM and not on Enterprise Edition, you can have two fields: one VARCHAR and one NVARCHAR. For example, let's say you are storing URLs which are mostly all base ASCII characters (values 0 - 127) and hence fit into VARCHAR, but sometimes have Unicode characters. Your schema can include the following 3 fields: ... URLa VARCHAR(2048) NULL, URLu NVARCHAR(2048) NULL, URL AS (ISNULL(CONVERT(NVARCHAR([URLa])), [URLu])), CONSTRAINT [CK_TableName_OneUrlMax] CHECK ( ([URLa] IS NOT NULL OR [URLu] IS NOT NULL) AND ([URLa] IS NULL OR [URLu] IS NULL)) ); In this model you only SELECT from the [URL] computed column. For inserting and updating, you determine which field to use by seeing if converting alters the incoming value, which has to be of NVARCHAR type: INSERT INTO TableName (..., URLa, URLu) VALUES (..., IIF (CONVERT(VARCHAR(2048), @URL) = @URL, @URL, NULL), IIF (CONVERT(VARCHAR(2048), @URL) <> @URL, NULL, @URL) ); *You can GZIP incoming values into VARBINARY(MAX) and then unzip on the way out: * *For SQL Server 2005 - 2014: you can use SQLCLR. SQL# (a SQLCLR library that I wrote) comes with Util_GZip and Util_GUnzip in the Free version *For SQL Server 2016 and newer: you can use the built-in COMPRESS and DECOMPRESS functions, which are also GZip. *If using SQL Server 2017 or newer, you can look into making the table a Clustered Columnstore Index. *While this is not a viable option yet, SQL Server 2019 introduces native support for UTF-8 in VARCHAR / CHAR datatypes. There are currently too many bugs with it for it to be used, but if they are fixed, then this is an option for some scenarios. 
Please see my post, "Native UTF-8 Support in SQL Server 2019: Savior or False Prophet?", for a detailed analysis of this new feature. A: I deal with this question at work often: * *FTP feeds of inventory and pricing - Item descriptions and other text were in nvarchar when varchar worked fine. Converting these to varchar reduced file size almost in half and really helped with uploads. *The above scenario worked fine until someone put a special character in the item description (maybe trademark, can't remember) I still do not use nvarchar every time over varchar. If there is any doubt or potential for special characters, I use nvarchar. I find I use varchar mostly when I am in 100% control of what is populating the field. A: Why, in all this discussion, has there been no mention of UTF-8? Being able to store the full unicode span of characters does not mean one has to always allocate two-bytes-per-character (or "code point" to use the UNICODE term). All of ASCII is UTF-8. Does SQL Server check for VARCHAR() fields that the text is strict ASCII (i.e. top byte bit zero)? I would hope not. If then you want to store unicode and want compatibility with older ASCII-only applications, I would think using VARCHAR() and UTF-8 would be the magic bullet: It only uses more space when it needs to. For those of you unfamiliar with UTF-8, might I recommend a primer. A: Disk space is not the issue... but memory and performance will be. Double the page reads, double index size, strange LIKE and = constant behaviour etc Do you need to store Chinese etc script? Yes or no... And from MS BOL "Storage and Performance Effects of Unicode" Edit: Recent SO question highlighting how bad nvarchar performance can be... SQL Server uses high CPU when searching inside nvarchar strings A: For your application, nvarchar is fine because the database size is small. Saying "always use nvarchar" is a vast oversimplification. If you're not required to store things like Kanji or other crazy characters, use VARCHAR, it'll use a lot less space. My predecessor at my current job designed something using NVARCHAR when it wasn't needed. We recently switched it to VARCHAR and saved 15 GB on just that table (it was highly written to). Furthermore, if you then have an index on that table and you want to include that column or make a composite index, you've just made your index file size larger. Just be thoughtful in your decision; in SQL development and data definitions there seems to rarely be a "default answer" (other than avoid cursors at all costs, of course). A: There'll be exceptional instances when you'll want to deliberately restrict the data type to ensure it doesn't contain characters from a certain set. For example, I had a scenario where I needed to store the domain name in a database. Internationalisation for domain names wasn't reliable at the time so it was better to restrict the input at the base level, and help to avoid any potential issues. A: Always use nvarchar. You may never need the double-byte characters for most applications. However, if you need to support double-byte languages and you only have single-byte support in your database schema it's really expensive to go back and modify throughout your application. The cost of migrating one application from varchar to nvarchar will be much more than the little bit of extra disk space you'll use in most applications. 
A: Since your application is small, there is essentially no appreciable cost increase to using nvarchar over varchar, and you save yourself potential headaches down the road if you have a need to store unicode data. A: If you are using NVARCHAR just because a system stored procedure requires it, the most frequent occurrence being inexplicably sp_executesql, and your dynamic SQL is very long, you would be better off from performance perspective doing all string manipulations (concatenation, replacement etc.) in VARCHAR then converting the end result to NVARCHAR and feeding it into the proc parameter. So no, do not always use NVARCHAR!
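One of the answers above warns that joining a VARCHAR column to an NVARCHAR column carries a big performance hit; a small, hypothetical repro of why: nvarchar has higher data-type precedence, so the varchar side of the predicate is implicitly converted, which on large tables typically turns an index seek into a scan.

-- Hypothetical tables; the point is the CONVERT_IMPLICIT you will see in the query plan.
CREATE TABLE dbo.CustomersA (CustomerCode varchar(20) NOT NULL PRIMARY KEY);
CREATE TABLE dbo.CustomersB (CustomerCode nvarchar(20) NOT NULL PRIMARY KEY);

SELECT a.CustomerCode
FROM dbo.CustomersA AS a
     JOIN dbo.CustomersB AS b
       ON a.CustomerCode = b.CustomerCode; -- the varchar side gets implicitly converted

-- Keeping both columns the same type (the "be consistent" advice above) avoids the problem.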
{ "language": "en", "url": "https://stackoverflow.com/questions/35366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "247" }
Q: How can I change the way my Drupal theme displays the front page I am trying to build an website for my college's magazine. I used the "views" module to show a block of static content I created on the front page. My question is: how can I edit the theme's css so it changes the way that block of static content is displayed? For reference, here's the link to the site (in portuguese, and with almost zero content for now). A: I can't access your site at the moment, so I'm basing this on fairly limited information. But if the home page is static content, the views module might not be appropriate. It might be better to create a page (In the menu, go to: Create content > page), make a note of the page's url, and then change the default home page to that url (Administer > Site Configuration > Site information, 'Default front page' is at the bottom). Although I might be misunderstanding what you mean by 'static content'. But however you're creating the front page, don't edit the css in the theme - it'll get overwritten next time you upgrade. Instead you need to create a sub-theme. As an example, if you want to subtheme Garland, in drupal 6. You first need to setup a directory for your themes. Go to sites/all/ in your drupal installation, and create a subdirectory called themes if it doesn't already exist. Go into that directory, and create a directory for your subtheme, say mytheme (i.e. sites/all/themes/mytheme/). Then use your text editor to create a file called mytheme.info in that directory, with the contents: name = My Theme version = 0.1 core = 6.x base theme = garland stylesheets[all][] = mytheme.css And then use your text editor to create a file called mytheme.css in that directory, and put the extra CSS in there. For more information, there's the druapl documentation on .info files and style sheets. Although, you might want to buy a book, as the online documentation isn't great. A: The main css file that drives your content is the styles.css file located in your currently selected theme. In your case that means that most of your site styling is driven by this file: /aroda/roda/themes/garland/style.css with basic coloring effects handled by this file: /aroda/roda/files/color/garland-d3985506/style.css You're currently using Garland, the default Drupal theme included with the core download, so for best practices you shouldn't edit the included style.css file directly. Instead, you should, as Daniel James said, create a subdirectory in /sites/all called "themes". If you're using Drupal 6, I'd follow Daniel James directions from there. If you're using Drupal 5, I'd go ahead and copy the garland directory into the themes directory and rename it for something specific to your site (aroda_v1) so you would have something like /sites/all/themes/aroda_v1 which would contain styles.css. At that point, you can edit the styles.css file directly to make any changes you see fit. Hope that helps! A: It looks like most of your CSS info is in some *.css files. There is also some inline Style info on the page. Your style for the static info comes from the in-line stuff. I am not sure how Drupal generates the page but the place to start looking is for any properties for "ultima-edicao". That is what the surrounding DIV is called.
{ "language": "en", "url": "https://stackoverflow.com/questions/35372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Best practices for signing .NET assemblies? I have a solution consisting of five projects, each of which compile to separate assemblies. Right now I'm code-signing them, but I'm pretty sure I'm doing it wrong. What's the best practice here? * *Sign each with a different key; make sure the passwords are different *Sign each with a different key; use the same password if you want *Sign each with the same key *Something else entirely Basically I'm not quite sure what "signing" does to them, or what the best practices are here, so a more generally discussion would be good. All I really know is that FxCop yelled at me, and it was easy to fix by clicking the "Sign this assembly" checkbox and generating a .pfx file using Visual Studio (2008). A: The most obvious difference between signed and unsigned assemblies is in a ClickOnce application. If you don't sign it, then users will get a scary "Unknown Publisher" warning dialog the first time they run your application. If you've signed it with a certificate from a trusted authority, then they see a dialog that's less scary. As far as I know, signing with a certificate you generate yourself doesn't affect the "Unknown Publisher" warning. Instant SSL from Comodo has examples of the dialogs. There are some subtler differences. You have to sign an assembly before it can be installed in the global assembly cache (GAC) where it can be shared by multiple applications. Signing is integral to code access security (CAS), but I haven't found anyone who could get CAS working. I'm pretty sure that both the GAC and CAS work fine with certificates you generate yourself. A: If your only objective is to stop FxCop from yelling at you, then you have found the best practice. The best practice for signing your assemblies is something that is completely dependent on your objectives and needs. We would need more information like your intended deployment: * *For personal use *For use on corporate network PC's as a client application *Running on a web server *Running in SQL Server *Downloaded over the internet *Sold on a CD in shrink wrap *Uploaded straight into a cybernetic brain *Etc. Generally you use code signing to verify that the Assemblies came from a specific trusted source and have not been modified. So each with the same key is fine. Now how that trust and identity is determined is another story. UPDATE: How this benefits your end users when you are deploying over the web is if you have obtained a software signing certificate from a certificate authority. Then when they download your assemblies they can verify they came from Domenic's Software Emporium, and they haven't been modified or corrupted along the way. You will also want to sign the installer when it is downloaded. This prevents the warning that some browsers display that it has been obtained from an unknown source. Note, you will pay for a software signing certificate. What you get is the certificate authority become the trusted 3rd party who verifies you are who you say you are. This works because of a web of trust that traces its way back to a root certificate that is installed in their operating system. There are a few certificate authorities to choose from, but you will want to make sure they are supported by the root certificates on the target operating system. A: It helps because the executable is expecting a strongly named assembly. It stops anyone maliciously substituting in another assembly for one of yours. Also the user might grant an assembly CAS permissions based on the strong name. 
I don't think you should be distributing the .pfx file; keep that safe for re-signing the assembly. A: It is important to keep your PFX file secret as it contains the private key. If that key is made available to others then anyone can sign assemblies or programs that masquerade as you. To associate your name with your assemblies (in the eyes of Windows) you'll need to get a digital certificate (the portion of the PFX file containing your name) signed by a trusted authority. Actually you'll get a new certificate, but with the same information. You'll have to pay for this certificate (probably annually), but the certificate authority will effectively vouch for your existence (after you've faxed them copies of your passport or driver's permit and a domestic bill). A: Signing is used to uniquely identify an assembly. More details are in How to: Sign an Assembly (Visual Studio). In terms of best practice, it's fine to use the same key as long as the assemblies have different names.
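For the mechanics of sharing one key across all five projects, a hedged sketch (file and project names are arbitrary): generate a single key pair with the Strong Name tool, point each project's Signing tab (or the older AssemblyKeyFile attribute) at that same file, and verify after a build.

rem One key pair shared by the whole solution; keep the file private if that matters to you.
sn -k MyCompany.snk

rem After building, confirm each assembly's strong name is valid.
sn -v Project1.dll
sn -v Project2.dll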
{ "language": "en", "url": "https://stackoverflow.com/questions/35373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46" }
Q: Internet Access in Ubuntu on VirtualBox I recently installed Ubuntu on a VirtualBox VM it installed just fine (much easier than on VirtualPC). However I'm unable to get internet access from the guest OS (ie. Ubuntu). Can anyone give me any pointers on how I might enable this? The Host OS is Windows Vista and the hardware is an IBM Lenovo. A: How did you configure networking when you created the guest? The easiest way is to set the network adapter to NAT, if you don't need to access the vm from another pc. A: it could be a problem with your specific network adapter. I have a Dell 15R and there are no working drivers for ubuntu or ubuntu server; I even tried compiling wireless drivers myself, but to no avail. However, in virtualbox, I was able to get wireless working by using the default configuration. It automatically bridged my internal wireless adapter and hence used my native OS's wireless connection for wireless. If you are trying to get a separate wireless connection from within ubuntu in virtualbox, then it would take more configuring. If so, let me know, if not, I will not bother typing up instructions to something you are not looking to do, as it is quite complicated in some instances. p.s. you should be using Windows 7 if you have any technical inclination. Do you live under a rock? No offense intended. A: I could get away with the following solution (works with Ubuntu 14 guest VM on Windows 7 host or Ubuntu 9.10 Casper guest VM on host Windows XP x86): * *Go to network connections -> Virtual Box Host-Only Network -> Select "Properties" *Check VirtualBox Bridged Networking Driver *Come to VirtualBox Manager, choose the network adapter as Bridged Adapter and Name to the device in Step #1. *Restart the VM. A: I had the same problem. Solved by sharing internet connection (on the hosting OS). Network Connection Properties -> advanced -> Allow other users to connect... A: I had a similar issue in windows 7 + ubuntu 12.04 as guest. I resolved by * *open 'network and sharing center' in windows *right click 'nw-bridge' -> 'properties' *Select "virtual box host only network" for the option "select adapters you want to use to connect computers on your local network" *go to virtual box.. select the network type as NAT.
{ "language": "en", "url": "https://stackoverflow.com/questions/35375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63" }
Q: Print Odd-Even numbers using signals I need to to print natural nos. 1,2,...n such that the parent process prints all odd numbers and the child process prints all even numbers, and all of this needs to be done using POSIX signals. How would I go about accomplishing this? The output should be: Parent : 1 Child : 2 Parent : 3 ... A: It would probably be more beneficial for you provide what you have so far and explain what doesn't work as expected but here is what I came up with: #include <stdio.h> #include <sys/types.h> #include <unistd.h> #include <signal.h> #include <stdlib.h> #define READY_SIGNAL SIGUSR1 /* The ready flag is set when READY_SIGNAL is received. * It is needed so that when we wake up from sigsuspend * we know whether or not the signal received was READY_SIGNAL. */ volatile sig_atomic_t ready; void make_ready(int i) { ready = 1; } int main (int argc, char *argv[]) { pid_t cpid, ppid; /* pids of the child and parent */ /* Signal masks for sigprocmask and sigsuspend */ sigset_t block_mask, wait_mask; unsigned long c = 1; /* The counter */ unsigned long n = 100; /* The default max count value */ struct sigaction act; /* Override the default max count if provided */ if (argv[1]) n = strtoul(argv[1], NULL, 10); /* Prepare signal masks */ sigemptyset(&wait_mask); sigemptyset(&block_mask); sigaddset(&block_mask, READY_SIGNAL); /* Set the signal mask for the parent to ignore READY_SIGNAL until * we are ready to receive it, the mask will be inherited by the child, * needed to avoid race conditions */ sigprocmask(SIG_BLOCK, &block_mask, NULL); /* Register the signal handler, will be inherited by the child */ act.sa_flags = 0; act.sa_handler = make_ready; sigemptyset(&act.sa_mask); sigaction(READY_SIGNAL, &act, NULL); /* Get the parent's process id, needed for the child to send signals * to the parent process, could alternatively use getppid in the child */ ppid = getpid(); /* Call fork, storing the child's process id needed for the parent to * send signals to the child */ cpid = fork(); if (cpid < 0) { perror("Fork failed"); exit(EXIT_FAILURE); } if (cpid == 0) { /* Child */ c = 2; /* Child's first number will always be 2 */ if (c > n) exit(0); /* If c > n we have nothing to do */ do { /* Suspend until we receive READY_SIGNAL */ while (!ready) sigsuspend(&wait_mask); /* Print out number, flush for proper output sequencing when output is not a terminal. */ printf("Child: %lu\n", c); fflush(stdout); ready = 0; /* Reset ready flag */ c += 2; /* Increment counter */ /* Wake up parent process */ kill(ppid, READY_SIGNAL); } while (c <= n); } else { /* Parent */ for (;;) { /* Print out number, flush for proper output sequencing when output is not a terminal. */ printf("Parent: %lu\n", c); fflush(stdout); c += 2; /* Increment counter */ kill(cpid, READY_SIGNAL); /* Wake up child process */ if (c > n) break; /* Don't go back to sleep if we are done */ ready = 0; /* Reset ready flag */ /* Suspend until we receive READY_SIGNAL */ while (!ready) sigsuspend(&wait_mask); }; wait4(cpid, NULL, 0); /* Don't exist before child finishes */ } return 0; } This passes these basic tests: ./print_with_signals 100000|sort -n -k 2 -c && echo "Success" ./print_with_signals 100001|sort -n -k 2 -c && echo "Success" A: I think this homework assignment was given to you to make you try a lot of solution and conclude by yourself that signals are not a good synchronisation technique. This an invaluable lesson, remember it well and use semaphores from now on! :)
{ "language": "en", "url": "https://stackoverflow.com/questions/35380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Encapsulate multiple properties at once using Resharper 4.0 When using Resharper to encapsulate a class's properties, is there a way to get it to do more than one property at a time? A: You might or might not already know this (R# does suffer from a lack of discoverability, unless you get the one-page key-shortcut page printed out), but ALT-INS opens a box which can at least mass-generate properties for fields. Not sure if that's any use - it's not the same as a retrospective encapsulation. A: I don't think there such a feature out of the box. However, you could write a RS plugin that does this. But this would be another question...
{ "language": "en", "url": "https://stackoverflow.com/questions/35402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Trouble with parallel make not always starting another job when one finishes I'm working on a system with four logical CPUs (two dual-core CPUs if it matters). I'm using make to parallelize twelve trivially parallelizable tasks and doing it from cron. The invocation looks like: make -k -j 4 -l 3.99 -C [dir] [12 targets] The trouble I'm running into is that sometimes one job will finish but the next one won't start up even though it shouldn't be stopped by the load average limiter. Each target takes about four hours to complete and I'm wondering if this might be part of the problem. Edit: Sometimes a target does fail but I use the -k option to have the rest of the make still run. I haven't noticed any correlation with jobs failing and the next job not starting. A: I'd drop the '-l'. If all you plan to run on the system is this build, I think -j 4 does what you want. Based on my memory, if you have anything else running (crond?), that can push the load average over 4. GNU make ref A: Does make think one of the targets is failing? If so, it will stop the make after the running jobs finish. You can use -k to tell it to continue even if an error occurs. A: @BCS I'm 99.9% sure that the -l isn't causing the problem because I can watch the load average on the machine and it drops down to about three and sometimes as low as one (!) without starting the next job.
{ "language": "en", "url": "https://stackoverflow.com/questions/35407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I oversee my MySQL replication server? I've had a tough time setting up my replication server. Is there any program (OS X, Windows, Linux, or PHP no problem) that lets me monitor and resolve replication issues? (btw, for those following, I've been on this issue here, here, here and here) My production database is several megs in size and growing. Every time the database replication stops and the databases inevitably begin to slide out of sync I cringe. My last resync from dump took almost 4 hours roundtrip! As always, even after sync, I run into this kind of show-stopping error: Error 'Duplicate entry '252440' for key 1' on query. I would love it if there was some way to closely monitor what's going on and perhaps let the software deal with it. I'm even all ears for service companies which may help me monitor my data better. Or an alternate way to mirror altogether. Edit: going through my previous questions I found this which helps tremendously. I'm still all ears on the monitoring solution. A: To monitor the servers we use the free tools from Maatkit ... simple, yet efficient. The binary replication is available in 5.1, so I guess you've got some balls. We still use 5.0 and it works OK, but of course we had our share of issues with it. We use a Master-Master replication with a MySql Proxy as a load-balancer in front, and to prevent it from having errors: * *we removed all unique indexes *for the few cases where we really needed unique constraints we made sure we used REPLACE instead of INSERT (MySql Proxy can be used to guard for proper usage ... it can even rewrite your queries) *scheduled scripts doing intensive reports are always accessing the same server (not the load-balancer) ... so that dangerous operations are replicated safely Yeah, I know it sounds simple and stupid, but it solved 95% of all the problems we had. A: We use mysql replication to replicate data to close to 30 servers. We monitor them with nagios. You can probably check the replication status and use an event handler to restart it with 'SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1; Start Slave;'. That will fix the error, but you'll lose the insert that caused the error. About the error, do you use memory tables on your slaves? I ask this because the only time we ever got a lot of these errors was when they were caused by a bug in the latest releases of mysql. 'Delete From Table Where Field = Value' will delete only one row in memory tables even though there were multiple rows. mysql bug description
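If you want a quick home-grown check before (or alongside) Maatkit and nagios, the usual minimal probe just parses SHOW SLAVE STATUS; a hedged shell sketch with placeholder credentials and an arbitrary lag threshold:

#!/bin/sh
# Hypothetical monitoring script: alert when either replication thread stops
# or the slave falls too far behind the master.
STATUS=$(mysql -u monitor -pSECRET -e 'SHOW SLAVE STATUS\G')

IO=$(echo "$STATUS" | awk '/Slave_IO_Running:/ {print $2}')
SQL=$(echo "$STATUS" | awk '/Slave_SQL_Running:/ {print $2}')
LAG=$(echo "$STATUS" | awk '/Seconds_Behind_Master:/ {print $2}')

if [ "$IO" != "Yes" ] || [ "$SQL" != "Yes" ]; then
    echo "CRITICAL: replication stopped"; exit 2
fi
if [ "$LAG" != "NULL" ] && [ "$LAG" -gt 300 ]; then
    echo "WARNING: slave is ${LAG}s behind"; exit 1
fi
echo "OK: slave running, ${LAG}s behind"; exit 0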
{ "language": "en", "url": "https://stackoverflow.com/questions/35420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: nmake, visualstudio, and .mak files I was given a C++ project that was compiled using MS Visual Studio .net 2003 C++ compiler, and a .mak file that was used to compile it. I am able to build it from the command line using nmake project.mak, but the compiler complains that afxres.h was not found. I did a little searching around and the afxres.h is in the Visual Studio directory in an includes file. Where am I supposed to specify to nmake where to look for this header file? A: There should be an icon in your Start menu under Programs that opens a cmd.exe instance with all the correct MSVS environment variables set up for command line building. A: Another option is running the appropriate vars batch file from a regular command prompt. The name and location varies from version to version. For VS2003, I believe it's C:\Program Files\Microsoft Visual Studio .NET 2003\Common7\Tools\vsvars32.bat
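If you would rather stay in an ordinary command prompt than hunt for the special shortcut, a hedged two-line batch sketch: call the vars batch file first (the path shown is the default VS 2003 install quoted above, which puts the VC++ and ATL/MFC include directories, where afxres.h lives, onto INCLUDE), then run nmake as before.

call "C:\Program Files\Microsoft Visual Studio .NET 2003\Common7\Tools\vsvars32.bat"
nmake project.mak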
{ "language": "en", "url": "https://stackoverflow.com/questions/35429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How can I point Visual Studio 2008 to a new path for projects? I didn't see the option to point the workspace (or it's VS equivalent, I'm still learning the terminology for Visual Studio, but it is called a workspace in Eclipse) to My Documents/Programming instead of -- well -- wherever it is now. A: What Craig said, plus if you do want to change the default it's in Tools -> Options -> Projects And Solutions. I've never changed the default and never created a solution/project in the default location, which might tell you something about how relevant it is... A: Tools -> Options -> Projects & Solutions. There is a Visual Studio Projects box. A: When you create the project you can specify whatever directory you want, you are not limited to the default.
{ "language": "en", "url": "https://stackoverflow.com/questions/35432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to divide two 64-bit numbers in Linux Kernel? Some code that rounds up the division to demonstrate (C-syntax): #define SINT64 long long int #define SINT32 long int SINT64 divRound(SINT64 dividend, SINT64 divisor) { SINT32 quotient1 = dividend / divisor; SINT32 modResult = dividend % divisor; SINT32 multResult = modResult * 2; SINT32 quotient2 = multResult / divisor; SINT64 result = quotient1 + quotient2; return ( result ); } Now, if this were User-space we probably wouldn't even notice that our compiler is generating code for those operators (e.g. divdi3() for division). Chances are we link with libgcc without even knowing it. The problem is that Kernel-space is different (e.g. no libgcc). What to do? Crawl Google for a while, notice that pretty much everyone addresses the unsigned variant: #define UINT64 long long int #define UINT32 long int UINT64 divRound(UINT64 dividend, UINT64 divisor) { UINT32 quotient1 = dividend / divisor; UINT32 modResult = dividend % divisor; UINT32 multResult = modResult * 2; UINT32 quotient2 = multResult / divisor; UINT64 result = quotient1 + quotient2; return ( result ); } I know how to fix this one: Override udivdi3() and umoddi3() with do_div() from asm/div64.h. Done right? Wrong. Signed is not the same as unsigned, sdivdi3() does not simply call udivdi3(), they are separate functions for a reason. Have you solved this problem? Do you know of a library that will help me do this? I'm really stuck so whatever you might see here that I just don't right now would be really helpful. Thanks, Chad A: This functionality is introduced in /linux/lib/div64.c as early as kernel v2.6.22. A: Here's my really naive solution. Your mileage may vary. Keep a sign bit, which is sign(dividend) ^ sign(divisor). (Or *, or /, if you're storing your sign as 1 and -1, as opposed to false and true. Basically, negative if either one is negative, positive if none or both are negative.) Then, call the unsigned division function on the absolute values of both. Then tack the sign back onto the result. P.S. That is actually how __divdi3 is implemented in libgcc2.c (from GCC 4.2.3, the version that's installed on my Ubuntu system). I just checked. :-) A: ldiv ? Edit: reread title, so you might want to ignore this. Or not, depending on if it has an appropriate non-library version. A: I don't think (at least can't find a way to make) Chris' answer work in this case because do_div() actually changes the dividend in-place. Getting the absolute value implies a temporary variable whose value will change the way I require but can't be passed out of my __divdi3() override. I don't see a way around the parameter-by-value signature of __divdi3() at this point except to mimic the technique used by do_div(). It might seem like I'm bending over backwards here and should just come up with an algorithm to do the 64-bit/32-bit division I actually need. The added complication here though is that I have a bunch of numerical code using the '/' operator and would need to go through that code and replace every '/' with my function calls. I'm getting desperate enough to do just that though. Thanks for any follow-up, Chad
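Putting the sign-bit trick together with do_div() gives something like the sketch below for a signed 64-bit dividend and a 32-bit divisor, on kernels where do_div() is all you have; the helper name is mine, and note that do_div() really does modify its first argument in place, which is why a local copy is used.

#include <asm/div64.h> /* do_div() */

/* Hypothetical helper: signed 64/32 division built on do_div().
 * do_div(n, base) divides the u64 lvalue n in place and returns the
 * remainder, so strip the signs first and restore the sign afterwards,
 * the same sign trick libgcc's __divdi3 uses. */
static long long sdiv64_32(long long dividend, long divisor)
{
    int negative = (dividend < 0) ^ (divisor < 0);
    unsigned long long n = (dividend < 0) ? -dividend : dividend;
    unsigned long base = (divisor < 0) ? -divisor : divisor;

    do_div(n, base); /* n now holds the unsigned quotient */

    return negative ? -(long long)n : (long long)n;
}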
{ "language": "en", "url": "https://stackoverflow.com/questions/35463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Exception in Web Service locks DLL and prevents publishing. Workaround? I'm using a native DLL (FastImage.dll) in a C# ASP.NET Web Service that sometimes locks (can't delete it---says access denied); this requires stopping IIS to delete the DLL. The inability to delete this DLL in the bin folder of my published Web Service prevents me from publishing successfully (even though it thinks it published successfully!), which makes development and fixing the bug difficult (especially when it just happily runs old code since my changes may not be reflected on the server!). Note that the bug causing the Web Service to bomb and lock up the DLL is in my code, which is outside of said DLL, so I think this is a more general problem than just the FreeImage library (not to bring them any heat). * *Has anyone experienced this? *Is there a way to make sure that when it says "Publish succeeded" from the VS IDE that it really means it, or to run sort of script to check that the files are really deleted before it attempts to Publish (like a pre-build step in VC++). (Right now I manually delete the files before publishing to make sure that I know the DLLs were replaced, instead of running old DLLs. It's still a problem, though if I can't delete the DLL.) *How would I detect whether a file was successfully deleted from a batch file? (so I can stop and start IIS if it fails) *Is it possible to stop and start IIS from a script (preferably from the Publish... action in the VS IDE) and if so, how? A: Using the IISReset command line tool will only restart IIS on the local machine, not on a remote server to which you are publishing. Assuming that you are publishing to a Windows 2003 server, I'd suggest trying the slightly less drastic step of stopping and restarting the IIS AppPool in the web site or virtual folder in which the web service runs. (That way you are not taking all sites that run on the target server offline.) This too assumes that the web service runs in its own app pool. Ideally it should, so you keep it isolated. I'd recommend getting away from using the Publishing process and to look into using a Web Deployment Project. Here is a post on ScottGu's blog detailing VS 2005 Web Deployment Projects. The advantage to the Web Deployment Project approach is that it provides you with all the power and capability of MSbuild, as it is really just a convenience wrapper around MSBuild. Here's a post from the MSBuild team about pre-build and post-build capabilities Hope this helps. A: You could use the IISReset command line tool to stop/restart iis. So you could write a simple batch file to stop iis, copy your files, and then restart iis. I'm not sure how to integrate this with the VS publish feature however.
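For the batch-file part of the question, a rough sketch of the stop/delete/verify/restart sequence looks like this; the DLL path is a placeholder, and note that iisreset only restarts IIS on the local machine, as mentioned in the answers above.

@echo off
rem Illustrative only: adjust the path to your published bin folder.
iisreset /stop
del "C:\Inetpub\wwwroot\MyService\bin\FastImage.dll"
if exist "C:\Inetpub\wwwroot\MyService\bin\FastImage.dll" (
    echo Could not delete the locked DLL - aborting publish.
    iisreset /start
    exit /b 1
)
iisreset /start
echo Old DLL removed - safe to publish.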
{ "language": "en", "url": "https://stackoverflow.com/questions/35479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I clear Class::DBI's internal cache? I'm currently working on a large implementation of Class::DBI for an existing database structure, and am running into a problem with clearing the cache from Class::DBI. This is a mod_perl implementation, so an instance of a class can be quite old between times that it is accessed. From the man pages I found two options: Music::DBI->clear_object_index(); And: Music::Artist->purge_object_index_every(2000); Now, when I add clear_object_index() to the DESTROY method, it seems to run, but doesn't actually empty the cache. I am able to manually change the database, re-run the request, and it is still the old version. purge_object_index_every says that it clears the index every n requests. Setting this to "1" or "0" seems to clear the index... sometimes. I'd expect one of those two to work, but for some reason it doesn't do it every time. More like 1 in 5 times. Any suggestions for clearing this out? A: The "common problems" page on the Class::DBI wiki has a section on this subject. The simplest solution is to disable the live object index entirely using: $Class::DBI::Weaken_Is_Available = 0; A: $obj->dbi_commit(); may be what you are looking for if you have uncompleted transactions. However, this is not very likely the case, as it tends to complete any lingering transactions automatically on destruction. When you do this: Music::Artist->purge_object_index_every(2000); You are telling it to examine the object cache every 2000 object loads and remove any dead references to conserve memory use. I don't think that is what you want at all. Furthermore, Music::DBI->clear_object_index(); Removes all objects from the live object index. I don't know how this would help at all; it's not flushing them to disk, really. It sounds like what you are trying to do should work just fine the way you have it, but there may be a problem with your SQL or elsewhere that is preventing the INSERT or UPDATE from working. Are you doing error checking for each database query as the perldoc suggests? Perhaps you can begin there or in your database error logs, watching the queries to see why they aren't being completed or if they ever arrive. Hope this helps! A: I've used remove_from_object_index successfully in the past, so that when a page is called that modifies the database, it always explicitly resets that object in the cache as part of the confirmation page. A: I should note that Class::DBI is deprecated and you should port your code to DBIx::Class instead.
{ "language": "en", "url": "https://stackoverflow.com/questions/35480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Linux: What is the best way to estimate the code & static data size of program? I want to be able to get an estimate of how much code & static data is used by my C++ program? Is there a way to find this out by looking at the executable or object files? Or perhaps something I can do at runtime? Will objdump & readelf help? A: "size" is the traditional tool. "readelf" has a lot of options. $ size /bin/sh text data bss dec hex filename 712739 37524 21832 772095 bc7ff /bin/sh A: If you want to take the next step of identifying the functions and data structures to focus on for footprint reduction, the --size-sort argument to nm can show you: $ nm --size-sort /usr/bin/fld | tail -10 000000ae T FontLoadFontx 000000b0 T CodingByRegistry 000000b1 t ShmFont 000000ec t FontLoadw 000000ef T LoadFontFile 000000f6 T FontLoadDFontx 00000108 D fSRegs 00000170 T FontLoadMinix 000001e7 T main 00000508 T FontLoadBdf A: readelf will indeed help. You can use the -S option; that will show the sizes of all sections. .text is (the bulk of) your executable code. .data and .rodata is your static data. There are other sections too, some of which are used at runtime, others only at link time. A: size -A
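To see the per-section breakdown that the readelf answer refers to, something along these lines works (the binary name is a placeholder; .text is code, .data/.rodata are initialized static data, .bss is zero-initialized static data):

# Per-section sizes for the interesting sections
readelf -S ./myprog | grep -E '\.(text|rodata|data|bss)'

# Or the traditional one-line summary
size ./myprog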
{ "language": "en", "url": "https://stackoverflow.com/questions/35485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Fluid rounded corners with jQuery What is the best way to create fluid width/height rounded corners with jQuery? That plugin doesn't keep the height the same. I have a 10px high div that I want to round the corners on, when I use that script it adds about 10px onto whats there. A: I use: Jquery-roundcorners-canvas it handles borders, and keeps things the same size, in fact you have to pad in a bit to keep from having letters live in the crease. Its pretty fast, unless you are on ie 6. Same pretty syntax of the other corner packs, but just prettier in general. Edited to add new link for jQuery Roundcorners Canvas A: The way the jQuery UI Theming API accomplishes this in Firefox is with "Corner Radius Helpers". Here's what they look like in the CSS that was bundled in my copy of the UI: /* Corner radius */ .ui-corner-tl { -moz-border-radius-topleft: 4px; -webkit-border-top-left-radius: 4px; } .ui-corner-tr { -moz-border-radius-topright: 4px; -webkit-border-top-right-radius: 4px; } .ui-corner-bl { -moz-border-radius-bottomleft: 4px; -webkit-border-bottom-left-radius: 4px; } .ui-corner-br { -moz-border-radius-bottomright: 4px; -webkit-border-bottom-right-radius: 4px; } .ui-corner-top { -moz-border-radius-topleft: 4px; -webkit-border-top-left-radius: 4px; -moz-border-radius-topright: 4px; -webkit-border-top-right-radius: 4px; } .ui-corner-bottom { -moz-border-radius-bottomleft: 4px; -webkit-border-bottom-left-radius: 4px; -moz-border-radius-bottomright: 4px; -webkit-border-bottom-right-radius: 4px; } .ui-corner-right { -moz-border-radius-topright: 4px; -webkit-border-top-right-radius: 4px; -moz-border-radius-bottomright: 4px; -webkit-border-bottom-right-radius: 4px; } .ui-corner-left { -moz-border-radius-topleft: 4px; -webkit-border-top-left-radius: 4px; -moz-border-radius-bottomleft: 4px; -webkit-border-bottom-left-radius: 4px; } .ui-corner-all { -moz-border-radius: 4px; -webkit-border-radius: 4px; } Unfortunately, these don't appear to have any effect in IE7 as of this post. In jQuery code, one of these classes might be applied in a fashion something like this: $('#SomeElementID').addClass("ui-corner-all"); A: $(this).corner(); See: malsup.com/jquery/corner and github repository for future ref A: If you want full control about the border an d gradient, you can use my iQuery Background Canvas plugin. It works with a HTML5 Canvas element and allows to draw borders and backgrounds in any variation. But you should be able to program JavaScript This is a full featured sample with a background gradient and rounded corners. as you can see, the drawing is completely done in JavaScript, you can set every parameter you want. The drawing is redone on every resize (Due to the resize Event), you can adapt the background drawing to show wat you want on this specific size. 
$(document).ready(function(){ $(".Test").backgroundCanvas(); }); function DrawBackground() { $(".Test").backgroundCanvasPaint(TestBackgroundPaintFkt); } // Draw the background on load and resize $(window).load(function () { DrawBackground(); }); $(window).resize(function() { DrawBackground(); }); function TestBackgroundPaintFkt(context, width, height, elementInfo){ var options = {x:0, height: height, width: width, radius:14, border: 0 }; // Draw the red border rectangle context.fillStyle = "#FF0000"; $.canvasPaint.roundedRect(context,options); // Draw the gradient filled inner rectangle var backgroundGradient = context.createLinearGradient(0, 0, 0, height - 10); backgroundGradient.addColorStop(0 ,'#AAAAFF'); backgroundGradient.addColorStop(1, '#AAFFAA'); options.border = 5; context.fillStyle = backgroundGradient; $.canvasPaint.roundedRect(context,options); } Here is the plugin, and this site makes a vast use of it: jQuery Background Canvas Plugin
{ "language": "en", "url": "https://stackoverflow.com/questions/35486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Spread vs MPI vs zeromq? In one of the answers to Broadcast like UDP with the Reliability of TCP, a user mentions the Spread messaging API. I've also run across one called ØMQ. I also have some familiarity with MPI. So, my main question is: why would I choose one over the other? More specifically, why would I choose to use Spread or ØMQ when there are mature implementations of MPI to be had? A: MPI was designed for tightly-coupled compute clusters with fast, reliable networks. Spread and ØMQ are designed for large distributed systems. If you're designing a parallel scientific application, go with MPI, but if you are designing a persistent distributed system that needs to be resilient to faults and network instability, use one of the others. MPI has very limited facilities for fault tolerance; the default error handling behavior in most implementations is a system-wide fail. Also, the semantics of MPI require that all messages sent eventually be consumed. This makes a lot of sense for simulations on a cluster, but not for a distributed application. A: You're addressing very different APIs here, with different notions about the kind of services provided and infrastructure for each of them. I don't know enough about MPI and Spread to answer for them, but I can help a little more with ZeroMQ. ZeroMQ is a simple messaging communication library. It does nothing else than send a message to different peers (including local ones) based on a restricted set of common messaging patterns (PUSH/PULL, REQUEST/REPLY, PUB/SUB, etc.). It handles client connection, retrieval, and basic congestion strictly based on those patterns and you have to do the rest yourself. Although appearing very restricted, this simple behavior is mostly what you would need for the communication layer of your application. It lets you scale very quickly from a simple prototype, all in memory, to more complex distributed applications in various environments, using simple proxies and gateways between nodes. However, don't expect it to do node deployment, network discovery, or server monitoring; you will have to do it yourself. Briefly, use zeromq if you have an application that you want to scale from a simple multithreaded process to a distributed and variable environment, or that you want to experiment and prototype with quickly and no existing solution seems to fit your model. Expect, however, to have to put some effort into the deployment and monitoring of your network if you want to scale to a very large cluster. A: I have not used any of these libraries, but I may be able to give some hints. * *MPI is a communication protocol while Spread and ØMQ are actual implementations. *MPI comes from "parallel" programming while Spread comes from "distributed" programming. So, it really depends on whether you are trying to build a parallel system or a distributed system. They are related to each other, but the implied connotations/goals are different. Parallel programming deals with increasing computational power by using multiple computers simultaneously. Distributed programming deals with a reliable (consistent, fault-tolerant and highly available) group of computers. The concept of "reliability" is slightly different from that of TCP. TCP's reliability is "give this packet to the end program no matter what." The distributed programming's reliability is "even if some machines die, the system as a whole continues to work in a consistent manner."
To really guarantee that all participants got the message, one would need something like two-phase commit or one of the faster alternatives.
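To give a feel for how small the ØMQ surface is, here is a minimal request/reply pair sketched with the Python binding (pyzmq); the port and message contents are arbitrary, and in practice the two sockets would live in separate processes:

import zmq

ctx = zmq.Context()

# "Server": reply socket
rep = ctx.socket(zmq.REP)
rep.bind("tcp://*:5555")

# "Client": request socket
req = ctx.socket(zmq.REQ)
req.connect("tcp://localhost:5555")

req.send(b"ping")        # client queues a request
print(rep.recv())        # server receives it
rep.send(b"pong")        # server replies
print(req.recv())        # client gets the reply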
{ "language": "en", "url": "https://stackoverflow.com/questions/35490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: Mac OS X: What is the best way to estimate the code & static data size of program? I want to be able to get an estimate of how much code & static data is used by my C++ program? Is there a way to find this out by looking at the executable or object files? Or perhaps something I can do at runtime? Will otool help? A: * *"size" is the traditional tool and works on all unix flavors. *"otool" has a bit finer grain control and has a lot of options. . $ size python __TEXT __DATA __OBJC others dec hex 860160 159744 0 2453504 3473408 350000 A: I think otool can help. Specifically, "otool -s {segment} {section}" should print out the details. I'm not sure if you can get information about __DATA or __TEXT without specifying a section. Maybe those sizes are reported in the mach header: "otool -h"? otool -s __DATA __data MyApp.bundle/Contents/MacOS/MyApp otool -s __TEXT __text MyApp.bundle/Contents/MacOS/MyApp Anyway, Apple documents what gets copied into each section per-segment here: Apple's mach-o format documentation
{ "language": "en", "url": "https://stackoverflow.com/questions/35491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is there anyway to run Ruby on Rails applications on a Windows box? I'm looking to run Redmine, a Ruby on Rails app, on a VPS windows box. The only thing I can really think of is running a virtual Linux machine and hosting it from there. If that is my only option, am I going to run into problems running a virtual machine inside of a virtual machine? Also, this will be an internal app, so performance isn't my number once concern. A: Windows is not the usual place to deploy production Rails apps, but there are people who do it. Mongrel was originally written to give better deployment options for Windows. As it turned out the UNIX deployment options weren't that good either. :) Start with the Ruby One Click installer so you have a sane installation of ruby and rubygems. From there, you install the rails gem and the gem for your database like you normally would. Most if not all of the databases have Windows gems. Make sure to install mongrel_service to be able to control each mongrel like a normal windows service. See mongrel_rails service::install -h for details. Once you have your mongrels set up, it's similar to a UNIX deployment. You set up a reverse proxy, such as Apache2 and you're set. You might run into some gems (such as BackgroundRB) that will not work under Windows because they have C code that either rely on UNIX libraries or expect a UNIX-like build system at installation time. However, all of the really important Rails gems, such as Mongrel and the database adapters, have gems with pre-built binaries available, so you'll be fine. A: Just grab the Bitnami RedMine stack. Or the Bitnami Ruby on Rails stack. A: Redmine ran fine on our Windows build server from the command line, sans IDE, using Mongrel and SQLite. Granted, ours isn't a VPS, but at any rate Linux isn't required for this app. A: The BitNami Redmine Stack is a free, up to date Windows installer package for Redmine on Windows, Linux, OS X... give it a try. You can also download a VMWare Virtual Machine as well A: I had good luck in the past with InstantRails, but I'm not sure how up-to-date it is now, as the project has changed hands several times. You might also try ruby stack or flash rails, though I've not used either personally. A: You can install SQLite and a Rails server like Mongrel on a Windows machine. I used Aptana to run a development environment, but I'm sure that there are better alternatives that don't require the IDE to be open to run. But this proves it is possible. A: Try using a Rails distribution like: http://instantrails.rubyforge.org/wiki/wiki.pl One download install for rails+ruby+mysql on Windows.
{ "language": "en", "url": "https://stackoverflow.com/questions/35494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Chat application AJAX polling In the project I am currently working on, we have the need to develop a web chat application, not a very complex chat, just a way to connect two people to talk about a very specific topic, we don't need any kind of authentication for one of the two users, we don't have to support emoticons, avatars, or stuff like that. Some project members suggested that we could use XMPP through BOSH, I said that is like trying to catch a fish with a boat's net, and proposed a simpler method, like a simple Ajax/MySQL web chat, but we're worried about the performance hit in the server because of the constant polling of many chats open at the same time. Has anyone done something like this before? What would you recommend? A: You might also want to look into Comet. It's used by GTalk, Meebo, and many other chat applications. A few years ago when I was experimenting with it, there weren't very many libraries or details about server architecture to implement it, but it looks like there is a lot more stuff out now. Have a look at the cometd project for more technical information. A: What would you recommend? XMPP through BOSH There's no need to invent your own message format and transport protocol when somebody else has. If you try, it'll slowly grow to be just as complex as BOSH but without the benefit of third-party library support or standardization. A: If you don't like the idea of HTTP-polling, you could have a Flash-movie on the chat page that has a constant connection to some deamon on the server, the Flash-movie would then invoke JavaScript functions on the client to update the chat as new messages comes along. (Unless you want a Flash interface for your chat..) A: You might also want to look into Comet. I thought everyone used cometd for this sort of thing. BOSH is a standard for transporting XMPP over HTTP. It involves Comet for pushing data to the client. A: There is a very good server for handling message pushing from server to browser (dubbed Comet) - Orbited. It's easily integrated with other technologies (Django, Rails, PHP etc.) just like memcached. You really should check it if you want to handle serious load. Otherwise, simple Ajax polling is the best way. A: I did this very same thing a few months back and had fun just playing around with the concepts. I actually used the forever-frame technique instead of polling. The below code is my "comet" js file that contains the general concepts required to get a "party chat" setup. 
function Comet(key) { var random = key; var title = 'Comet'; var connection = false; var iframediv = false; var browserIsIE = /*@cc_on!@*/false; var blurStatus = false; var tmpframe = document.createElement('iframe'); var nl = '\r\n'; this.initialize = function() { if (browserIsIE) { connection = new ActiveXObject("htmlfile"); connection.open(); connection.write("<html>"); connection.write("<script>document.domain = '"+document.domain+"'"); connection.write("</html>"); connection.close(); iframediv = connection.createElement("div"); connection.appendChild(iframediv); connection.parentWindow.comet = comet; iframediv.innerHTML = "<iframe id='comet_iframe' src='./comet.aspx?key="+random+"'></iframe>"; } else { connection = document.createElement('iframe'); connection.setAttribute('id', 'comet_iframe'); iframediv = document.createElement('iframe'); iframediv.setAttribute('src', './comet.aspx?key='+random); connection.appendChild(iframediv); document.body.appendChild(connection); } } // this function is called from the server to keep the connection alive this.keepAlive = function () { if (!browserIsIE) { mozillaHack(); } } // this function is called from the server to update the client this.updateClient = function (value) { var outputDiv = document.getElementById('output'); outputDiv.value = value + nl + outputDiv.value; if (blurStatus == true) { document.title = value; } if (!browserIsIE) { mozillaHack(); } } this.onUnload = function() { if (connection) { // this will release the iframe to prevent problems with IE when reloading the page connection = false; } } this.toggleBlurStatus = function(bool) { blurStatus = bool; } this.resetTitle = function() { document.title = title; } function mozillaHack() { // this hack will fix the hour glass and loading status for Mozilla browsers document.body.appendChild(tmpframe); document.body.removeChild(tmpframe); } } A: The trick is to realise that the only time your app needs to invoke CGI on the server is when someone says something. For the regular polls, poll a static page that your CGI script updates whenever there is new chat. Use HEAD requests, compare the timestamps with those last seen, and only do a full GET when those change. I have a simple naive chat application implemented this way, and the load and bandwidth usage is negligible for the few tens of simultaneous users we have. A: I thought everyone used cometd for this sort of thing. A: I agree with John. But there was another question that was not answered. I have done this but instead of using a database we used a flat file, it did eventually cripple the server, but it wasn't until we has ~450 active users, and if we had done it with a database it probably would have fared better.This was done on a basic hosting account from Godaddy. Edit: BTW Godaddy sounded less then amused when I got the phone call. A: I think polling is the simplest approach and would recommend that first. If the load becomes a problem start, looking into more complicated techniques. A good discussion on the pros and cons are here - http://www.infoq.com/news/2007/07/pushvspull http://ajaxian.com/archives/a-report-on-push-versus-pull A: Checkout Speeqe. Its a open-source solution for Web-based chat rooms that uses BOSH and XMPP behind the scenes. A: I just found this post, it is old, but polling concept gives troubles for a lot of poeple. So i'll put an implementation example here. 
But before giving it to you, I should give you an advice that made me mad some time ago : When you poll, you should take care of sessions behaviour (race conditions). To make it simple : if you open a session, the session file is locked until the session is closed to avoid 2 theads writting different data into it. So, if you need a session to check if a user is logged or so, always close the session before polling. My demo gives you an example of a polling implementation in PHP. I will not use a database, but a file instead. When you click polling button, you will enter the loop and until the file is modified, you will stay polling. When you fill the form and click Release, what you typed will be saved into the file. Modification time of the file will change so the polling will stop. Tip: use a tool like Firebug to see what's happen. Now lets speak in a better langage than my english : <?php // For this demo if (file_exists('poll.txt') == false) { file_put_contents('poll.txt', ''); } if (isset($_GET['poll'])) { // Don't forget to change the default time limit set_time_limit(120); date_default_timezone_set('Europe/Paris'); $time = time(); // We loop until you click on the "release" button... $poll = true; $number_of_tries = 1; while ($poll) { // Here we simulate a request (last mtime of file could be a creation/update_date field on a base) clearstatcache(); $mtime = filemtime('poll.txt'); if ($mtime > $time) { $result = htmlentities(file_get_contents('poll.txt')); $poll = false; } // Of course, else your polling will kill your resources! $number_of_tries++; sleep(1); } // Outputs result echo "Number of tries : {$number_of_tries}<br/>{$result}"; die(); } // Here we catch the release form if (isset($_GET['release'])) { $data = ''; if (isset($_GET['data'])) { $data = $_GET['data']; } file_put_contents('poll.txt', $data); die(); } ?> <!-- click this button to begin long-polling --> <input id="poll" type="button" value="Click me to start polling" /> <br/><br/> Give me some text here : <br/> <input id="data" type="text" /> <br/> <!-- click this button to release long-polling --> <input id="release" type="button" value="Click me to release polling" disabled="disabled" /> <br/><br/> Result after releasing polling : <div id="result"></div> <script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js"></script> <script type="text/javascript"> // Script to launch polling $('#poll').click(function() { $('#poll').attr('disabled', 'disabled'); $('#release').removeAttr('disabled'); $.ajax({ url: 'poll.php', data: { poll: 'yes' // sets our $_GET['poll'] }, success: function(data) { $('#result').html(data); $('#poll').removeAttr('disabled'); $('#release').attr('disabled', 'disabled'); } }); }); // Script to release polling $('#release').click(function() { $.ajax({ url: 'poll.php', data: { release: 'yes', // sets our $_GET['release'] data: $('#data').val() // sets our $_GET['data'] } }); }); </script> You can try it here
{ "language": "en", "url": "https://stackoverflow.com/questions/35499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Changing default file structure in a Java Struts App I have been working with Struts for some time, but for a project I am finishing I was asked to separate the Templates (velocity .vm files) and configs (struts.xml, persistence.xml) from the main WAR file. I have everything in the default structure, like: application |-- META-INF -- Some configs are here |-- WEB-INF -- others here | |-- classes | | |-- META-INF | | `-- mypackage | | `-- class-files | `-- lib |-- css `-- tpl -- Template dir to be relocated And I apparently can't find documentation about how to set up (probably in struts.xml) where my templates go, and where the config files will be. I think I will have to use configurations on the application server too (I am using Jetty 5.1.14). So, any light on how to configure it? Thanks Well, the whole point of moving the templates is to put them in a designer-accessible area, so that whenever a modification is needed, the designer can load them to his/her computer, edit, and upload them again. I think this is a common scenario. So, probably I am missing something in my research. Maybe I am focusing on configuring it in the wrong place ... Any thoughts? A: If I understood your question about Struts config files right, they are specified in web.xml. Find the Struts servlet config param. The param-value can be a comma-separated list of XML files to load. Eg: <servlet> <servlet-name>action</servlet-name> <servlet-class>org.apache.struts.action.ActionServlet</servlet-class> <init-param> <param-name>config</param-name> <param-value> WEB-INF/config/struts-config.xml, WEB-INF/config/struts-config-stuff.xml, WEB-INF/config/struts-config-good.xml, WEB-INF/config/struts-config-bad.xml, WEB-INF/config/struts-config-ugly.xml </param-value> </init-param> ... </servlet> See this Struts guide under 5.3.2. And yes, this applies to 2.x also. A: For persistence.xml, specifically, you can put a persistence unit in a separate JAR, which you can deploy separately from your web application WAR, or both together in an EAR archive, depending on what your application server supports. For example, the JBoss manual describes this as Deploy EAR with EJB3 JAR. For struts-config.xml I expect that you are going to have to override the Struts code that loads it, if you want to use a non-standard location. I don't know about the Velocity templates. In general, web applications only load resources from within the WAR, for security reasons. There are other techniques you can use, but you may find it easier to try weblets, which seems to be a framework designed to let you load resources from a separate JAR. A: You need to look into the velocity.properties file in your WEB-INF folder. IMHO it is here that you need to change your template root, by changing the property file.resource.loader.path. Hope it helps, Petr
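Following the last answer, the relevant lines in velocity.properties would look something like this; the path is only an example of a designer-accessible directory outside the WAR, and the property names are the standard Velocity 1.x file resource loader settings:

# velocity.properties (under WEB-INF)
resource.loader = file
file.resource.loader.path = /srv/designer/templates
file.resource.loader.cache = false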
{ "language": "en", "url": "https://stackoverflow.com/questions/35507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: looping and average in c++ Programming student here... trying to work on a project but I'm stuck. The project is supposed to find the miles per gallon for each trip and then, at the end, output the total miles, total gallons used, and the average miles per gallon. How do I loop back up to the first question after the first set of questions has been asked? Also, how will I average the trips... will I have to have a variable for each of the trips? I'm stuck, any help would be great! A: You will have to tell us the type of data you are given. As per your last question: remember that an average can be calculated in real time by either storing the sum and the number of data points (two numbers), or the current average and the number of data points (again, two numbers). For instance: class Averager { double avg; int n; public: Averager() : avg(0), n(0) {} void addPoint(double v) { avg = (n * avg + v) / (n + 1); n++; } double average() const { return avg; } };
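A usage sketch tying the answer to the original question: a simple input loop that asks for each trip, feeds the trip's MPG into the running average, and keeps totals. The prompts and variable names are illustrative, not part of the original answer.

#include <iostream>

int main() {
    Averager mpg;                           // the class from the answer above
    double totalMiles = 0, totalGallons = 0;
    char again = 'y';
    while (again == 'y' || again == 'Y') {
        double miles, gallons;
        std::cout << "Miles driven this trip: ";
        std::cin >> miles;
        std::cout << "Gallons used this trip: ";
        std::cin >> gallons;
        mpg.addPoint(miles / gallons);      // per-trip MPG goes into the running average
        totalMiles += miles;
        totalGallons += gallons;
        std::cout << "Another trip? (y/n): ";
        std::cin >> again;
    }
    std::cout << "Total miles: " << totalMiles
              << "\nTotal gallons: " << totalGallons
              << "\nAverage MPG per trip: " << mpg.average() << "\n";
    return 0;
}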
{ "language": "en", "url": "https://stackoverflow.com/questions/35522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: canonical problems list Does anyone know of a good reference for canonical CS problems? I'm thinking of things like "the sorting problem", "the bin packing problem", "the travelling salesman problem" and what not. edit: websites preferred A: You can probably find the best in an algorithms textbook like Introduction to Algorithms. Though I've never read that particular book, it's quite renowned for being thorough and would probably contain most of the problems you're likely to encounter. A: "Computers and Intractability: A guide to the theory of NP-Completeness" by Garey and Johnson is a great reference for this sort of thing, although the "solved" problems (in P) are obviously not given much attention in the book. I'm not aware of any good on-line resources, but Karp's seminal paper Reducibility among Combinatorial Problems (1972) on reductions and complexity is probably the "canonical" reference for Hard Problems. A: Have you looked at Wikipedia's Category:Computational problems and Category:NP Complete Problems pages? It's probably not complete, but they look like good starting points. Wikipedia seems to do pretty well in CS topics. A: I don't think you'll find the answers to all those problems in only one book. I've never seen any decent, comprehensive website on algorithms, so I'd recommend you stick to the books. That said, you can always get some introductory material from canonical algorithm texts (there are always three I usually recommend: CLRS, Manber, and Aho, Hopcroft and Ullman -- this one is a bit out of date in some key topics, but it's so formal and well-written that it's a must-read). All of them contain important combinatorial problems that are, in some sense, canonical problems in computer science. After learning some fundamentals in graph theory you'll be able to move to Network Flows and Linear Programming. These comprise a set of techniques that will ultimately solve most problems you'll encounter (linear programming with the variables restricted to integer values is NP-hard). Network flows deals with problems defined on graphs (with weighted/capacitated edges) with very interesting applications in fields that seemingly have no relationship to graph theory whatsoever. THE textbook on this is Ahuja, Magnanti and Orlin's. Linear programming is some kind of superset of network flows, and deals with optimizing a linear function on variables subject to restrictions in the form of a linear system of equations. A book that emphasizes the relationship to network flows is Bazaraa's. Then you can move on to integer programming, a very valuable tool that presents many natural techniques for modelling problems like bin packing, task scheduling, the knapsack problem, and so on. A good reference would be L. Wolsey's book. A: You definitely want to look at NIST's Dictionary of Algorithms and Data Structures. It's got the traveling salesman problem, the Byzantine generals problem, the dining philosophers' problem, the knapsack problem (= your "bin packing problem", I think), the cutting stock problem, the eight queens problem, the knight's tour problem, the busy beaver problem, the halting problem, etc. etc. It doesn't have the firing squad synchronization problem (I'm surprised about that omission) or the Jeep problem (more logistics than computer science). Interestingly enough there's a blog on codinghorror.com which talks about some of these in puzzle form.
(I can't remember whether I've read Smullyan's book cited in the blog, but he is a good compiler of puzzles & philosophical musings. Martin Gardner and Douglas Hofstadter and H.E. Dudeney are others.) Also maybe check out the Stony Brook Algorithm Repository. (Or look up "combinatorial problems" on google, or search for "problem" in Wolfram Mathworld or look at Hilbert's problems, but in all these links many of them are more pure-mathematics than computer science.) A: @rcreswick those sound like good references but fall a bit shy of what I'm thinking of. (However, for all I know, it's the best there is.) I'm going to not mark anything as accepted in hopes people might find a better reference. Meanwhile, I'm going to list a few problems here, feel free to add more: The sorting problem: Find an order for a set that is monotonic in a given way. The bin packing problem: Partition a set into a minimum number of subsets where each subset is "smaller" than some limit. The travelling salesman problem: Find a Hamiltonian cycle in a weighted graph with the minimum total weight.
{ "language": "en", "url": "https://stackoverflow.com/questions/35528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What Are High-Pass and Low-Pass Filters? Graphics and audio editing and processing software often contain functions called "High-Pass Filter" and "Low-Pass Filter". Exactly what do these do, and what are the algorithms for implementing them? A: Here is a super simple example of a low pass filter in C++ that processes the signal one sample at a time: float lopass(float input, float cutoff) { static float outputs[1] = { 0.0f }; /* filter state: previous output */ float lo_pass_output = outputs[0] + (cutoff * (input - outputs[0])); outputs[0] = lo_pass_output; return lo_pass_output; } Here is pretty much the same thing, except it's high pass: float hipass(float input, float cutoff) { static float outputs[1] = { 0.0f }; /* filter state: previous output */ float hi_pass_output = input - (outputs[0] + cutoff * (input - outputs[0])); outputs[0] = hi_pass_output; return hi_pass_output; } A: They are generally electrical circuits that tend to pass parts of analog signals. High pass tends to transmit more of the high frequency parts and low pass tends to pass more of the low frequency parts. They can be simulated in software. A walking average can act as a low pass filter for instance, and the difference between a walking average and its input can work as a high pass filter. A: High-pass filter lets high-frequency (detailed/local information) pass. Low-pass filter lets low-frequency (coarse/rough/global information) pass. A: Filtering describes the act of processing data in a way that applies different levels of attenuation to different frequencies within the data. A high pass filter will apply minimal attenuation (i.e. leave levels unchanged) to high frequencies, but applies maximum attenuation to low frequencies. A low pass filter is the reverse: it will apply no attenuation to low frequencies but applies attenuation to high frequencies. There are a number of different filtering algorithms that are used. The two simplest are probably the Finite Impulse Response filter (aka FIR filter) and the Infinite Impulse Response filter (aka IIR filter). The FIR filter works by keeping a series of samples and multiplying each of those samples by a fixed coefficient (which is based on the position in the series). The results of each of these multiplications are accumulated and become the output for that sample. This is referred to as a Multiply-Accumulate - and in dedicated DSP hardware there is a specific MAC instruction for doing just this. When the next sample is taken it's added to the start of the series, and the oldest sample in the series is removed, and the process repeated. The behavior of the filter is fixed by the selection of the filter coefficients. One of the simplest filters that is often provided by image processing software is the averaging filter. This can be implemented by an FIR filter by setting all of the filter coefficients to the same value.
Here is how you implement a high-pass filter using FFT: double[] signal = (some 1d signal); // Do FFT: double[] real; double[] imag; [real, imag] = fft(signal) // Set the first quarter of the real part to zero to attenuate the low frequencies for (int i=0; i < real.Length / 4; i++) real[i] = 0; // Do inverse FFT: double[] highfrequencysignal = inversefft(real, imag); Again, this is simplified, but you get the idea. The code does not look as complicated as the math. A: Wikipedia: * *High-pass filter *Low-pass filter *Band-pass filter These "high", "low", and "band" terms refer to frequencies. In high-pass, you try to remove low frequencies. In low-pass, you try to remove high. In band pass, you only allow a continuous frequency range to remain. Choosing the cut-off frequency depends upon your application. Coding these filters can either be done by simulating RC circuits or by playing around with Fourier transforms of your time-based data. See the wikipedia articles for code examples.
{ "language": "en", "url": "https://stackoverflow.com/questions/35530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Zoom for a windows form in C# Is there an easy way to set the zoom level for a windows form in C#? In VBA there was a zoom property of the form. A: You can get some kind of zoom by assigning different Font to the Form, all the controls will be zoomed accordingly if AutoScaleMode set to Font. Also settings AutoSize to False will keep form size intact, the controls will grow to the center of the form. You need to set up all Anchors correctly and test the look, since its just "kind of zoom". So basically here is sample constructor: public Form1() { InitializeComponent(); AutoSize = false; AutoScaleMode = AutoScaleMode.Font; Font = new Font("Trebuchet MS", 10.0f, FontStyle.Regular, GraphicsUnit.Point, ((byte)(204)) ); } After form has been shown assigning new Font will mess up all the controls and this trick will not work. A: I had the same problem and I solved it this way in c#. Code goes on Form load float scaleX = ((float)Screen.PrimaryScreen.WorkingArea.Width / 1024); float scaleY = ((float)Screen.PrimaryScreen.WorkingArea.Height / 768); SizeF aSf = new SizeF(scaleX, scaleY); this.Scale(aSf); This "more or less" scales form and all children. Loops forever in 800x600 (?) You have to set the following Form properties: AutoscaleMode = Font AutoSize = False A: There is no way (that I know of) to do what you ask with typical WinForms. If you're doing custom painting/drawing, you can zoom that by using a zoom transform, but so far as I know there is no "Zoom" property for the form in the entire world of .NET and native Windows/C++ APIs combined. You could probably rig something yourself such that you scale controls by a constant factor. And you can probably find 3rd-party controls/surfaces which support this. And who knows what is possible with WPF. But in a typical WinForms world, no.
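For the custom-painting case mentioned in the last answer, the zoom transform idea looks roughly like this inside a Form or Control subclass; the scale factor and the drawing call are only placeholders for whatever you paint yourself:

protected override void OnPaint(PaintEventArgs e)
{
    // Everything drawn through e.Graphics after this call is scaled by 150%.
    e.Graphics.ScaleTransform(1.5f, 1.5f);
    e.Graphics.DrawString("Zoomed content", Font, Brushes.Black, 10, 10);
    base.OnPaint(e);
}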
{ "language": "en", "url": "https://stackoverflow.com/questions/35537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Validate (X)HTML in Python What's the best way to go about validating that a document follows some version of HTML (prefereably that I can specify)? I'd like to be able to know where the failures occur, as in a web-based validator, except in a native Python app. A: The html5lib module can be used to validate an HTML5 document: >>> import html5lib >>> html5parser = html5lib.HTMLParser(strict=True) >>> html5parser.parse('<html></html>') Traceback (most recent call last): ... html5lib.html5parser.ParseError: Unexpected start tag (html). Expected DOCTYPE. A: Try tidylib. You can get some really basic bindings as part of the elementtidy module (builds elementtrees from HTML documents). http://effbot.org/downloads/#elementtidy >>> import _elementtidy >>> xhtml, log = _elementtidy.fixup("<html></html>") >>> print log line 1 column 1 - Warning: missing <!DOCTYPE> declaration line 1 column 7 - Warning: discarding unexpected </html> line 1 column 14 - Warning: inserting missing 'title' element Parsing the log should give you pretty much everything you need. A: PyTidyLib is a nice python binding for HTML Tidy. Their example: from tidylib import tidy_document document, errors = tidy_document('''<p>f&otilde;o <img src="bar.jpg">''', options={'numeric-entities':1}) print document print errors Moreover it's compatible with both legacy HTML Tidy and the new tidy-html5. A: I think that HTML tidy will do what you want. There is a Python binding for it. A: In my case the python W3C/HTML cli validation packages did not work (as of sept 2016). I did it manually using requests like so code: r = requests.post('https://validator.w3.org/nu/', data=open('FILE.html','rb').read(), params={'out': 'json'}, headers={'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.101 Safari/537.36', 'Content-Type': 'text/html; charset=UTF-8'}) print r.json() in the console: $ echo '<!doctype html><html lang=en><head><title>blah</title></head><body></body></html>' | tee FILE.html $ pip install requests $ python Python 2.7.12 (default, Jun 29 2016, 12:46:54) [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import requests >>> r = requests.post('https://validator.w3.org/nu/', ... data=open('FILE.html', 'rb').read(), ... params={'out': 'json'}, ... headers={'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.101 Safari/537.36', ... 'Content-Type': 'text/html; charset=UTF-8'}) >>> r.text >>> u'{"messages":[]}\n' >>> r.json() >>> {u'messages': []} More documentation here python requests, W3C Validator API A: This is a very basic html validator based on lxml's HTMLParser. It is not a complete html validator, but does a few basic checks, doesn't require any internet connection, and doesn't require a large library. _html_parser = None def validate_html(html): global _html_parser from lxml import etree from StringIO import StringIO if not _html_parser: _html_parser = etree.HTMLParser(recover = False) return etree.parse(StringIO(html), _html_parser) Note that this will not check for closing tags, so for example, the following will pass: validate_html("<a href='example.com'>foo") > <lxml.etree._ElementTree at 0xb2fd888> However, the following wont: validate_html("<a href='example.com'>foo</a") > XMLSyntaxError: End tag : expected '>', line 1, column 29 A: XHTML is easy, use lxml. 
from lxml import etree from StringIO import StringIO etree.parse(StringIO(html), etree.HTMLParser(recover=False)) HTML is harder, since there's traditionally not been as much interest in validation among the HTML crowd (run StackOverflow itself through a validator, yikes). The easiest solution would be to execute external applications such as nsgmls or OpenJade, and then parse their output. A: I think the most elegant way it to invoke the W3C Validation Service at http://validator.w3.org/ programmatically. Few people know that you do not have to screen-scrape the results in order to get the results, because the service returns non-standard HTTP header paramaters X-W3C-Validator-Recursion: 1 X-W3C-Validator-Status: Invalid (or Valid) X-W3C-Validator-Errors: 6 X-W3C-Validator-Warnings: 0 for indicating the validity and the number of errors and warnings. For instance, the command line curl -I "http://validator.w3.org/check?uri=http%3A%2F%2Fwww.stalsoft.com" returns HTTP/1.1 200 OK Date: Wed, 09 May 2012 15:23:58 GMT Server: Apache/2.2.9 (Debian) mod_python/3.3.1 Python/2.5.2 Content-Language: en X-W3C-Validator-Recursion: 1 X-W3C-Validator-Status: Invalid X-W3C-Validator-Errors: 6 X-W3C-Validator-Warnings: 0 Content-Type: text/html; charset=UTF-8 Vary: Accept-Encoding Connection: close Thus, you can elegantly invoke the W3C Validation Service and extract the results from the HTTP header: # Programmatic XHTML Validations in Python # Martin Hepp and Alex Stolz # [email protected] / [email protected] import urllib import urllib2 URL = "http://validator.w3.org/check?uri=%s" SITE_URL = "http://www.heppnetz.de" # pattern for HEAD request taken from # http://stackoverflow.com/questions/4421170/python-head-request-with-urllib2 request = urllib2.Request(URL % urllib.quote(SITE_URL)) request.get_method = lambda : 'HEAD' response = urllib2.urlopen(request) valid = response.info().getheader('X-W3C-Validator-Status') if valid == "Valid": valid = True else: valid = False errors = int(response.info().getheader('X-W3C-Validator-Errors')) warnings = int(response.info().getheader('X-W3C-Validator-Warnings')) print "Valid markup: %s (Errors: %i, Warnings: %i) " % (valid, errors, warnings) A: You can decide to install the HTML validator locally and create a client to request the validation. Here I had made a program to validate a list of urls in a txt file. I was just checking the HEAD to get the validation status, but if you do a GET you would get the full results. Look at the API of the validator, there are plenty of options for it. import httplib2 import time h = httplib2.Http(".cache") f = open("urllistfile.txt", "r") urllist = f.readlines() f.close() for url in urllist: # wait 10 seconds before the next request - be nice with the validator time.sleep(10) resp= {} url = url.strip() urlrequest = "http://qa-dev.w3.org/wmvs/HEAD/check?doctype=HTML5&uri="+url try: resp, content = h.request(urlrequest, "HEAD") if resp['x-w3c-validator-status'] == "Abort": print url, "FAIL" else: print url, resp['x-w3c-validator-status'], resp['x-w3c-validator-errors'], resp['x-w3c-validator-warnings'] except: pass
{ "language": "en", "url": "https://stackoverflow.com/questions/35538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: Are there any good programs for actionscript/flex that'll count lines of code, number of functions, files, packages,etc Doug McCune had created something that was exactly what I needed (http://dougmccune.com/blog/2007/05/10/analyze-your-actionscript-code-with-this-apollo-app/) but alas - it was for AIR beta 2. I just would like some tool that I can run that would provide some decent metrics...any idea's? A: There is a Code Metrics Explorer in the Enterprise Flex Plug-in below: http://www.deitte.com/archives/2008/09/flex_builder_pl.htm A: Simple tool called LocMetrics can work for .as files too... A: Or find . -name '*.as' -or -name '*.mxml' |Β xargs wc -l Or if you use zsh wc -l **/*.{as,mxml} It won't give you what fraction of those lines are comments, or blank lines, but if you're only interested in how one project differs from another and you've written them both, it's a useful metric. A: Here's a small script I wrote for finding the total numbers of occurrence for different source code elements in ActionScript 3 code (this is written in Python simply because I'm familiar with it, while Perl would probably be better suited for a regex-heavy script like this): #!/usr/bin/python import sys, os, re # might want to improve on the regexes used here codeElements = { 'package':{ 'regex':re.compile('^\s*[(private|public|static)\s]*package\s+([A-Za-z0-9_.]+)\s*', re.MULTILINE), 'numFound':0 }, 'class':{ 'regex':re.compile('^\s*[(private|public|static|dynamic|final|internal|(\[Bindable\]))\s]*class\s', re.MULTILINE), 'numFound':0 }, 'interface':{ 'regex':re.compile('^\s*[(private|public|static|dynamic|final|internal)\s]*interface\s', re.MULTILINE), 'numFound':0 }, 'function':{ 'regex':re.compile('^\s*[(private|public|static|protected|internal|final|override)\s]*function\s', re.MULTILINE), 'numFound':0 }, 'member variable':{ 'regex':re.compile('^\s*[(private|public|static|protected|internal|(\[Bindable\]))\s]*var\s+([A-Za-z0-9_]+)(\s*\\:\s*([A-Za-z0-9_]+))*\s*', re.MULTILINE), 'numFound':0 }, 'todo note':{ 'regex':re.compile('[*\s/][Tt][Oo]\s?[Dd][Oo][\s\-:_/]', re.MULTILINE), 'numFound':0 } } totalLinesOfCode = 0 filePaths = [] for i in range(1,len(sys.argv)): if os.path.exists(sys.argv[i]): filePaths.append(sys.argv[i]) for filePath in filePaths: thisFile = open(filePath,'r') thisFileContents = thisFile.read() thisFile.close() totalLinesOfCode = totalLinesOfCode + len(thisFileContents.splitlines()) for codeElementName in codeElements: matchSubStrList = codeElements[codeElementName]['regex'].findall(thisFileContents) codeElements[codeElementName]['numFound'] = codeElements[codeElementName]['numFound'] + len(matchSubStrList) for codeElementName in codeElements: print str(codeElements[codeElementName]['numFound']) + ' instances of element "'+codeElementName+'" found' print '---' print str(totalLinesOfCode) + ' total lines of code' print '' Pass paths to all of the source code files in your project as arguments for this script to get it to process all of them and report the totals. A command like this: find /path/to/project/root/ -name "*.as" -or -name "*.mxml" | xargs /path/to/script Will output something like this: 1589 instances of element "function" found 147 instances of element "package" found 58 instances of element "todo note" found 13 instances of element "interface" found 2033 instances of element "member variable" found 156 instances of element "class" found --- 40822 total lines of code A: CLOC - http://cloc.sourceforge.net/. 
Even though it is Windows commandline based, it works with AS3.0, has all the features you would want, and is well-documented. Here is the BAT file setup I am using: REM ===================== echo off cls REM set variables set ASDir=C:\root\directory\of\your\AS3\code\ REM run the program REM See docs for different output formats. cloc-1.09.exe --by-file-by-lang --force-lang="ActionScript",as --exclude_dir=.svn --ignored=ignoredFiles.txt --report-file=totalLOC.txt %ASDir% REM show the output totalLOC.txt REM end pause REM ===================== A: To get a rough estimate, you could always run find . -type f -exec cat {} \; | wc -l in the project directory if you're using Mac OS X.
{ "language": "en", "url": "https://stackoverflow.com/questions/35541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Make source with two targets I use this tool called Lazy C++ which breaks a single C++ .lzz file into a .h and .cpp file. I want Makepp to expect both of these files to exist after my rule for building .lzz files, but I'm not sure how to put two targets into a single build line. A: I've never used Makepp personally, but since it's a drop-in replacement for GNU Make, you should be able to do something like: build: foo.h foo.cpp g++ $(CFLAGS) $(LFLAGS) foo.cpp -o foo foo.h foo.cpp: foo.lzz lzz foo.lzz Also not sure about the lzz invocation there, but that should help. You can read more about this at http://theory.uwinnipeg.ca/gnu/make/make_37.html. A: Lzz is amazing! This is just what I was looking for: http://groups.google.com/group/comp.lang.c++/browse_thread/thread/c50de73b70a6a957/f3f47fcdcfb6bc09 Actually all you need is to depend (typically) on foo.o in your link rule, and a pattern rule to call lzz: %.cpp %.h: %.lzz lzz $(input) The rest will fall into place automatically. When compiling any source that includes foo.h, or linking foo.o to a library or program, lzz will first get called automatically. Makepp will also recognize if only the timestamp but not the content of the produced file changed, and ignore that. But it can't hurt to give it less to do, by using the lzz options to suppress recreating an identical file. Regards -- Daniel
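Putting the pattern rule from the answers into a small, complete makefile might look like this. GNU make syntax is shown (recipe lines must be indented with a tab); under makepp you could write $(input) instead of $<, and the file names are placeholders:

# One invocation of lzz is expected to produce both targets of the pattern rule.
%.cpp %.h: %.lzz
	lzz $<

foo: foo.o
	$(CXX) $(LDFLAGS) -o $@ foo.o

foo.o: foo.cpp foo.h
	$(CXX) $(CXXFLAGS) -c -o $@ foo.cpp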
{ "language": "en", "url": "https://stackoverflow.com/questions/35548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Excluding Code Analysis rule in source In a project I'm working on, FxCop shows me lots of (and I mean more than 400) errors on the InitializeComponent() methods generated by the Windows Forms designer. Most of those errors are just the assignment of the Text property of labels. I'd like to suppress those methods in source, so I copied the suppression code generated by FxCop into AssemblyInfo.cs, but it doesn't work. This is the attribute that FxCop copied to the clipboard. [module: SuppressMessage("Microsoft.Globalization", "CA1303:DoNotPassLiteralsAsLocalizedParameters", Scope = "member", Target = "WindowsClient.MainForm.InitializeComponent():System.Void", MessageId = "System.Windows.Forms.Control.set_Text(System.String)")] Does anyone know the correct attribute to suppress these messages? PS: I'm using Visual Studio 2005, C#, FxCop 1.36 beta. A: In FxCop 1.36 there is actually a project option on the "Spelling & Analysis" tab that will suppress analysis for any generated code. If you don't want to turn analysis off for all generated code, you need to make sure that you add a CODE_ANALYSIS symbol to the list of conditional compilation symbols (project properties, Build tab). Without this symbol defined, the SuppressMessage attributes will be removed from the compiled code so FxCop won't see them. The other problem with your SuppressMessage attribute is that you are listing a "Target" of a specific method name (in this case WindowsClient.MainForm.InitializeComponent():System.Void) and listing a specific "Scope". You may want to try removing these; otherwise you should add this SuppressMessage to each instance of the method. You should also upgrade to the RTM version of FxCop 1.36; the beta will not automatically detect the newer version. A: Module level suppression messages need to be pasted into the same file as the code that is raising the FxCop error, before the namespace declaration, or in AssemblyInfo.cs. Additionally, you will need to have CODE_ANALYSIS defined as a conditional compiler symbol (Project > Properties > Build). Once that is in place, do a complete rebuild of the project and the next time you run FxCop the error should be moved to the "Excluded in Source" tab. Also, one small tip, but if you are dealing with a lot of FxCop exclusions it might be useful to wrap a region around them so you can get them out of the way. A: You've probably got the right code, but you also need to add CODE_ANALYSIS as a precompiler defined symbol in the project properties. I think those SuppressMessage attributes are only left in the compiled binaries if CODE_ANALYSIS is defined.
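Putting the answers together, the module-level suppression pasted above the namespace declaration of the designer file would look roughly like this. It assumes CODE_ANALYSIS is defined for the build (the file-level #define shown here is just one way to guarantee that); the namespace and class names are the ones from the question:

#define CODE_ANALYSIS   // or add CODE_ANALYSIS under Project > Properties > Build
using System.Diagnostics.CodeAnalysis;

[module: SuppressMessage("Microsoft.Globalization",
    "CA1303:DoNotPassLiteralsAsLocalizedParameters",
    Scope = "member",
    Target = "WindowsClient.MainForm.InitializeComponent():System.Void",
    MessageId = "System.Windows.Forms.Control.set_Text(System.String)")]

namespace WindowsClient
{
    partial class MainForm
    {
        // the designer-generated InitializeComponent() lives in this file
    }
}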
{ "language": "en", "url": "https://stackoverflow.com/questions/35551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: How far does SQL Server Express Edition scale?
Wikipedia says SQL Server Express Edition is limited to "one processor, 1 GB memory and 4 GB database files". Does anyone have practical experience with how well this scales?
A: It's a regular SQL Server; it just has limits. SharePoint uses SQL Server Express by default, if that gives you any idea. We have our entire office (80+ people) running on that instance.
A: We have used SQL Server Express Edition in some of our smaller applications, maybe 5+ users, and smaller databases. The 4 GB limit is very restrictive in high-transaction environments, and in some cases we have had to migrate our customer to SQL Server Standard Edition.
A: It really comes down to the nature of your database and application. What kind of application(s) are hitting SQL Server? In my experience, it only handles 5-10 users with a heavy read/write application.
A: This question is far too vague to be useful to you or anyone else. Also, Wikipedia is your primary source of info on SQL Server?
The first matrix of the MSDN page for Features Supported by the Editions of SQL Server 2008 is titled "Scalability." The only edition with any features marked "Yes" is Enterprise (you get Partitioning, Data compression, Resource governor, and Partition table parallelism). And it goes down the line from there; Express does not support many of the features designed for "scale."
If your main demand is space, how soon will you exceed 4 GB? If your main demand is high availability and integrity, don't even bother with Express.
"Scalable" is quickly becoming a weasel-/buzz-word, alongside "robust." People use it when they haven't thought hard enough about what they mean.
{ "language": "en", "url": "https://stackoverflow.com/questions/35559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Lazy Loading with a WCF Service Domain Model?
I'm looking to push my domain model into a WCF Service API and wanted to get some thoughts on lazy loading techniques with this type of setup. Any suggestions when taking this approach?
When I implemented this technique and stepped into my app, just before the server returns my list it hits the getter of each property that is supposed to be lazy loaded... thus eager loading. Could you explain this issue or suggest a resolution?
Edit: It appears you can use the XmlIgnore attribute so the property doesn't get looked at during serialization... still reading up on this, though.
A: Don't do lazy loading over a service interface. Define explicit DTOs and consume those as your data contracts in WCF. You can use NHibernate (or other ORMs) to properly fetch the objects you need to construct the DTOs.
A: As for any remoting architecture, you'll want to avoid loading a full object graph "down the wire" in an uncontrolled way (unless you have a trivially small number of objects). The Wikipedia article has the standard techniques pretty much summarised (and in C#, too!). I've used both ghosts and value holders and they work pretty well.
To implement this kind of technique, make sure that you separate concerns strictly. On the server, your service contract implementation classes should be the only bits of the code that work with data contracts. On the client, the service access layer should be the only code that works with the proxies. Layering like this lets you adjust the way that the service is implemented relatively independently of the UI layers calling the service and the business tier that's being called. It also gives you half a chance of unit testing!
A: You could try to use something REST-based (e.g. ADO.NET Data Services) and wrap it transparently into your client code.
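As a rough sketch of the explicit-DTO approach (the type and member names here are invented for illustration and are not tied to any particular ORM):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // A flat data contract carrying exactly what the client needs,
    // rather than exposing the ORM-mapped entity with its lazy proxies.
    [DataContract]
    public class OrderSummaryDto
    {
        [DataMember] public int OrderId { get; set; }
        [DataMember] public string CustomerName { get; set; }
        [DataMember] public decimal Total { get; set; }
    }

    [ServiceContract]
    public interface IOrderService
    {
        // The service decides up front how much of the object graph to
        // fetch; nothing is lazily resolved once serialization starts.
        [OperationContract]
        OrderSummaryDto GetOrderSummary(int orderId);
    }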
{ "language": "en", "url": "https://stackoverflow.com/questions/35560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do I make bash reverse-search work in Terminal.app without it displaying garbled output?
Using Terminal.app on OS X 10.5, you often see the commands get garbled when you do a reverse-search with Bash. Is there some kind of termcap setting or perhaps a bash shopt command that can fix this? It is very annoying.
Steps to reproduce: Open Terminal.app, reverse-search to a longish command. Hit <ctrl>-E once you've found the command. The cursor goes to the end of the line, but the display doesn't update.
I'm guessing this is some kind of problem with the readline library on OS X. It's more of a problem with updating the cursor position after a search than anything else. Basically, ctrl-a and ctrl-e tend to break the search output.
os x terminal failure image http://involution.com/images/osxterminal.png
In the above, the first part of the command should be displayed, and the cursor should be at the end of the line, but it isn't. You literally can't see what you're editing when this happens.
A: You may want to look at this post: bash-prompt-in-os-x-terminal-broken. I had the same problem, and it had to do with the PS1 variable. Let me know if this helps.
A: If the prompt has colors, then this is an acknowledged bug. See bug report msg#00019.
A: I was able to set my TERM to xterm instead of xterm-color, and it solved the problem (export TERM=xterm).
A: I can't reproduce this; hitting either Ctrl+E, Ctrl+A or the arrow keys updates the command line correctly. Are you running 10.5.4? Is it perhaps a bug in earlier versions?
A: I've encountered this bug, and while I don't know how to solve it, you can work around it by pressing <down><up>.
A: Not sure whether this is the problem here, but a very common cause of a messed-up screen in bash (with any terminal emulator, not just Terminal.app) is the window being resized. Bash will read the window size when it starts up, and then assume it hasn't changed. When the window is resized, a signal will be sent to whatever app is currently reading from the console. If this isn't bash (because you're running a text editor at the time, perhaps), then bash won't know about it. The solution in this case is to resize the window again so that bash gets the signal and notices the new size.
A: In the worst case, you could launch the X server (somewhere under Utilities) and launch a real xterm.
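If the culprit is a colored PS1, the standard readline fix is to wrap every non-printing escape sequence in \[ and \] so bash can compute the prompt width correctly; otherwise reverse-search and ctrl-a/ctrl-e redraw at the wrong column. A sketch for ~/.bash_profile (the colors and prompt layout here are arbitrary):

    # Broken: the color codes are counted as printable characters.
    # PS1="\e[0;32m\u@\h:\w\$ \e[0m"

    # Fixed: non-printing sequences wrapped in \[ ... \]
    PS1="\[\e[0;32m\]\u@\h:\w\$ \[\e[0m\]"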
{ "language": "en", "url": "https://stackoverflow.com/questions/35563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Ruby On Rails on Windows with Mongrel
Where is the best tutorial for getting Ruby On Rails working on a Windows box with Mongrel? I'm a complete novice at server management, so the more detailed the better!
A: You can follow this tutorial; it will get you set up so you can actually manage your Mongrel server as a Windows service (start/stop/restart, start on boot, manage programmatically, etc.): How to setup mongrel as a native Windows service. I'm currently using this method to manage an instance of Redmine on a Windows box and it works wonderfully. Good luck!
A: I have found the book "Deploying Rails Applications" very useful, especially the chapter "Deploying on Windows". It gives you step-by-step instructions for deploying a Rails application using Apache and Mongrel.
A: Rubystack is a free Windows Rails installer that we keep up to date; it includes Mongrel support out of the box.
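For orientation, the commands involved usually look something like the sketch below. The service name, paths, and ports are placeholders, and the exact flags come from the mongrel and mongrel_service gems, so check their documentation for your versions:

    gem install mongrel
    gem install mongrel_service

    rem Run the app directly from the Rails application directory:
    mongrel_rails start -e production -p 3000

    rem Or register it as a native Windows service and start it:
    mongrel_rails service::install -N myapp -c C:\apps\myapp -p 4000 -e production
    net start myapp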
{ "language": "en", "url": "https://stackoverflow.com/questions/35564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Why does Python's iter() on a mapping return iterkeys() instead of iteritems()?
It seems like if you want just the keys of a mapping, you should ask for them explicitly; otherwise, iterating over it should give you the whole mapping (i.e., its key-value pairs). Is there a historical reason for this?
A: Check out this thread for a discussion on the reasons behind this behavior (including that Guido likes it, and it's not likely to change).
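For context, a small example of the behavior in question (Python 2.x spelling, to match the iterkeys/iteritems naming above):

    d = {'a': 1, 'b': 2}

    for k in d:                  # iter(d) yields keys, same as d.iterkeys()
        print k

    for k, v in d.iteritems():   # ask explicitly for key-value pairs
        print k, v

    'a' in d                     # membership tests are also key-based -> True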
{ "language": "en", "url": "https://stackoverflow.com/questions/35569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: What makes the Unix file system superior to the Windows file system?
I'll admit that I don't know the inner workings of the Unix operating system, so I was hoping someone could shed some light on this topic.
Why is the Unix file system better than the Windows file system? Would grep work just as well on Windows, or is there something fundamentally different that makes it more powerful on a Unix box?
e.g. I have heard that in a Unix system, the number of files in a given directory will not slow file access, while on Windows direct file access will degrade as the number of files increases in the given folder. True?
Updates: Brad, no such thing as the Unix file system?
A: First, there is no such thing as "the Unix file system". Second, upon what premise does your argument rest? Did you hear someone say it was superior? Perhaps if you offered some source, we could critique the specific argument.
Edit: Okay, according to http://en.wikipedia.org/wiki/Comparison_of_file_systems, NTFS has more green boxes than both UFS1 and UFS2. If green boxes are your measure of "better", then NTFS is "better". Still a stupid question. :-p
A: I think you are a little bit confused. There are no 'Unix' and 'Windows' file systems as such. The *nix family of filesystems includes ext3, ZFS, UFS, etc. Windows primarily has had support for FAT16/32 and its own filesystem, NTFS. However, today Linux systems can read and write NTFS. More filesystems here. I can't tell you why one could be better than the other, though.
A: I'm not at all familiar with the inner workings of the UNIX file systems, as in how the bits and bytes are stored, but really that part is interchangeable (ext3, reiserfs, etc.).
When people say that UNIX file systems are better, they might be saying, "Oh, ext3 stores bits in such a way that corruption happens way less than with NTFS", but they might also be talking about design choices made at the common layer above. They might be referring to how the path of a file does not necessarily correspond to any particular device. For example, if you move your program files to a second disk, you probably have to refer to them as "D:\Program Files", while in UNIX /usr/bin could be a hard drive, a network drive, a CD-ROM, or RAM.
Another possibility is that people are using "file system" to mean the organization of paths. Like, for instance, how Windows generally likes programs in "C:\Program Files\CompanyName\AppName" while a particular UNIX distribution might put most of them in /usr/local/bin. In the latter case, you can access much more of your system readily from the command line with a much smaller PATH variable.
Also, since you mentioned grep: if all the source code for system libraries such as the kernel and libc is stored in /usr/local/src, doing a recursive grep for a particular error message coming from the guts of some system library is much simpler than if things were laid out as /usr/local/library-name/[bin|src|doc|etc]. If you already have an inkling of where you're searching, though, Cygwin grep performs quite well under Windows. In fact, I find that for full-text searching I get better results from grep than from the search facilities built into Windows!
A: One of the fundamental differences in filesystem semantics between Unix and Windows is the idea of inodes. On Windows, a file name is directly attached to the file data. This means that the OS prevents somebody from deleting a file that is currently open.
On some versions of Windows you can rename a file that is currently open, and on some versions you can't.
On Unix, a file name is a pointer to an inode, which is the place the file data is actually stored. This has a couple of implications:
* You can have two different filenames that refer to the same underlying file. This is often called a hard link. There is only one copy of the file data, so changes made through one filename will appear in the other.
* You can delete (also known as unlink) a file that is currently open. All that happens is that the directory entry is removed, but this doesn't affect any other process that might still have the file open. The process with the file open hangs on to the inode rather than to the directory entry. When the process closes the file, the OS deletes the inode, because there are no more directory entries pointing at it and no more processes with the inode open.
This difference is important, but it is unrelated to things like the performance of grep.
A: Well, the *nix filesystems do a far better job of actual file management than FAT16/32 or NTFS. The *nix systems try to prevent the need for defragmentation, whereas Windows does... nothing? Other than that, I don't really know what would make one better than the other.
A: There are differences in how Windows and Unix operating systems expose the disk drives to users and how drive space is partitioned.
The biggest difference between the two operating systems is that Unix essentially treats all of the physical drives as one logical drive. (This isn't exactly how it works, but it should give a good enough picture.) This allows a much simpler file system from the user's perspective, as there are no drive letters to deal with. I have a folder called /usr/bin that could span multiple physical drives. If I need to expand that partition, I can do so by adding a new drive, remapping the folder, and moving the files. (Again, somewhat simplified, but it gets the point across.)
The other difference is that when you format a drive, a certain amount is set aside (by default; as an admin you can change the size to 0 if you want) for use by the "root" account (the admin account), which allows an admin to almost always be able to log in to the machine even when a user has filled the disk and is receiving "out of disk space" messages.
A: One simple answer: Windows is proprietary, which means no one outside Microsoft can see its code, while Unix/Linux are open-source. Because they are open-source, many bright minds have contributed to the filesystems, making them robust and efficient, which is why effective commands like grep come to our rescue when needed.
A: I don't know enough about the guts of the file systems to answer the first question, except that when I read the first descriptions of NTFS it sounded an awful lot like the Berkeley Fast File System. As for the second, there are plenty of greps for Windows. When I had to use Windows in the past, I always installed Cygwin first thing.
A: The answer turns out to have very little to do with the filesystem and everything to do with the filesystem access drivers. In particular, the implementation of NTFS on Windows is very slow compared to ext2/ext3. Also, on Windows you get "can't delete file in use" even though NTFS itself should be able to support it.
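A quick shell illustration of the two inode behaviors listed above (any Unix-like system; the filenames are arbitrary):

    $ echo "hello" > original.txt
    $ ln original.txt hardlink.txt       # second name, same inode
    $ ls -li original.txt hardlink.txt   # both entries show the same inode number

    $ tail -f original.txt &             # keep the file open in another process
    $ rm original.txt hardlink.txt       # remove both directory entries
    # tail still holds the inode open; the disk space is freed only when tail exits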
{ "language": "en", "url": "https://stackoverflow.com/questions/35599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Visual Studio 2008 Training
I've been put in charge of coming up with a training itinerary for my team at work for a migration from C++ to Visual Studio 2008 with C#. The actual language switch I'm not too worried about; it's the learning curve of Visual Studio. What does everyone think would be the best way to set up a training course? I was thinking of having a list of different features of the IDE and having the team members create pages on our internal wiki about them, but I'm not sure if that would be hands-on enough to be useful.
A: We are a C++ shop that is moving to C# for UI work (our image processing and 3D graphics code will stay in native C++). I found C# for C++ Developers a very quick and handy introduction to the language. Our team has been using Visual Studio for a while, whereas I came from an SVN/SlickEdit/CMake/Ant kind of environment in my last job. I found it very helpful to just dive in and start working, but as I figured things out, I documented them on our internal wiki.
It's been about 6 months, and not only am I very comfortable with Visual Studio, but the rest of the team has had me streamlining our build process and converting our build system to do out-of-place builds from Visual Studio (which I document on the wiki, of course). So I'd say do both - dive in and do real work, but document what you learn - which not only helps others, it also reinforces it in your mind.
A: I think you're right to worry that the wiki thing wouldn't be hands-on enough. How about using it as an opportunity to refresh your process too, and doing a mini-project "bootcamp" where you test-drive the new language and IDE features along with some new development practices? Actually create a piece of software over the course of a week or so.
A: MS has a Visual Studio training kit. I think the best way is to teach the basics and then start using it in projects. Let them learn the features they need as they are using it on a project.
A: I found Pluralsight a really good way to start training up a team. Learnvisualstudio.net is pretty good too.
A: I purchased the on-demand training from Pluralsight about 4 months ago, and IMHO it is the best training out there.
{ "language": "en", "url": "https://stackoverflow.com/questions/35614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is a good online resource for CSS 'design patterns'?
Can anyone out there recommend a good online resource for CSS 'design patterns'? I know design patterns in a software context usually refer to OO-based design patterns, but I mean design patterns in the broader sense of the term: i.e. common, clean solutions to common problems / tasks.
An example of such a resource would be this list of table designs, which gives you all you really need to know about how to make tables look nice using a set of CSS techniques. Other examples of common problems which could have nice set solutions would be things like rounded corners on divs, highly usable form layouts, etc.
A: I refer to A List Apart articles all the time for those sorts of things. They do a lot of trial-and-error research to come up with really creative ways to handle those common CSS problems in the cleanest, most portable way possible.
A: The Floatutorial is a great starting point for learning the important CSS property "float" and how to use it to lay out content using some common patterns, including two-column and three-column liquid layouts. Floatutorial takes you through the basics of floating elements such as images, drop caps, next and back buttons, image galleries, inline lists and multi-column layouts.
A: Some websites that address web design patterns are listed below. They do not specifically provide the HTML and/or CSS needed to achieve the desired results, but they do provide examples of live sites that you can view source on (or, even better, inspect with Firebug).
UI-Patterns - This is probably the best of the bunch. It breaks things down into categories that cover the breadth of web page design tasks. You'll find categories such as tag clouds, live preview and user registration, among many others. This is a really comprehensive resource that is well organized. It explains each pattern and provides plenty of examples.
Pattern Tap - Similar to UI-Patterns, although currently not as comprehensive. It takes a more social approach to collating design patterns by allowing users to create their own categories ("user sets") and populate them with their own selection of sites.
Yahoo Design Pattern Library - Unlike the other two, this one doesn't provide many examples of real sites. It is well organized and quite comprehensive.
Elements of Design - This is a blog showcasing various elements of web design. It doesn't discuss the patterns, but it is good as a quick source of inspiration, or as a means to start your own analysis.
A: The already mentioned A List Apart is really good. Another site I've used since I started web development is SitePoint.com. Here is their CSS Reference. If you want a good CSS book, theirs is one of my favorites.
A: The nearest thing to a "design pattern" in CSS is common layouts. The best tool for taking advantage of common layouts, column widths, etc. is the 960 Grid System, at 960.gs. Watch this screencast for a brief intro. It saves a ton of time, and it helps you apply all the common layout patterns with minimal code: http://net.tutsplus.com/videos/screencasts/a-detailed-look-at-the-960-css-framework/ All you have to do is apply the proper classes and do a little arithmetic to make sure all the column widths add up.
The one book that I recommend the most for CSS is CSS Mastery by Andy Budd (cssmastery.com). It is somewhat small, but it has helped me more than any other CSS book.
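As a tiny sample of the kind of reusable recipe these resources catalogue, here is a generic zebra-striped table pattern; the class names are arbitrary and the colors are only placeholders:

    /* A minimal "nice table" pattern: collapsed borders, padded cells,
       a highlighted header row, and striped data rows. */
    table.data {
      border-collapse: collapse;
      width: 100%;
    }
    table.data th,
    table.data td {
      border: 1px solid #ccc;
      padding: 0.4em 0.8em;
      text-align: left;
    }
    table.data th {
      background: #eee;
    }
    table.data tr.alt td {
      background: #f7f7f7;  /* add class="alt" to every other row */
    }

In 2008-era browsers the alternating class is typically added server-side or with JavaScript; newer browsers can do the same with tr:nth-child(even).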
{ "language": "en", "url": "https://stackoverflow.com/questions/35615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Ruby "is" equivalent
Is there a Ruby equivalent for Python's "is"? It tests whether two objects are identical (i.e. have the same memory location).
A: You could also use __id__. This gives you the object's internal ID number, which is always unique. To check if two objects are the same, try a.__id__ == b.__id__. This is how Ruby's standard library does it as far as I can tell (see group_by and others).
A: Use a.equal? b
http://www.ruby-doc.org/core/classes/Object.html
Unlike ==, the equal? method should never be overridden by subclasses: it is used to determine object identity (that is, a.equal?(b) iff a is the same object as b).
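A quick illustration of the difference between value equality and identity in plain Ruby (nothing assumed beyond the core classes):

    a = "hello"
    b = "hello"
    c = a

    a == b                       # => true: same value
    a.equal?(b)                  # => false: different objects
    a.equal?(c)                  # => true: same object, like Python's "is"

    a.object_id == c.object_id   # => true; object_id is the usual name for __id__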
{ "language": "en", "url": "https://stackoverflow.com/questions/35634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: ASP.NET MVC Route Help, 2 routes, 1 with a category url structure and the other for content page
I need some help with ASP.NET MVC routes. I need to create 2 routes for a CMS-type application. One route will be for category-level URLs, and the other route will be for the actual page content.
* Categories, which always end in a '/':
www.example.com/category/
www.example.com/category/subcategory/
www.example.com/category/subcategory/subsubcategory/
* Content pages, which don't end in a '/' and can only be at the root level or after one subcategory page:
www.example.com/root-level-page
www.example.com/category/some-page-name
Ideas?
A: Routing does not distinguish between URLs ending with a / and URLs that don't end in /.
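For what it's worth, a rough sketch of how such routes might be registered in Global.asax is shown below. The controller and action names are made up for the example, and, since routing ignores the trailing slash, a one-segment URL like /category/ would match the RootPage route before the Category route ever sees it; in practice the page routes need a custom constraint (for example, one that checks the slug against the list of known pages) or the URL shapes need to differ in more than the slash:

    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // Content pages at the root: /root-level-page
        routes.MapRoute(
            "RootPage",
            "{page}",
            new { controller = "Content", action = "Page" });

        // Content pages under one category: /category/some-page-name
        routes.MapRoute(
            "CategoryPage",
            "{category}/{page}",
            new { controller = "Content", action = "Page" });

        // Categories at any depth: /category/, /category/subcategory/, ...
        routes.MapRoute(
            "Category",
            "{*categoryPath}",
            new { controller = "Category", action = "Index" });
    }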
{ "language": "en", "url": "https://stackoverflow.com/questions/35637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Using Office to programmatically convert documents?
I'm interested in using Office 2007 to convert between the pre-2007 binary formats (.doc, .xls, .ppt) and the new Office Open XML formats (.docx, .xlsx, .pptx). How would I do this? I'd like to write a simple command-line app that takes in two filenames (input and output) and perhaps the source and/or destination types, and performs the conversion.
A: Microsoft has a page which gives several examples of writing scripts to "drive" MS Word. One such example shows how to convert from a Word document to HTML. By changing the last parameter to any of the values listed here, you can get the output in different formats.
A: The easiest way would be to use Automation through the Microsoft.Office.Interop libraries. You can create an instance of a Word application, for example. There are methods attached to the Application object that will allow you to open and close documents, plus pretty much anything else you can accomplish in VBA by recording a macro. You could also just write the VBA code in your Office application to do roughly the same thing. Both approaches are equally valid, depending on your comfort programming in C#, VB.NET or VBA.
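As a rough sketch of the Automation approach for the Word case only (this assumes the Office primary interop assemblies are referenced; the compact calls rely on the optional-argument support added in C# 4 - with the VS2005/2008 compilers every omitted argument has to be passed explicitly as ref Missing.Value - and a real tool would add error handling):

    using Word = Microsoft.Office.Interop.Word;

    class DocToDocx
    {
        static void Main(string[] args)
        {
            // args[0] = input .doc, args[1] = output .docx (placeholders)
            var word = new Word.Application { Visible = false };
            try
            {
                Word.Document doc = word.Documents.Open(args[0]);
                // wdFormatXMLDocument is the Open XML (.docx) format;
                // other WdSaveFormat values cover .doc, HTML, and so on.
                doc.SaveAs(args[1], Word.WdSaveFormat.wdFormatXMLDocument);
                doc.Close();
            }
            finally
            {
                word.Quit();
            }
        }
    }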
{ "language": "en", "url": "https://stackoverflow.com/questions/35639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }