Q: How should anonymous types be used in C#? I've seen lots of descriptions of how anonymous types work, but I'm not sure how they're really useful. What are some scenarios that anonymous types can be used to address in a well-designed program? A: From LINQ in Action (page 76, section 2.6.3): ... anonymous types [are] a great tool for quick and simple temporary results. We don't need to declare classes to hold temporary results thanks to anonymous types. Basically they're useful for holding information temporarily in the local scope. Anything more requires the use of reflection and can become quite a problem. The example they give in the above-quoted book writes to the console the id, name, and amount of memory taken up by each running process. They create an anonymous type, add it to a list (all in one statement) and then use ObjectDumper to output it. The code no longer needs a separately declared class to hold the id, name and memory used; it's all declared implicitly, bringing the line count down to 4: var pl = new List<Object>(); foreach(var p in Process.GetProcesses()) pl.Add(new {p.Id, p.ProcessName, Memory=p.WorkingSet64}); ObjectDumper.Write(pl); A: The most popular use of anonymous types is specifying projections in a LINQ to SQL query. This query from x in db.Table1 select new {x.Column1, Alias2=x.Column2} will be converted to this SQL: SELECT Column1, Column2 AS Alias2 FROM Table1 With anonymous types, you can create ad hoc projections without defining the type for them beforehand. The compiler will define the type for you. A: Anonymous types have nothing to do with the design of systems, or even design at the class level. They're a tool for developers to use when coding. I don't even treat anonymous types as types per se. I use them mainly as method-level anonymous tuples. 
If I query the database and then manipulate the results, I would rather create an anonymous type and use that than declare a whole new type that will never be used or known outside the scope of my method. For instance: var query = from item in database.Items // ... select new { Id = item.Id, Name = item.Name }; return query.ToDictionary(item => item.Id, item => item.Name); Nobody cares about `a, the anonymous type. It's there so you don't have to declare another class. A: When you create types for 'use and throw' purposes. This seems to have come about due to LINQ. It's a way to create structures with fields on the fly for a LINQ query, returning a struct/type with only the specified fields. If not for this, you'd have to declare a .NET type for each unique combination of fields you wish to retrieve. A: Use them with LINQ. A: It is important to know that LINQ doesn't force you to use anonymous types. You can also write normal object constructions after select. var query = from item in database.Items // ... select new Person(item.id, item.Name) This saves you from ugly reflection programming. A: @Wouter: var query = from item in database.Items select new Person { ID = item.id, NAME = item.Name }; where ID and NAME are real properties of your Person class.
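For readers coming from other languages, the projection-to-dictionary idiom above can be sketched in Python, where a throwaway dict plays the role of the C# anonymous type. The `Item` records here are hypothetical, standing in for the database rows in the example; this is an analogy, not the original LINQ query.

```python
from collections import namedtuple

# Hypothetical stand-in for the database rows in the example above.
Item = namedtuple("Item", ["id", "name", "price"])
items = [Item(1, "Exam guide", 9.99), Item(2, "Flash cards", 4.50)]

# Project only the fields we need into throwaway per-row records,
# then collapse them into a dictionary, mirroring query.ToDictionary(...).
projection = [{"Id": item.id, "Name": item.name} for item in items]
result = {row["Id"]: row["Name"] for row in projection}
print(result)  # {1: 'Exam guide', 2: 'Flash cards'}
```

As in the C# version, nothing outside this scope ever needs to know the shape of the intermediate records.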
{ "language": "en", "url": "https://stackoverflow.com/questions/48668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Are there any tools out there to compare the structure of 2 web pages? I receive HTML pages from our creative team, and then use those to build aspx pages. One challenge I frequently face is getting the HTML I spit out to match theirs exactly. I almost always end up screwing up the nesting of <div>s between my page and the master pages. Does anyone know of a tool that will help in this situation -- something that will compare 2 pages and output the structural differences? I can't use a standard diff tool, because IDs change from what I receive from creative, text replaces lorem ipsum, etc.. A: You can use HTML Tidy to convert the HTML to well-formed XML so you can use XML Diff, as Gulzar suggested. tidy -asxml index.html A: If your output is XML-compliant HTML -- or you can at least translate your HTML into XML compliance -- you could then use XSL to strip out the content and id attributes. Apply the same transformation to their HTML, and then compare. A: I was thinking along the lines of XML Diff, since HTML can be represented as an XML document. The challenge with HTML is that it might not always be well formed. Found one more here showing how to use the XMLDiff class. A: A copy of my own answer from here. What about DaisyDiff (Java and PHP versions available)? The following features are really nice: * *Works with badly formed HTML that can be found "in the wild". *The diffing is more specialized in HTML than generic XML tree differs. Changing part of a text node will not cause the entire node to be changed. *In addition to the default visual diff, HTML source can be diffed coherently. *Provides easy-to-understand descriptions of the changes. *The default GUI allows easy browsing of the modifications through keyboard shortcuts and links. A: WinMerge is a good visual diff program
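The "strip ids and text, then diff" idea above can be sketched with only the standard library: extract the tag nesting from each page, then run a normal text diff over the structure. This is a minimal sketch, not one of the tools mentioned; the two HTML snippets are made up, and void elements like <br> would need extra handling.

```python
import difflib
from html.parser import HTMLParser

class StructureExtractor(HTMLParser):
    """Records only the tag nesting, ignoring attributes (ids) and text."""
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.lines = []

    def handle_starttag(self, tag, attrs):
        self.lines.append("  " * self.depth + tag)
        self.depth += 1  # note: void elements like <br> would need special-casing

    def handle_endtag(self, tag):
        self.depth = max(0, self.depth - 1)

def structure(html):
    parser = StructureExtractor()
    parser.feed(html)
    return parser.lines

def structural_diff(html_a, html_b):
    # Diff the indented tag outlines; ids and text never appear in them.
    return list(difflib.unified_diff(structure(html_a), structure(html_b), lineterm=""))

# The creative team's markup vs. ours: ids and text differ, and so does nesting.
theirs = "<div id='x'><div><p>lorem ipsum</p></div></div>"
ours = "<div id='y'><p>real text</p></div>"
for line in structural_diff(theirs, ours):
    print(line)
```

Only the nesting difference (the extra inner div) shows up in the diff; the differing ids and text are invisible to it.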
{ "language": "en", "url": "https://stackoverflow.com/questions/48669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Copy a file without using the Windows file cache Anybody know of a way to copy a file from path A to path B while suppressing the Windows file system cache? Typical use is copying a large file from a USB drive or a server to your local machine. Windows seems to swap everything out if the file is really big, e.g. 2GiB. Prefer an example in C#, but I'm guessing this would be a Win32 call of some sort if possible. A: In C# I have found something like this to work; it can be changed to copy directly to the destination file: public static byte[] ReadAllBytesUnbuffered(string filePath) { const FileOptions FileFlagNoBuffering = (FileOptions)0x20000000; var fileInfo = new FileInfo(filePath); long fileLength = fileInfo.Length; int bufferSize = (int)Math.Min(fileLength, int.MaxValue / 2); bufferSize += ((bufferSize + 1023) & ~1023) - bufferSize; using (var stream = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.None, bufferSize, FileFlagNoBuffering | FileOptions.SequentialScan)) { long length = stream.Length; if (length > 0x7fffffffL) { throw new IOException("File too long (over 2 GB)"); } int offset = 0; int count = (int)length; var buffer = new byte[count]; while (count > 0) { int bytesRead = stream.Read(buffer, offset, count); if (bytesRead == 0) { throw new EndOfStreamException("Read beyond end of file (EOF)"); } offset += bytesRead; count -= bytesRead; } return buffer; } } A: Even more important, there are FILE_FLAG_WRITE_THROUGH and FILE_FLAG_NO_BUFFERING. MSDN has a nice article on them both: http://support.microsoft.com/kb/99794 A: I am not sure if this helps, but take a look at Increased Performance Using FILE_FLAG_SEQUENTIAL_SCAN. SUMMARY There is a flag for CreateFile() called FILE_FLAG_SEQUENTIAL_SCAN which will direct the Cache Manager to access the file sequentially. Anyone reading potentially large files with sequential access can specify this flag for increased performance. 
This flag is useful if you are reading files that are "mostly" sequential, but you occasionally skip over small ranges of bytes. A: If you don't mind using a tool, ESEUTIL worked great for me. You can check out this blog entry comparing buffered and non-buffered IO functions and where to get ESEUTIL. Copying some text from the TechNet blog: So looking at the definition of buffered I/O above, we can see where the perceived performance problems lie - in the file system cache overhead. Unbuffered I/O (or a raw file copy) is preferred when attempting to copy a large file from one location to another when we do not intend to access the source file after the copy is complete. This will avoid the file system cache overhead and prevent the file system cache from being effectively flushed by the large file data. Many applications accomplish this by calling CreateFile() to create an empty destination file, then using the ReadFile() and WriteFile() functions to transfer the data. CreateFile() - The CreateFile function creates or opens a file, file stream, directory, physical disk, volume, console buffer, tape drive, communications resource, mailslot, or named pipe. The function returns a handle that can be used to access an object. ReadFile() - The ReadFile function reads data from a file, and starts at the position that the file pointer indicates. You can use this function for both synchronous and asynchronous operations. WriteFile() - The WriteFile function writes data to a file at the position specified by the file pointer. This function is designed for both synchronous and asynchronous operation. For copying files around the network that are very large, my copy utility of choice is ESEUTIL which is one of the database utilities provided with Exchange. A: Eseutil is a correct answer; also, since Win7 / 2008 R2, you can use the /j switch in Xcopy, which has the same effect. 
A: I understand this question was asked 11 years ago; nowadays there is robocopy, which is a kind of replacement for xcopy. You need to check the /J option: /J :: copy using unbuffered I/O (recommended for large files)
{ "language": "en", "url": "https://stackoverflow.com/questions/48679", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Winforms c# - Set focus to first child control of TabPage Say I have a Textbox nested within a TabControl. When the form loads, I would like to focus on that Textbox (by default the focus is set to the TabControl). Simply calling textbox1.Focus() in the Load event of the form does not appear to work. I have been able to focus it by doing the following: private void frmMainLoad(object sender, EventArgs e) { foreach (TabPage tab in this.tabControl1.TabPages) { this.tabControl1.SelectedTab = tab; } } My question is: Is there a more elegant way to do this? A: The following is the solution: private void frmMainLoad(object sender, EventArgs e) { ActiveControl = textBox1; } The better question would however be why... I'm not entirely sure what the answer to that one is. Edit: I suspect it is something to do with the fact that both the form and the TabControl are containers, but I'm not sure. A: Try using textbox1.Select() instead of textbox1.Focus(). This has helped me a few times. A: Try putting it in the Form_Shown() event. Because the control is in a container, putting it in Form_Load or even the Form() constructor won't work. A: You just need to add Control.Select() for your control to this code. I have used this to set focus on controls during validation when there are errors. private void ShowControlTab(Control ControlToShow) { if (!TabSelected) { if (ControlToShow.Parent != null) { if (ControlToShow.Parent.GetType() == typeof(TabPage)) { TabPage Tab = (TabPage)ControlToShow.Parent; if (WOTabs.TabPages.Contains(Tab)) { WOTabs.SelectedTab = Tab; TabSelected = true; return; } } ShowControlTab(ControlToShow.Parent); } } } A: I had a user control within another user control. textbox1.Select() worked for me but textbox1.Focus() did not. You can also try setting TabStop to false, calling textbox1.Focus(), then setting TabStop back to true. 
A: private void ChildForm1_Load(object sender, EventArgs e) { ActiveControl = txt_fname; } I use this code; it works fine on the standard WinForms tab control or the DotNetBar SuperTab control
{ "language": "en", "url": "https://stackoverflow.com/questions/48680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: How to save persistent objects databound to a DataLayoutControl (DevExpress tools)? I have a small form displaying the DataLayoutControl component. If I use a GridControl the objects get saved. If I use the DataLayoutControl (which shows them individually) they do not get saved after they are changed. The underlying object is changed after the user interface edits, but doesn't get saved. How can I enable this? PS: I have tried UnitOfWork.CommitChanges (I have one UoW going through the whole architecture) to no avail. A: You should have a Session and an XPCollection on the form where the DataLayoutControl is. You should hook the XPCollection up with the Session. Select the right class for the XPCollection and maybe add some criteria that make the XPCollection return zero records. Hook the XPCollection to the DataLayoutControl. Then you should provide a constructor with a parameter: the Oid of the object you want to edit. Inside the constructor you should use the criteria to make the XPCollection contain only that object. Make sure you call Session.Save() in your Save button or menu item.
{ "language": "en", "url": "https://stackoverflow.com/questions/48688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to maintain Hibernate cache consistency running two Java applications? Our design has one JVM that is a JBoss webapp (read/write) that is used to maintain the data via Hibernate (using JPA) to the DB. The model has 10-15 persistent classes with 3-5 levels of depth in the relationships. We then have a separate JVM that is the server using this data. As it is running continuously we just have one long DB session (read only). There is currently no intra-JVM cache involved - so we manually signal one JVM from the other. Now when the webapp changes some data, it signals the server to reload the changed data. What we have found is that we need to tell Hibernate to purge the data and then reload it. Just doing a fetch/merge with the DB does not do the job - mainly in respect of the objects several layers down the hierarchy. Any thoughts on whether there is anything fundamentally wrong with this design, or if anyone is doing this and has had better luck working with Hibernate on the reloads? Thanks, Chris A: Chris, I'm a little confused about your circumstances. If I understand correctly, you have both a web app (read/write) and a standalone application (read-only?) using Hibernate to access a shared database. The changes you make with the web app aren't visible to the standalone app. Is that right? If so, have you considered using a different second-level cache implementation? I'm wondering if you might be able to use a clustered cache that is shared by both the web application and the standalone application. I believe that SwarmCache, which is integrated with Hibernate, will allow this, but I haven't tried it myself. In general, though, you should know that the contents of a given cache will never be aware of activity by another application (that's why I suggest having both apps share a cache). Good luck! A: From my point of view, you should change your underlying Hibernate cache to one that supports clustered mode. 
It could be JBoss Cache or SwarmCache. The first one has better support for data synchronization (replication and invalidation) and also supports JTA. Then you will be able to configure cache synchronization between the webapp and the server. Also look at the isolation level if you use JBoss Cache. I believe you should use READ_COMMITTED mode if you want to get new data on the server from the same session. A: A Hibernate session loads all data it reads from the DB into what they call the first-level cache. Once a row is loaded from the DB, any subsequent fetches for a row with the same PK will return the data from this cache. Furthermore, Hibernate guarantees reference equality for objects with the same PK in a single Session. From what I understand, your read-only server application never closes its Hibernate session. So when the DB gets updated by the read-write application, the Session on the read-only server is unaware of the change. Effectively, your read-only application is loading an in-memory copy of the database and using that copy, which gets stale in due course. The simplest and best course of action I can suggest is to close and open Sessions as needed. This sidesteps the whole problem. Hibernate Sessions are intended to be a window for a short-lived interaction with the DB. I agree that there is a performance gain by not reloading the object graph again and again; but you need to measure it and convince yourself that it is worth the pain. Another option is to close and reopen the Session periodically. This ensures that the read-only application works with data not older than a given time interval. But there definitely is a window where the read-only application works with stale data (although the design guarantees that it gets the up-to-date data eventually). This might be permissible in many applications - you need to evaluate your situation. The third option is to use a second-level cache implementation, and use short-lived Sessions. 
There are various caching packages that work with Hibernate, each with relative merits and demerits. A: The most used practice is to have a Container-Managed Entity Manager so that two or more applications in the same container (i.e. GlassFish, Tomcat, WebSphere) can share the same caches. But if you don't use an application container, because you use Play! for instance, then I would build some web services in the primary application to read/write consistently in the cache. I think using stale data is an open door for disaster. Just like Singletons become Multitons, read-only applications often end up writing sometimes. Belt and braces :)
{ "language": "en", "url": "https://stackoverflow.com/questions/48733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Finding the phone numbers in 50,000 HTML pages How do you find the phone numbers in 50,000 HTML pages? Jeff Atwood posted 5 Questions for programmers applying for jobs: In an effort to make life simpler for phone screeners, I've put together this list of Five Essential Questions that you need to ask during an SDE screen. They won't guarantee that your candidate will be great, but they will help eliminate a huge number of candidates who are slipping through our process today. 1) Coding The candidate has to write some simple code, with correct syntax, in C, C++, or Java. 2) OO design The candidate has to define basic OO concepts, and come up with classes to model a simple problem. 3) Scripting and regexes The candidate has to describe how to find the phone numbers in 50,000 HTML pages. 4) Data structures The candidate has to demonstrate basic knowledge of the most common data structures. 5) Bits and bytes The candidate has to answer simple questions about bits, bytes, and binary numbers. Please understand: what I'm looking for here is a total vacuum in one of these areas. It's OK if they struggle a little and then figure it out. It's OK if they need some minor hints or prompting. I don't mind if they're rusty or slow. What you're looking for is candidates who are utterly clueless, or horribly confused, about the area in question. >>> The Entirety of Jeff's Original Post <<< Note: Steve Yegge originally posed the question. A: Made this in Java. The regex was borrowed from this forum. 
final String regex = "[\\s](\\({0,1}\\d{3}\\){0,1}" + "[- \\.]\\d{3}[- \\.]\\d{4})|" + "(\\+\\d{2}-\\d{2,4}-\\d{3,4}-\\d{3,4})"; final Pattern phonePattern = Pattern.compile(regex); /* The result set */ Set<File> files = new HashSet<File>(); File dir = new File("/initDirPath"); if (!dir.isDirectory()) return; for (File file : dir.listFiles()) { if (file.isDirectory()) continue; BufferedReader reader = new BufferedReader(new FileReader(file)); String line; boolean found = false; while ((line = reader.readLine()) != null && !found) { if (found = phonePattern.matcher(line).find()) { files.add(file); } } reader.close(); } for (File file : files) { System.out.println(file.getAbsolutePath()); } Performed some tests and it went OK! :) Remember I'm not trying to use the best design here. Just implemented the algorithm for that. A: Here is an improved regex pattern \(?\d{3}\)?[-\s\.]?\d{3}[-\s\.]?\d{4} It is able to identify several number formats * *xxx.xxx.xxxx *xxx.xxxxxxx *xxx-xxx-xxxx *xxxxxxxxxx *(xxx) xxx xxxx *(xxx) xxx-xxxx *(xxx)xxx-xxxx A: egrep "(([0-9]{1,2}.)?[0-9]{3}.[0-9]{3}.[0-9]{4})" . -R --include='*.html' A: egrep '\(?\d{3}\)?[-\s.]?\d{3}[-.]\d{4}' *.html A: Borrowing 2 things from the C# answer from sieben, here's a little F# snippet that will do the job. 
All it's missing is a way to call processDirectory, which is left out intentionally :) open System open System.IO open System.Text.RegularExpressions let rgx = Regex(@"(\({0,1}\d{3}\){0,1}[- \.]\d{3}[- \.]\d{4})|(\+\d{2}-\d{2,4}-\d{3,4}-\d{3,4})", RegexOptions.Compiled) let processFile contents = contents |> rgx.Matches |> Seq.cast |> Seq.map(fun m -> m.Value) let processDirectory path = Directory.GetFiles(path, "*.html", SearchOption.AllDirectories) |> Seq.map(File.ReadAllText >> processFile) |> Seq.concat A: Perl Solution By: "MH" via codinghorror.com on September 5, 2008 07:29 AM #!/usr/bin/perl while (<*.html>) { my $filename = $_; my @data = <$filename>; # Loop once through with simple search while (@data) { if (/\(?(\d\d\d)\)?[ -]?(\d\d\d)-?(\d\d\d\d)/) { push( @files, $filename ); next; } } # None found, strip html $text = ""; $text .= $_ while (@data); $text =~ s#<[^>]+>##gxs; # Strip line breaks $text =~ s#\n|\r##gxs; # Check for occurrence. if ( $text =~ /\(?(\d\d\d)\)?[ -]?(\d\d\d)-?(\d\d\d\d)/ ) { push( @files, $filename ); next; } } # Print out result print join( '\n', @files ); A: I love doing these little problems; can't help myself. Not sure if it was worth doing though, since it's very similar to the Java answer. private readonly Regex phoneNumExp = new Regex(@"(\({0,1}\d{3}\){0,1}[- \.]\d{3}[- \.]\d{4})|(\+\d{2}-\d{2,4}-\d{3,4}-\d{3,4})"); public HashSet<string> Search(string dir) { var numbers = new HashSet<string>(); string[] files = Directory.GetFiles(dir, "*.html", SearchOption.AllDirectories); foreach (string file in files) { using (var sr = new StreamReader(file)) { string line; while ((line = sr.ReadLine()) != null) { var match = phoneNumExp.Match(line); if (match.Success) { numbers.Add(match.Value); } } } } return numbers; } A: Here's why phone interview coding questions don't work: phone screener: how do you find the phone numbers in 50,000 HTML pages? 
candidate: hang on one second (covers phone) hey (roommate/friend/etc who's super good at programming), how do you find the phone numbers in 50,000 HTML pages? Save the coding questions for early in the in-person interview, and make the interview questions more personal, i.e. "I'd like details about the last time you solved a problem using code". That's a question that will beg follow-ups to their details and it's a lot harder to get someone else to answer it for you without sounding weird over the phone.
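Interview debate aside, the candidate patterns above are easy to sanity-check. Here is a minimal Python sketch exercising the "improved" regex quoted in one of the answers; the sample strings are made up, and scanning the actual 50,000 files would just wrap this in an os.walk loop over *.html files.

```python
import re

# The "improved" pattern quoted in one of the answers above, copied verbatim.
PHONE_RE = re.compile(r"\(?\d{3}\)?[-\s\.]?\d{3}[-\s\.]?\d{4}")

# Made-up sample lines covering several of the formats the answer claims to handle.
samples = [
    "call 555.123.4567 now",
    "office: (555) 123 4567",
    "fax (555)123-4567",
    "raw 5551234567",
]

found = [PHONE_RE.search(line).group() for line in samples]
for number in found:
    print(number)
```

Running this prints each extracted number, confirming the pattern handles dotted, spaced, parenthesized, and bare ten-digit forms.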
{ "language": "en", "url": "https://stackoverflow.com/questions/48744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: When should a method be static? In addition, are there any performance advantages to static methods over instance methods? I came across the following recently: http://www.cafeaulait.org/course/week4/22.html : When should a method be static? * *Neither reads from nor writes to instance fields *Independent of the state of the object *Mathematical methods that accept arguments, apply an algorithm to those arguments, and return a value *Factory methods that serve in lieu of constructors I would be very interested in the feedback of the Stack Overflow community on this. A: Make methods static when they are not part of the instance. Don't sweat the micro-optimisations. You might find you have lots of private methods that could be static but that you always call from instance methods (or each other). In that case it doesn't really matter that much. However, if you want to actually be able to test your code, and perhaps use it from elsewhere, you might want to consider making those static methods in a different, non-instantiable class. A: Whether or not a method is static is more of a design consideration than one of efficiency. A static method belongs to a class, whereas a non-static method belongs to an object. If you had a Math class, you might have a few static methods to deal with addition and subtraction because these are concepts associated with Math. However, if you had a Car class, you might have a few non-static methods to change gears and steer, because those are associated with a specific car, and not the concept of cars in general. A: Just remember that whenever you are writing a static method, you are writing an inflexible method that cannot have its behavior modified very easily. You are writing procedural code, so if it makes sense to be procedural, then do it. If not, it should probably be an instance method. This idea is taken from an article by Steve Yegge, which I think is an interesting and useful read. 
A: Another problem with static methods is that it is quite painful to write unit tests for them - in Java, at least. You cannot mock a static method in any way. There is a post on the Google testing blog about this issue. My rule of thumb is to write static methods only when they have no external dependencies (like database access, reading files, sending emails and so on) to keep them as simple as possible. A: Performance-wise, a C++ static method can be slightly faster than a non-virtual instance method, as there's no need for a 'this' pointer to get passed to the method. In turn, both will be faster than virtual methods as there's no VMT lookup needed. But it's likely to be right down in the noise - particularly for languages which allow unnecessary parameter passing to be optimized out. A: @jagmal I think you've got some wires crossed somewhere - all the examples you list are clearly not static methods. Static methods should deal entirely with abstract properties and concepts of a class - they should in no way relate to instance-specific attributes (and most compilers will yell if they do). For the car example, speed and kms driven are clearly attribute related. Gear shifting and speed calculation, when considered at the car level, are attribute dependent - but consider a CarModel class that inherits from Car: at this point they could become static methods, as the required attributes (such as wheel diameter) could be defined as constants at that level. A: Here is a related discussion as to why String.Format is static that will highlight some reasons. A: Another thing to consider when making methods static is that anyone able to see the class is able to call a static method. Whereas when the method is an instance method, only those who have access to an instance are able to call that method.
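The Math-vs-Car rule of thumb from the answers above can be condensed into a short Python sketch. The class names and methods here are illustrative, not from the original answers:

```python
class MathUtil:
    """Depends only on its arguments, never on object state -> static method."""
    @staticmethod
    def c_to_f(celsius):
        return celsius * 9 / 5 + 32

class Car:
    """Gear changes depend on this particular car's state -> instance method."""
    def __init__(self):
        self.gear = 1

    def shift_up(self):
        self.gear += 1
        return self.gear

print(MathUtil.c_to_f(100))  # 212.0 -- no instance needed
car = Car()
print(car.shift_up())        # 2 -- meaningless without a specific car
```

The litmus test in code form: if a method never touches `self`, it is a candidate for being static; if it reads or writes per-object state, it belongs to the instance.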
{ "language": "en", "url": "https://stackoverflow.com/questions/48755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Unhandled exceptions filter in a Windows service I am creating a Windows service and want to know best practices for this. In all my Windows programs I have a form that asks the user if he wants to report the error, and if he answers yes I create a case in FogBugz. What should I do in a Windows service? A: Since you're not going to have a user interacting with the program, I'd say make a configuration variable (in an app.config file) responsible for sending/not sending the data. That way users who don't want to report errors can just change a flag in a config file. I'd personally have it turned on by default and then give them guidance on how to turn it off if they want to. A: You could also have a system tray representation of the service which would show a small notification about any errors and ask the user whether they want it reported or not. I think that it is still better to be able to give the user the choice whenever you are sending 'out' data from their computer.
{ "language": "en", "url": "https://stackoverflow.com/questions/48757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I create a foreign key in SQL Server? I have never "hand-coded" object creation code for SQL Server and foreign key declaration is seemingly different between SQL Server and Postgres. Here is my SQL so far: drop table exams; drop table question_bank; drop table anwser_bank; create table exams ( exam_id uniqueidentifier primary key, exam_name varchar(50), ); create table question_bank ( question_id uniqueidentifier primary key, question_exam_id uniqueidentifier not null, question_text varchar(1024) not null, question_point_value decimal, constraint question_exam_id foreign key references exams(exam_id) ); create table anwser_bank ( anwser_id uniqueidentifier primary key, anwser_question_id uniqueidentifier, anwser_text varchar(1024), anwser_is_correct bit ); When I run the query I get this error: Msg 8139, Level 16, State 0, Line 9 Number of referencing columns in foreign key differs from number of referenced columns, table 'question_bank'. Can you spot the error? A: To create a foreign key on any table: ALTER TABLE [SCHEMA].[TABLENAME] ADD FOREIGN KEY (COLUMNNAME) REFERENCES [TABLENAME](COLUMNNAME) EXAMPLE ALTER TABLE [dbo].[UserMaster] ADD FOREIGN KEY (City_Id) REFERENCES [dbo].[CityMaster](City_Id) A: If you want to create a relationship between two tables' columns using a query, try the following: Alter table Foreign_Key_Table_name add constraint Foreign_Key_Table_name_Columnname_FK Foreign Key (Column_name) references Another_Table_name(Another_Table_Column_name) A: You can also name your foreign key constraint by using: CONSTRAINT your_name_here FOREIGN KEY (question_exam_id) REFERENCES EXAMS (exam_id) A: Like you, I don't usually create foreign keys by hand, but if for some reason I need the script to do so I usually create it using MS SQL Server Management Studio and, before saving the changes, I select Table Designer | Generate Change Script A: This script creates the tables with foreign keys, and I added a referential integrity constraint. 
create table exams ( exam_id int primary key, exam_name varchar(50) ); create table question_bank ( question_id int primary key, question_exam_id int not null, question_text varchar(1024) not null, question_point_value decimal, constraint question_exam_id_fk foreign key (question_exam_id) references exams(exam_id) ON DELETE CASCADE ); A: And if you just want to create the constraint on its own, you can use ALTER TABLE alter table MyTable add constraint MyTable_MyColumn_FK FOREIGN KEY ( MyColumn ) references MyOtherTable(PKColumn) I wouldn't recommend the syntax mentioned by Sara Chipps for inline creation, just because I would rather name my own constraints. A: I like AlexCuse's answer, but something you should pay attention to whenever you add a foreign key constraint is how you want updates to the referenced column in a row of the referenced table to be treated, and especially how you want deletes of rows in the referenced table to be treated. If a constraint is created like this: alter table MyTable add constraint MyTable_MyColumn_FK FOREIGN KEY ( MyColumn ) references MyOtherTable(PKColumn) .. then updates or deletes in the referenced table will blow up with an error if there is a corresponding row in the referencing table. That might be the behaviour you want, but in my experience, it much more commonly isn't. If you instead create it like this: alter table MyTable add constraint MyTable_MyColumn_FK FOREIGN KEY ( MyColumn ) references MyOtherTable(PKColumn) on update cascade on delete cascade ..then updates and deletes in the parent table will result in updates and deletes of the corresponding rows in the referencing table. (I'm not suggesting that the default should be changed; the default errs on the side of caution, which is good. I'm just saying it's something that a person who is creating constraints should always pay attention to.) 
This can be done, by the way, when creating a table, like this: create table ProductCategories ( Id int identity primary key, ProductId int references Products(Id) on update cascade on delete cascade, CategoryId int references Categories(Id) on update cascade on delete cascade ) A: Necromancing. Actually, doing this correctly is a little bit trickier. You first need to check if the primary key exists for the column you want to set your foreign key to reference. In this example, a foreign key on table T_ZO_SYS_Language_Forms is created, referencing dbo.T_SYS_Language_Forms.LANG_UID -- First, check if the table exists... IF 0 < ( SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = 'BASE TABLE' AND TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'T_SYS_Language_Forms' ) BEGIN -- Check for NULL values in the primary-key column IF 0 = (SELECT COUNT(*) FROM T_SYS_Language_Forms WHERE LANG_UID IS NULL) BEGIN ALTER TABLE T_SYS_Language_Forms ALTER COLUMN LANG_UID uniqueidentifier NOT NULL -- No, don't drop, FK references might already exist... 
-- Drop PK if exists -- ALTER TABLE T_SYS_Language_Forms DROP CONSTRAINT pk_constraint_name --DECLARE @pkDropCommand nvarchar(1000) --SET @pkDropCommand = N'ALTER TABLE T_SYS_Language_Forms DROP CONSTRAINT ' + QUOTENAME((SELECT CONSTRAINT_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS --WHERE CONSTRAINT_TYPE = 'PRIMARY KEY' --AND TABLE_SCHEMA = 'dbo' --AND TABLE_NAME = 'T_SYS_Language_Forms' ----AND CONSTRAINT_NAME = 'PK_T_SYS_Language_Forms' --)) ---- PRINT @pkDropCommand --EXECUTE(@pkDropCommand) -- Instead do -- EXEC sp_rename 'dbo.T_SYS_Language_Forms.PK_T_SYS_Language_Forms1234565', 'PK_T_SYS_Language_Forms'; -- Check if the keys are unique (it is very possible they might not be) IF 1 >= (SELECT TOP 1 COUNT(*) AS cnt FROM T_SYS_Language_Forms GROUP BY LANG_UID ORDER BY cnt DESC) BEGIN -- If no Primary key for this table IF 0 = ( SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE = 'PRIMARY KEY' AND TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'T_SYS_Language_Forms' -- AND CONSTRAINT_NAME = 'PK_T_SYS_Language_Forms' ) ALTER TABLE T_SYS_Language_Forms ADD CONSTRAINT PK_T_SYS_Language_Forms PRIMARY KEY CLUSTERED (LANG_UID ASC) ; -- Adding foreign key IF 0 = (SELECT COUNT(*) FROM INFORMATION_SCHEMA.REFERENTIAL_CONSTRAINTS WHERE CONSTRAINT_NAME = 'FK_T_ZO_SYS_Language_Forms_T_SYS_Language_Forms') ALTER TABLE T_ZO_SYS_Language_Forms WITH NOCHECK ADD CONSTRAINT FK_T_ZO_SYS_Language_Forms_T_SYS_Language_Forms FOREIGN KEY(ZOLANG_LANG_UID) REFERENCES T_SYS_Language_Forms(LANG_UID); END -- End uniqueness check ELSE PRINT 'FSCK, this column has duplicate keys, and can thus not be changed to primary key...' END -- End NULL check ELSE PRINT 'FSCK, need to figure out how to update NULL value(s)...'
END A: create table question_bank ( question_id uniqueidentifier primary key, question_exam_id uniqueidentifier not null, question_text varchar(1024) not null, question_point_value decimal, constraint fk_questionbank_exams foreign key (question_exam_id) references exams (exam_id) ); A: I always use this syntax to create the foreign key constraint between 2 tables Alter Table ForeignKeyTable Add constraint ForeignKeyTable_ForeignKeyColumn_FK Foreign key (ForeignKeyColumn) references PrimaryKeyTable (PrimaryKeyColumn) i.e. Alter Table tblEmployee Add constraint tblEmployee_DepartmentID_FK foreign key (DepartmentID) references tblDepartment (ID) A: create table question_bank ( question_id uniqueidentifier primary key, question_exam_id uniqueidentifier not null constraint fk_exam_id foreign key references exams(exam_id), question_text varchar(1024) not null, question_point_value decimal ); --That will work too. Perhaps a bit more intuitive construct?
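One option the answers above don't mention: besides the default NO ACTION and CASCADE, SQL Server 2005 and later also accept SET NULL and SET DEFAULT as referential actions. A sketch against the question's tables (it assumes question_exam_id has first been made nullable, which SET NULL requires):

```sql
-- Keep orphaned questions around, but clear their exam reference
-- when the parent exam row is deleted.
alter table question_bank
    alter column question_exam_id int null;

alter table question_bank
    add constraint question_exam_id_fk
    foreign key (question_exam_id) references exams (exam_id)
    on delete set null;
```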
{ "language": "en", "url": "https://stackoverflow.com/questions/48772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "258" }
Q: Adding extra information to a custom exception I've created a custom exception for a very specific problem that can go wrong. I receive data from another system, and I raise the exception if it bombs while trying to parse that data. In my custom exception, I added a field called "ResponseData", so I can track exactly what my code couldn't handle. In custom exceptions such as this one, should that extra response data go into the exception "message"? If it goes there, the message could be huge. I kind of want it there because I'm using Elmah, and that's how I can get at that data. So the question is either: - How can I get Elmah to record extra information from a field in a custom exception OR - Should extra exception details go into the "message" property? A: You shouldn't fill .Message with debug information, but rather with a concise, helpful piece of text. http://msdn.microsoft.com/en-us/library/system.exception.message.aspx The text of Message should completely describe the error and should, when possible, explain how to correct it. The value of the Message property is included in the information returned by ToString. The Message property is set only when creating an Exception. If no message was supplied to the constructor for the current instance, the system supplies a default message that is formatted using the current system culture. [..] Notes to Inheritors: The Message property is overridden in classes that require control over message content or format. Application code typically accesses this property when it needs to display information about an exception that has been caught. The error message should be localized. Response data does not qualify as a description. Not being familiar with elmah, I can't tell you how to extend the Exception class while using it. Does elmah implement its own subclass of Exception? Or an interface? Can you subclass it yourself?
A: The Exception class contains a dictionary (named Data, I believe) that you can use to associate custom data with a vanilla exception. A: In custom exceptions such as this one, should that extra response data go into the exception "message"? No, as Sören already pointed out. However, your exception type could override ToString and sensibly add the response data information there. This is a perfectly normal practice followed by many of the exception types in the BCL (Base Class Library) so you will not find yourself swimming against the tide. For example, have a look at the System.IO.FileNotFoundException.ToString implementation in SSCLI (Rotor): public override String ToString() { String s = GetType().FullName + ": " + Message; if (_fileName != null && _fileName.Length != 0) s += Environment.NewLine + String.Format(Environment.GetResourceString("IO.FileName_Name"), _fileName); if (InnerException != null) s = s + " ---> " + InnerException.ToString(); if (StackTrace != null) s += Environment.NewLine + StackTrace; try { if(FusionLog!=null) { if (s==null) s=" "; s+=Environment.NewLine; s+=Environment.NewLine; s+="Fusion log follows: "; s+=Environment.NewLine; s+=FusionLog; } } catch(SecurityException) { } return s; } As you can see, it appends the content of the FusionLog property, which represents extra information in case of assembly load failures. How can I get Elmah to record extra information from a field in a custom exception ELMAH stores the result of calling ToString on an exception as the details of the error so if you have ToString implemented as prescribed, the information would get logged without further work. The only issue is that the logged detail will be unstructured text. A: I don't fully understand the question but you seem to be asking what to do with additional exception data, if that is not your question feel free to ignore this. I think an important question to ask is what exactly is the exception message for?
It is not for knowing where the exception came from, the stack trace is for that; it is not to encapsulate an exception in a more general one, that should be done with the InnerException field; in the case where your exception is only raised from a particular place in your code it isn't even for describing what kind of error you had - that's what the type of the exception is for. Generally I use the message field to provide simple, human-readable tips that a programmer that is not me, seeing this error for the first time, can use to gain an understanding of the underlying system. I consider the message field to be appropriate for a short (one sentence) explanation, a hint as to how this error is frequently raised, or a reference to further reading. So, as far as I understand your question, I think that the best way to store this 'additional information' that is received from another system is as an InnerException. I don't know Elmah, but if it's worth its salt it will check for InnerExceptions and store them. A: I don't understand the question -- you're extending System.Exception, and you already added the Elmah field. That's where it belongs -- as a public property of the exception itself. A: Elmah is an HTTP module that records unhandled exceptions. I guess it's just a limitation of Elmah, since it doesn't store custom fields. I guess I'll have to ask those guys. I have the extra field in there for the response data, but Elmah does not store it.
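Tying the answers together - keep Message short, expose the payload as a property (and mirror it into the inherited Data dictionary), and override ToString so ELMAH's ToString-based logging captures it. A sketch; the type and property names are made up for illustration:

```csharp
using System;

[Serializable]
public class ResponseParseException : Exception
{
    public string ResponseData { get; private set; }

    public ResponseParseException(string message, string responseData)
        : base(message)                      // keep Message concise and human-readable
    {
        ResponseData = responseData;
        Data["ResponseData"] = responseData; // also reachable via the Exception.Data dictionary
    }

    // ELMAH logs the result of ToString(), so append the payload here
    // rather than stuffing it into Message.
    public override string ToString()
    {
        return base.ToString()
            + Environment.NewLine + "Response data follows:"
            + Environment.NewLine + ResponseData;
    }
}
```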
{ "language": "en", "url": "https://stackoverflow.com/questions/48773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Boolean Expressions in Shell Scripts What's the "right" way to do the following as a boolean expression? for i in `ls $1/resources`; do if [ $i != "database.db" ] then if [ $i != "tiles" ] then if [ $i != "map.pdf" ] then if [ $i != "map.png" ] then svn export -q $1/resources/$i ../MyProject/Resources/$i ... A: Even shorter: for i in `ls $1/resources`; do if [ $i != database.db -a $i != tiles -a $i != map.pdf -a $i != map.png ]; then svn export -q $1/resources/$i ../MyProject/Resources/$i fi done; The -a in the if expression is the equivalent of the boolean AND in shell-tests. For more see man test A: Consider using a case statement: for i in $(ls $1/resources); do case $i in database.db|tiles|map.pdf|map.png) ;; *) svn export -q $1/resources/$i ../MyProject/Resources/$i;; esac done A: The other solutions have a couple of common mistakes: http://www.pixelbeat.org/programming/shell_script_mistakes.html * *for i in $(ls ...) is redundant/problematic just do: for i in $1/resources/*; do ... *[ $i != file1 -a $i != file2 ] This actually has 2 problems. a. The $i is not quoted, hence names with spaces will cause issues b. -a is inefficient if stating files as it doesn't short circuit (I know the above is not stating files). So instead try: for i in $1/resources/*; do if [ "$i" != "database.db" ] && [ "$i" != "tiles" ] && [ "$i" != "map.pdf" ] && [ "$i" != "map.png" ]; then svn export -q "$i" "../MyProject/Resources/$(basename $i)" fi done A: for i in `ls $1/resources`; do if [ $i != "database.db" ] && [ $i != "tiles" ] && [ $i != "map.pdf" ] && [ $i != "map.png" ]; then svn export -q $1/resources/$i ../MyProject/Resources/$i A: For future reference, the new [[ test operator is preferred. The accepted answer is close and everything mentioned applies, but that answer will require lots of quoting and calls to multiple tests.
The preferred method would be something like: for i in $1/resources/*; do if [[ $i != "database.db" && $i != "tiles" && $i != "map.pdf" && $i != "map.png" ]]; then svn export -q "$i" "../MyProject/Resources/$(basename $i)" fi done
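As a self-contained illustration of the case approach (the file names here are made up), note that the word in `case $i in` is not subject to word splitting, so even a name containing a space is handled safely without quoting:

```shell
for i in "database.db" "map 2.png" "notes.txt"; do
  case $i in
    database.db|tiles|map.pdf|map.png) ;;   # known files: skip them
    *) echo "export: $i" ;;                 # anything else would be exported
  esac
done
```

Running this prints only the two names that don't match the skip patterns: "export: map 2.png" and "export: notes.txt".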
{ "language": "en", "url": "https://stackoverflow.com/questions/48774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Python: No module named core.exceptions I'm trying to get Google AppEngine to work on my Debian box and am getting the following error when I try to access my page: <type 'exceptions.ImportError'>: No module named core.exceptions The same app works fine for me when I run it on my other Ubuntu box, so I know it's not a problem with the app itself. However, I need to get it working on this Debian box. It originally had python 2.4 but after AppEngine complained about it I installed the python2.5 and python2.5-dev packages (to no avail). I saw on this Google Group post that it may be due to the version of AppEngine and just to reinstall it, but that didn't work. Any ideas? Edit 1: Also tried uninstalling python2.4 and 2.5 then reinstalling 2.5, which also didn't work. Edit 2: Turns out when I made AppEngine into a CVS project it didn't add the core directory into my project, so when I checked it out there literally was no module named core.exceptions. Re-downloading that folder resolved the problem. A: core.exceptions is part of django; what version of django do you have installed? The AppEngine comes with the appropriate version for whatever release you've downloaded (in the lib/django directory). It can be installed by going to that directory and running python setup.py install
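A quick way to debug this class of ImportError is to ask Python where (or whether) it can locate a module before importing it. This sketch uses the modern importlib API (Python 3, so not the 2.4/2.5 interpreters from the question); django itself is not required to run it:

```python
import importlib.util

def locate(module_name):
    """Return the location of a module, or None if it is not on sys.path."""
    spec = importlib.util.find_spec(module_name)
    return None if spec is None else spec.origin

# A stdlib module can always be found; a missing one yields None,
# which is what an error like "No module named core.exceptions" boils down to.
print(locate("json"))            # path to the stdlib json module
print(locate("no_such_module"))  # None
```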
{ "language": "en", "url": "https://stackoverflow.com/questions/48777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: TinyMCE vs Xinha I have to choose an online WYSIWYG editor. I'm deciding between TinyMCE and Xinha. My application is developed in Asp.Net 3.5. Could you help me with some pros and cons? A: Haven't tried Xinha myself, but I have experience with TinyMCE and FCKeditor. In my company we switched to TinyMCE (from FCKeditor) due to the superior support for pasting from Word documents and the (relatively easy to work with) plugin architecture which we used to add some custom modules (links browser, simple file browser). TinyMCE also converts the text to xhtml code which is usually better. A: I'd recommend FCKEditor over TinyMCE. I've had much better luck with it (better markup, better managers, better extensibility, better speed, better compatibility, etc) A: Try SPAW Editor. File Manager is included. Editor is generated from server side code, meaning it's lighter on client side processing. A: Of course TinyMCE :) A: I found Xinha to be much better and more functional than FCKEditor. If you know PHP and a dab of javascript you can customize the file manager, and there is a lovely set of plugins on offer. I am also impressed with what I have seen of TinyMCE and due to wider adoption you may find it to have more options. A: I've never used Xinha, but I can vouch for TinyMCE. It's fast, scales well, and is infinitely customizable. I particularly like the dynamic loading of functionality, which means you only take the performance hit for the stuff you use. It also includes language-specific compressors to further increase performance (C# is supported, along with PHP, Java and ColdFusion) by GZipping components. A: Of course TinyMCE: it has more plugins to choose from and it's easy to make custom plugins. It only gives issues with iPad (because of iframes).
{ "language": "en", "url": "https://stackoverflow.com/questions/48782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Html.RenderPartial call from masterpage Here is a scenario: Let's say I have a site with two controllers responsible for displaying different types of content - Pages and Articles. I need to embed Partial View into my masterpage that will list pages and articles filtered with some criteria, and be displayed on each page. I cannot set Model on my masterpage (am I right?). How do I solve this task using Html.RenderPartial? [EDIT] Yes, I'd probably create separate partial views for listing articles and pages, but still, there is a barrier that I cannot and shouldn't set model on masterpage. I need somehow to say "here are the pages" as an argument to my renderpartial, and also for articles. Entire concept of renderpartial with data from database in masterpages is a bit blurry to me. A: How about creating an HtmlHelper extension method that allows you to call a partial view result on an action on the controller. Something like public static void RenderPartialAction<TController>(this HtmlHelper helper, Func<TController, PartialViewResult> actionToRender) where TController : Controller, new() { var arg = new TController {ControllerContext = helper.ViewContext.Controller.ControllerContext}; actionToRender(arg).ExecuteResult(arg.ControllerContext); } you could then use this in your master page like <% Html.RenderPartialAction((HomeController x) => x.RenderPartial()) %> and in your controller the appropriate method public PartialViewResult RenderPartial() { return PartialView("~/Path/or/View",_homeService.GetModel()) } Well that is my 2 cents anyway A: I had a similar post and came up with an object model to handle it. I HATE the non-strongly typed views so went with this approach and it is working well. A: I think that your solution may lie in the land of MVC 2.0 RC and beyond...
Phil Haack posted an article on his blog: http://haacked.com/archive/2009/11/18/aspnetmvc2-render-action.aspx A: The ViewData Model property should only be used for the content that you're viewing/editing on the main section of the UI. Other parts of the view may need some data present in ViewData, so just add those to the dictionary. I'd just pass data from the dictionary like this: ViewData["articles"] to the partial. (or ViewData.Get() from MvcContrib). You might also look at the recently implemented SubController pattern implemented in MvcContrib. A: yes, this is correct. but let's look at this scenario: on views that are related to articles, I'd have ViewData["article"], and on views related to pages, I have ViewData["pages"], but I don't have both articles and pages available all time. So, if I add: Html.RenderPartial("articlesView", ViewData["articles"]) Html.RenderPartial("pagesView", ViewData["pages"]) to my masterpage, I'll have an exception thrown on each page on which ViewDataDictionary doesn't contain both articles and pages. At least, that's how I see it. A: The way that I handle this is by using a BaseViewModel. All Views are strongly typed against a view model that inherits from BaseViewModel. The BaseViewModel class has all of the information needed by the MasterPage. So for navigation your BaseViewModel may look like this: public class BaseViewModel { public BaseViewModel() { NavigationItems = RetrieveNavigationItemsFromModel(); } public List<NavItems> NavigationItems {get; set;} } In your MasterPage and PartialViews, you can cast the Model to BaseViewModel and access the NavigationsItems property. <ul> <% foreach (NavItem ni in (Model as BaseViewModel).NavigationItems) { %> <li> <a href="<%= ni.Url %>" alt="<%= ni.Alt%>"><%= ni.DisplayText %></a> </li> <% } %> </ul> A: This is a very late reply, but I got to this page whilst googling - so chances are someone else will see this question (and my reply) as well. 
The way I've worked around this problem is by using a simple jQuery script to load a PartialView and execute it's controller code. Sample below. <asp:Content ID="indexContent" ContentPlaceHolderID="MainContent" runat="server"> <script type="text/javascript"> $(document).ready(function() { $("#applicationForm").load("/Home/ApplicationForm"); }); </script> <div id="applicationForm" /> </asp:Content> The big drawback to this approach is that the client has to have scripting enabled for it to work (so it's really SEO unfriendly). If that's something you can live with it works well. I only use it on an intranet site where I know that each client has JavaScript enabled and I don't have to worry about google's bots.
{ "language": "en", "url": "https://stackoverflow.com/questions/48794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How do you access browser history? Some e-Marketing tools claim to choose which web page to display based on where you were before. That is, if you've been browsing truck sites and then go to Ford.com, your first page would be of the Ford Explorer. I know you can get the immediate preceding page with HTTP_REFERRER, but how do you know where they were 6 sites ago? A: Unrelated but relevant, if you only want to look one page back and you can't get to the headers of a page, then document.referrer gives you the place a visitor came from. A: Javascript: this should get you started: http://www.dicabrio.com/javascript/steal-history.php There are more nefarious means too: http://ha.ckers.org/blog/20070228/steal-browser-history-without-javascript/ Edit: I wanted to add that although this works it is a sleazy marketing technique and an invasion of privacy. A: You can't access the values for the entries in browser history (neither client side nor server side). All you can do is to send the browser back or forward a number of steps. The entries of the history are otherwise hidden from programmatic access. Also note that HTTP_REFERER won't be there if the user typed the address in the URL bar instead of following a link to your page. A: The browser history can't be directly accessed, but you can compare a list of sites with the user's history. This can be done because the browser attributes a different CSS style to a link that hasn't been visited and one that has. Using this style difference you can change the content of your pages using pure CSS, but in general javascript is used. There is a good article here about using this trick to improve the user experience by displaying only the RSS aggregator or social bookmarking links that the user actually uses: http://www.niallkennedy.com/blog/2008/02/browser-history-sniff.html
{ "language": "en", "url": "https://stackoverflow.com/questions/48805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Where to find resources on Refactoring? Refactoring is the process of improving the existing system design without changing its behavior. Besides Martin Fowler's seminal book "Refactoring - Improving the design of existing code" and Joshua Kerievsky's book "Refactoring to Patterns", are there any good resources on refactoring? A: http://www.refactoring.com/ might help you. They have a long list of methods here: * *http://www.refactoring.com/catalog/index.html Joel's article Rub a dub dub shows you why you should refactor and not rewrite (but I guess you already knew that rewriting is a thing you should never do..) A: Working Effectively with Legacy Code focuses on dealing with existing code-bases that need to evolve to be testable. Many techniques are used in the book to accomplish this, and it is an excellent resource for refactoring. A: If you're looking for more than just code refactoring, you might find Scott Ambler's book quite useful: http://www.ambysoft.com/books/refactoringDatabases.html A: Here are some Wiki pages about refactoring that explore various principles and guidelines. A: What is your codebase? Eclipse has quite good support for Java. But unfortunately limited support for C++ code. Here's an article from the makers. A: Refactoring HTML is new and relatively good, you can guess what it covers :) Other than that the two books you mention are the two I've used most, but Agile Principles is also very good. A: There is a 'cheat sheet' for code smells here: http://industriallogic.com/papers/ A: I would recommend reading Working Effectively with Legacy Code, then Refactoring - Improving the design of existing code. Martin Fowler's book is more like a recipe book for me, it explains how. Working effectively with legacy code, explains the why in my opinion.
Below are some other books relating to refactoring: AntiPatterns: Refactoring Software, Architectures, and Projects in Crisis; Refactoring in Large Software Projects: Performing Complex Restructurings; Refactoring SQL Applications; Prefactoring A: Sourcemaking - http://sourcemaking.com/refactoring
{ "language": "en", "url": "https://stackoverflow.com/questions/48817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Is the Mono Developer Support from Novell worth it? My company are thinking about using Mono for an upcoming product, so we were thinking about the $12,995 Mono Kickstart support from Novell. Anybody here used it, is it worth it? A: If I were you, I'd probably start the project and then buy the product only if I needed support for Mono. That way, if you don't need it, you won't be wasting the $13k.
{ "language": "en", "url": "https://stackoverflow.com/questions/48844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Detecting Client Disconnects in Web Services I'm using the Apache CXF Web Services stack. When a client times out or disconnects from the server before the operation is complete, the server keeps running the operation until it is complete. I would like to have the server detect when the client disconnects and handle that accordingly. Is there a way to detect when a client disconnects using Apache CXF? What about using other Java web-services stacks? A: I am not familiar with Apache CXF, but the following should be applicable to any Java Servlet based framework. In order to determine if a user has disconnected (stop button, closed browser, etc.) the server must attempt to send a packet. If the TCP/IP connection has been closed, an IOException will be thrown. In theory, a Java application could send a space character at various points during processing. An IOException would signal that the client has gone away and processing can be aborted. However, there may be a few issues with this technique: * *Sending characters during processing will cause the response to be "committed", so it may be impossible to set HTTP headers, cookies, etc. based on the result of the long-running serverside processing. *If the output stream is buffered, the space characters will not be sent immediately, thereby not performing an adequate test. It may be possible to use flush() as a workaround. *It may be difficult to implement this technique for a given framework or view technology (JSP, etc.) For example, the page rendering code will not be able to send the content type after the response has been committed.
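The probe-and-catch approach described above can be demonstrated outside a servlet container with plain loopback sockets. This is a sketch of the idea, not CXF-specific API, and the class and method names are made up for illustration:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class DisconnectDemo {

    // Interleave slices of the long-running work with one-byte "heartbeat" writes;
    // an IOException on write means the client has gone away.
    static boolean clientDisconnected(Socket client) {
        try {
            OutputStream out = client.getOutputStream();
            for (int step = 0; step < 5; step++) {
                // ... perform one slice of the long-running operation here ...
                out.write(' ');     // the probe character
                out.flush();        // force it onto the wire (avoid buffering)
                Thread.sleep(50);   // give TCP a moment to report a reset
            }
            return false;           // all writes succeeded: still connected
        } catch (IOException e) {
            return true;            // write failed: abort the operation
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0, 1, InetAddress.getLoopbackAddress());
        Socket client = new Socket(InetAddress.getLoopbackAddress(), server.getLocalPort());
        Socket serverSide = server.accept();

        client.close();             // simulate the browser's stop button
        Thread.sleep(100);          // let the FIN/RST arrive

        System.out.println(clientDisconnected(serverSide)
                ? "client disconnected" : "client connected");
        serverSide.close();
        server.close();
    }
}
```

Running this prints "client disconnected": the first heartbeat write usually succeeds, but once the peer's reset is processed a subsequent write throws the IOException the answer describes.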
{ "language": "en", "url": "https://stackoverflow.com/questions/48859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What is this Icarus thing that comes with MbUnit? I've had to install MbUnit multiple times now and it keeps coming with something called the Gallio Icarus GUI Test Runner. I have tried using it thinking it was just an update to the MbUnit GUI but it won't detect my MbUnit tests and sometimes won't even open the assemblies properly. Perhaps I'm just overlooking it but I haven't been able to find much of an answer on their website either except that it has something to do with a new testing platform. Can someone give me a better explanation of what this is? A: According to a blog entry MbUnit v3 and Gallio alpha 1, So whats going on here, Gallio is a neutral test platform that is an off shoot from the work we had done on MbUnit v3. Gallio is both a common framework and a set of runners for testing tools. MbUnit v3 uses Gallio as its native test platform, Gallio can also as Jeff mentions run MbUnit, NUnit and XUnit.net tests. For both migration purposes and to help improve how you are using your existing test framework we hope this will prove useful. We still have a lot of work to do but make no secrets of what we are up to, check out our road map. I do want to draw attention to the work we are doing with our new runners. Starting with Icarus, our new GUI. So, "Gallio is a neutral test platform" and "Icarus, [their] new GUI."
{ "language": "en", "url": "https://stackoverflow.com/questions/48864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Why/when should you use nested classes in .net? Or shouldn't you? In Kathleen Dollard's 2008 blog post, she presents an interesting reason to use nested classes in .net. However, she also mentions that FxCop doesn't like nested classes. I'm assuming that the people writing FxCop rules aren't stupid, so there must be reasoning behind that position, but I haven't been able to find it. A: It depends on the usage. I rarely would ever use a Public nested class but use Private nested classes all of the time. A private nested class can be used for a sub-object that is intended to be used only inside the parent. An example of this would be if a HashTable class contains a private Entry object to store data internally only. If the class is meant to be used by the caller (externally), I generally like making it a separate standalone class. A: In addition to the other reasons listed above, there is one more reason that I can think of not only to use nested classes, but in fact public nested classes. For those who work with multiple generic classes that share the same generic type parameters, the ability to declare a generic namespace would be extremely useful. Unfortunately, .Net (or at least C#) does not support the idea of generic namespaces. So in order to accomplish the same goal, we can use generic classes to fulfill the same goal. 
Take the following example classes related to a logical entity: public class BaseDataObject < tDataObject, tDataObjectList, tBusiness, tDataAccess > where tDataObject : BaseDataObject<tDataObject, tDataObjectList, tBusiness, tDataAccess> where tDataObjectList : BaseDataObjectList<tDataObject, tDataObjectList, tBusiness, tDataAccess>, new() where tBusiness : IBaseBusiness<tDataObject, tDataObjectList, tBusiness, tDataAccess> where tDataAccess : IBaseDataAccess<tDataObject, tDataObjectList, tBusiness, tDataAccess> { } public class BaseDataObjectList < tDataObject, tDataObjectList, tBusiness, tDataAccess > : CollectionBase<tDataObject> where tDataObject : BaseDataObject<tDataObject, tDataObjectList, tBusiness, tDataAccess> where tDataObjectList : BaseDataObjectList<tDataObject, tDataObjectList, tBusiness, tDataAccess>, new() where tBusiness : IBaseBusiness<tDataObject, tDataObjectList, tBusiness, tDataAccess> where tDataAccess : IBaseDataAccess<tDataObject, tDataObjectList, tBusiness, tDataAccess> { } public interface IBaseBusiness < tDataObject, tDataObjectList, tBusiness, tDataAccess > where tDataObject : BaseDataObject<tDataObject, tDataObjectList, tBusiness, tDataAccess> where tDataObjectList : BaseDataObjectList<tDataObject, tDataObjectList, tBusiness, tDataAccess>, new() where tBusiness : IBaseBusiness<tDataObject, tDataObjectList, tBusiness, tDataAccess> where tDataAccess : IBaseDataAccess<tDataObject, tDataObjectList, tBusiness, tDataAccess> { } public interface IBaseDataAccess < tDataObject, tDataObjectList, tBusiness, tDataAccess > where tDataObject : BaseDataObject<tDataObject, tDataObjectList, tBusiness, tDataAccess> where tDataObjectList : BaseDataObjectList<tDataObject, tDataObjectList, tBusiness, tDataAccess>, new() where tBusiness : IBaseBusiness<tDataObject, tDataObjectList, tBusiness, tDataAccess> where tDataAccess : IBaseDataAccess<tDataObject, tDataObjectList, tBusiness, tDataAccess> { } We can simplify the signatures of these classes by using a 
generic namespace (implemented via nested classes): public partial class Entity < tDataObject, tDataObjectList, tBusiness, tDataAccess > where tDataObject : Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>.BaseDataObject where tDataObjectList : Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>.BaseDataObjectList, new() where tBusiness : Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>.IBaseBusiness where tDataAccess : Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>.IBaseDataAccess { public class BaseDataObject {} public class BaseDataObjectList : CollectionBase<tDataObject> {} public interface IBaseBusiness {} public interface IBaseDataAccess {} } Then, through the use of partial classes as suggested by Erik van Brakel in an earlier comment, you can separate the classes into separate nested files. I recommend using a Visual Studio extension like NestIn to support nesting the partial class files. This allows the "namespace" class files to also be used to organize the nested class files in a folder like way. 
For example: Entity.cs public partial class Entity < tDataObject, tDataObjectList, tBusiness, tDataAccess > where tDataObject : Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>.BaseDataObject where tDataObjectList : Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>.BaseDataObjectList, new() where tBusiness : Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>.IBaseBusiness where tDataAccess : Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess>.IBaseDataAccess { } Entity.BaseDataObject.cs partial class Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess> { public class BaseDataObject { public DateTimeOffset CreatedDateTime { get; set; } public Guid CreatedById { get; set; } public Guid Id { get; set; } public DateTimeOffset LastUpdateDateTime { get; set; } public Guid LastUpdatedById { get; set; } public static implicit operator tDataObjectList(BaseDataObject dataObject) { var returnList = new tDataObjectList(); returnList.Add((tDataObject) dataObject); return returnList; } } } Entity.BaseDataObjectList.cs partial class Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess> { public class BaseDataObjectList : CollectionBase<tDataObject> { public tDataObjectList ShallowClone() { var returnList = new tDataObjectList(); returnList.AddRange(this); return returnList; } } } Entity.IBaseBusiness.cs partial class Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess> { public interface IBaseBusiness { tDataObjectList Load(); void Delete(); void Save(tDataObjectList data); } } Entity.IBaseDataAccess.cs partial class Entity<tDataObject, tDataObjectList, tBusiness, tDataAccess> { public interface IBaseDataAccess { tDataObjectList Load(); void Delete(); void Save(tDataObjectList data); } } The files in the visual studio solution explorer would then be organized as such: Entity.cs + Entity.BaseDataObject.cs + Entity.BaseDataObjectList.cs + Entity.IBaseBusiness.cs + Entity.IBaseDataAccess.cs And you would implement the generic
namespace like the following: User.cs public partial class User : Entity < User.DataObject, User.DataObjectList, User.IBusiness, User.IDataAccess > { } User.DataObject.cs partial class User { public class DataObject : BaseDataObject { public string UserName { get; set; } public byte[] PasswordHash { get; set; } public bool AccountIsEnabled { get; set; } } } User.DataObjectList.cs partial class User { public class DataObjectList : BaseDataObjectList {} } User.IBusiness.cs partial class User { public interface IBusiness : IBaseBusiness {} } User.IDataAccess.cs partial class User { public interface IDataAccess : IBaseDataAccess {} } And the files would be organized in the solution explorer as follows: User.cs + User.DataObject.cs + User.DataObjectList.cs + User.IBusiness.cs + User.IDataAccess.cs The above is a simple example of using an outer class as a generic namespace. I've built "generic namespaces" containing 9 or more type parameters in the past. Having to keep those type parameters synchronized across the nine types that all needed to know the type parameters was tedious, especially when adding a new parameter. The use of generic namespaces makes that code far more manageable and readable. A: If I understand Kathleen's article right, she proposes using nested classes to be able to write SomeEntity.Collection instead of EntityCollection<SomeEntity>. In my opinion it's a controversial way to save yourself some typing. I'm pretty sure that in a real-world application the collections will have some differences in implementation, so you will need to create a separate class anyway. I think that using a class name to limit another class's scope is not a good idea. It pollutes IntelliSense and strengthens dependencies between classes. Using namespaces is the standard way to control class scope. However, I find that usage of nested classes like in @hazzen's comment is acceptable, unless you have tons of nested classes, which is a sign of bad design. 
A: From Sun's Java Tutorial: Why Use Nested Classes? There are several compelling reasons for using nested classes, among them: * *It is a way of logically grouping classes that are only used in one place. *It increases encapsulation. *Nested classes can lead to more readable and maintainable code. Logical grouping of classes—If a class is useful to only one other class, then it is logical to embed it in that class and keep the two together. Nesting such "helper classes" makes their package more streamlined. Increased encapsulation—Consider two top-level classes, A and B, where B needs access to members of A that would otherwise be declared private. By hiding class B within class A, A's members can be declared private and B can access them. In addition, B itself can be hidden from the outside world. <- This doesn't apply to C#'s implementation of nested classes, this only applies to Java. More readable, maintainable code—Nesting small classes within top-level classes places the code closer to where it is used. A: Use a nested class when the class you are nesting is only useful to the enclosing class. For instance, nested classes allow you to write something like (simplified): public class SortedMap { private class TreeNode { TreeNode left; TreeNode right; } } You can make a complete definition of your class in one place, you don't have to jump through any PIMPL hoops to define how your class works, and the outside world doesn't need to see anything of your implementation. If the TreeNode class was external, you would either have to make all the fields public or make a bunch of get/set methods to use it. The outside world would have another class polluting their intellisense. 
A: Fully lazy and thread-safe singleton pattern: public sealed class Singleton { Singleton() { } public static Singleton Instance { get { return Nested.instance; } } class Nested { // Explicit static constructor to tell C# compiler // not to mark type as beforefieldinit static Nested() { } internal static readonly Singleton instance = new Singleton(); } } source: https://csharpindepth.com/Articles/Singleton A: I often use nested classes to hide implementation detail. An example from Eric Lippert's answer here: abstract public class BankAccount { private BankAccount() { } // Now no one else can extend BankAccount because a derived class // must be able to call a constructor, but all the constructors are // private! private sealed class ChequingAccount : BankAccount { ... } public static BankAccount MakeChequingAccount() { return new ChequingAccount(); } private sealed class SavingsAccount : BankAccount { ... } } This pattern becomes even better with the use of generics. See this question for two cool examples. So I end up writing Equality<Person>.CreateComparer(p => p.Id); instead of new EqualityComparer<Person, int>(p => p.Id); Also I can have a generic list of Equality<Person> but not of EqualityComparer<Person, int>: var l = new List<Equality<Person>> { Equality<Person>.CreateComparer(p => p.Id), Equality<Person>.CreateComparer(p => p.Name) } whereas var l = new List<EqualityComparer<Person, ??>> { new EqualityComparer<Person, int>(p => p.Id), new EqualityComparer<Person, string>(p => p.Name) } is not possible. That's the benefit of a nested class inheriting from its parent class. 
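A minimal sketch of how such an Equality<T> helper could be structured. The class and method names follow the answer above, but this particular implementation is an assumption for illustration, not the code from the linked question:

```csharp
using System;
using System.Collections.Generic;

public abstract class Equality<T> : IEqualityComparer<T>
{
    public abstract bool Equals(T x, T y);
    public abstract int GetHashCode(T obj);

    // The factory infers TKey from the lambda, so callers write
    // Equality<Person>.CreateComparer(p => p.Id) without ever spelling out "int".
    public static Equality<T> CreateComparer<TKey>(Func<T, TKey> keySelector)
    {
        return new KeyComparer<TKey>(keySelector);
    }

    // The nested class derives from its enclosing type, which is what lets a
    // List<Equality<T>> hold comparers whose key types differ and stay hidden.
    private sealed class KeyComparer<TKey> : Equality<T>
    {
        private readonly Func<T, TKey> keySelector;

        public KeyComparer(Func<T, TKey> keySelector)
        {
            this.keySelector = keySelector;
        }

        public override bool Equals(T x, T y)
        {
            return EqualityComparer<TKey>.Default.Equals(keySelector(x), keySelector(y));
        }

        public override int GetHashCode(T obj)
        {
            return EqualityComparer<TKey>.Default.GetHashCode(keySelector(obj));
        }
    }
}
```

With a sketch like this, the heterogeneous List<Equality<Person>> in the answer's last example compiles, because the key type is an implementation detail of the nested class rather than part of the list's element type.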
Another case (of the same nature - hiding implementation) is when you want to make a class's members (fields, properties etc.) accessible only to a single class: public class Outer { class Inner //private class { public int Field; //public field } static Inner inner = new Inner { Field = -1 }; // Field is accessible here, but in no other class } A: Another use not yet mentioned for nested classes is the segregation of generic types. For example, suppose one wants to have some generic families of static classes that can take methods with various numbers of parameters, along with values for some of those parameters, and generate delegates with fewer parameters. For example, one wishes to have a static method which can take an Action<string, int, double> and yield an Action<string, int> which will call the supplied action passing 3.5 as the double; one may also wish to have a static method which can take an Action<string, int, double> and yield an Action<string>, passing 7 as the int and 5.3 as the double. Using generic nested classes, one can arrange to have the method invocations be something like: MakeDelegate<string,int>.WithParams<double>(theDelegate, 3.5); MakeDelegate<string>.WithParams<int,double>(theDelegate, 7, 5.3); or, because the latter types in each expression can be inferred even though the former ones can't: MakeDelegate<string,int>.WithParams(theDelegate, 3.5); MakeDelegate<string>.WithParams(theDelegate, 7, 5.3); Using the nested generic types makes it possible to tell which delegates are applicable to which parts of the overall type description. 
A: The nested classes can be used for the following needs: * *Classification of the data *When the logic of the main class is complicated and you feel like you require subordinate objects to manage the class *When the state and existence of the class fully depend on the enclosing class A: As nawfal mentioned in his implementation of the Abstract Factory pattern, that code can be extended to achieve the Class Clusters pattern, which is based on the Abstract Factory pattern. A: I like to nest exceptions that are unique to a single class, i.e. ones that are never thrown from any other place. For example: public class MyClass { void DoStuff() { if (!someArbitraryCondition) { // This is the only class from which OhNoException is thrown throw new OhNoException( "Oh no! Some arbitrary condition was not satisfied!"); } // Do other stuff } public class OhNoException : Exception { // Constructors calling base() } } This helps keep your project files tidy and not full of a hundred stubby little exception classes. A: Bear in mind that you'll need to test the nested class. If it is private, you won't be able to test it in isolation. You could make it internal, though, in conjunction with the InternalsVisibleTo attribute. However, this would be the same as making a private field internal only for testing purposes, which I consider bad self-documentation. So, you may want to only implement private nested classes involving low complexity. 
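To make the internal-plus-InternalsVisibleTo approach from the last answer concrete, the usual pattern looks something like the following sketch; the assembly name MyApp.Tests and the Parser/Tokenizer classes are invented for the example:

```csharp
// In the production assembly, typically in AssemblyInfo.cs:
// [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyApp.Tests")]

public class Parser
{
    // internal rather than private, purely so the named test assembly can see it
    internal class Tokenizer
    {
        internal int CountTokens(string input)
        {
            if (string.IsNullOrEmpty(input))
            {
                return 0;
            }
            return input.Split(' ').Length;
        }
    }
}

// In MyApp.Tests, the nested class can now be exercised in isolation:
// var tokenizer = new Parser.Tokenizer();
// Assert.AreEqual(3, tokenizer.CountTokens("a b c"));
```

The trade-off is exactly the one the answer warns about: the member's visibility now documents a testing concern rather than a design intent.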
A: Yes, for this case: class Join_Operator { class Departamento { public int idDepto { get; set; } public string nombreDepto { get; set; } } class Empleado { public int idDepto { get; set; } public string nombreEmpleado { get; set; } } public void JoinTables() { List<Departamento> departamentos = new List<Departamento>(); departamentos.Add(new Departamento { idDepto = 1, nombreDepto = "Arquitectura" }); departamentos.Add(new Departamento { idDepto = 2, nombreDepto = "Programación" }); List<Empleado> empleados = new List<Empleado>(); empleados.Add(new Empleado { idDepto = 1, nombreEmpleado = "John Doe." }); empleados.Add(new Empleado { idDepto = 2, nombreEmpleado = "Jim Bell" }); var joinList = (from e in empleados join d in departamentos on e.idDepto equals d.idDepto select new { nombreEmpleado = e.nombreEmpleado, nombreDepto = d.nombreDepto }); foreach (var dato in joinList) { Console.WriteLine("{0} es empleado del departamento de {1}", dato.nombreEmpleado, dato.nombreDepto); } } } A: Based on my understanding of this concept, we could use this feature when classes are conceptually related to each other. I mean cases where several of them together form one complete item in our business, like the entities in the DDD world that help an aggregate root object complete its business logic. To clarify, I'm going to show this via an example: imagine that we have two classes, Order and OrderItem. 
In the Order class we are going to manage all the order items, and in OrderItem we are holding the data for a single order item. For clarification, you can see the classes below: class Order { private List<OrderItem> _orderItems = new List<OrderItem>(); public void AddOrderItem(OrderItem line) { _orderItems.Add(line); } public double OrderTotal() { double total = 0; foreach (OrderItem item in _orderItems) { total += item.TotalPrice(); } return total; } // Nested class public class OrderItem { public int ProductId { get; set; } public int Quantity { get; set; } public double Price { get; set; } public double TotalPrice() => Price * Quantity; } } class Program { static void Main(string[] args) { Order order = new Order(); Order.OrderItem orderItem1 = new Order.OrderItem(); orderItem1.ProductId = 1; orderItem1.Quantity = 5; orderItem1.Price = 1.99; order.AddOrderItem(orderItem1); Order.OrderItem orderItem2 = new Order.OrderItem(); orderItem2.ProductId = 2; orderItem2.Quantity = 12; orderItem2.Price = 0.35; order.AddOrderItem(orderItem2); Console.WriteLine(order.OrderTotal()); Console.ReadLine(); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/48872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "104" }
Q: Choosing between Ajax, Flex and Silverlight Ajax, Flex and Silverlight are a few ways to make more interactive web applications. What kinds of factors would you consider when deciding which to use for a new web application? Does any one of them offer better cross-platform compatibility, performance, developer tools or community support? A: The choice should in my opinion be mostly based on the nature of the application you'll be building (for example, if you need to manipulate vector graphics, Ajax is pretty much out), but here are some general guidelines: Ubiquity * *Ajax -- Supported by all modern browsers across platforms *Flex -- Runtime (Flash Player) has a very wide installed base for Windows, Mac OS, Linux. Linux version was a bit buggy the last time I checked, though *Silverlight -- Runtime has quite a low installed base (and no Linux support) at the moment Choice of programming language (Unordered because of subjectivity, but note that Silverlight offers the most choice. Also note that the existing language experience of developers in your team should be taken into account.) * *Silverlight: Any .NET language (C#, Visual Basic, IronPython(?), IronRuby(?)) (and XAML for UI definition) *Ajax: JavaScript (and XHTML for UI definition) *Flex: ActionScript 3 (and MXML for UI definition) API Stability and compatibility * *Flex -- Runtime is the same across platforms and browsers, more mature and stable at the moment than Silverlight *Silverlight -- Runtime is the same across platforms and browsers, less mature than Flex/Flash, v2.0 is still in beta *Ajax -- Compatibility problems across browsers (may be mitigated via Ajax libraries, though) Web/Browser Integration * *Ajax -- Content is native inside browser, based on standards: searchable by browser and search engine crawlers, subject to any standard UI practices the browser and operating system have established *Flex and Silverlight -- Content not native to browser (i.e. 
runs in its own little "sandbox/rectangle"): not necessarily subject to established UI practices for the given platform Developer Tools * *Ajax -- Your favorite code editor, browser and debugging toolkit for the browser *Flex -- Flex SDK is available for Windows, Mac OS and Linux for free and can be used with your favorite code editor. A Command-line debugger is included, but the Adobe-provided profiler is only available in the commercial Flex Builder IDE *Silverlight -- AFAIK, The SDK is available for Windows for free and can be used with your favorite .NET development tools A: The web runtimes like Flex and Silverlight all offer enticing things, but come with two big costs: * *They run only within a rectangle on the page, and don't interact with normal web widgets *They are only available to people who have that plug-in installed Even the nearly-ubiquitous Flash isn't installed on every web browser, so by choosing to use a plug-in runtime you're excluding part of your audience. In contrast, JavaScript (or Ajax) is available on pretty much every browser, and interacts better with normal web pages, but is a more primitive and restricting language. Using it for complex animations can be tricky, and you'll need to test your applications in more versions on more platforms to make sure it works. Cross-platform compatibility isn't the issue it used to be, so the issue is this: Will you gain more in the features of a plug-in library than you'll lose in the audience you exclude? My own answer has so far always been JavaScript/Ajax, but I'd re-evaluate that in any new project. A: Here's a quick rundown of each area (with lots of helpful links): Cross-platform compatibility Ajax works in any modern browser that can run JavaScript. Flex requires Flash or anything else that can handle SWFs but, once that's installed, it's a total freeride as far as compatibility. 
Silverlight is tricky and misunderstood so carefully consider your userbase before going with this MS foray into the rich web applications arms race. Also keep in mind that Silverlight is still in Beta, so it may become more widely used and installed in the future as it is developed. Performance I'm fearful of making too many statements about performance because it really depends on how much you are willing to optimize and the exact nature of your application. Also, some technology stacks are just never going to be very fast. Some people out there have been making comparisons, but again, it depends on a great many factors (even the version of the browser from which you are testing!). It's probably best to choose based on other factors and optimize once you've started to develop. Developer tools There are the "golden standard" dev tools for each of the three: * *Ajax has basically unlimited options, depending on the rest of your technology and architecture choices. The big questions you're actually faced with are which libraries to rely upon, and Google has voiced a pretty well adopted answer with things like Web Toolkit. When you get right down to it, it's just XML and JavaScript, right? *Flex is from Adobe and, just like with Flash development, you'd better stick with their homegrown tools because--well--they're making the standards as they go along. *Microsoft has positioned Microsoft Expression Blend versions 2.0 and 2.5 for designing the UI of Silverlight 1.0 and 2 applications respectively. Visual Studio 2008 can be used to develop and debug Silverlight applications (from Wikipedia). Community support There is both official and unofficial community, corporate, and open-source support for all three options. Whichever you are already integrated with and which makes you feel most at home are very individual things, but I'll offer this advice: stick with what you know. 
If you are an MS developer and have MSDN as your homepage, you are probably going to think the Silverlight documentation and forums are really helpful. Flex has a very similar story; the forums are pretty good and if you're a Flash person already, you're going to be right at home with their documentation and user community. On the other hand, Ajax is basically all over the place because you can implement it in so many different ways and use so many widely-varied libraries. Each library can have its own forums to visit and mailing lists to lurk within for answers. Once again, all three have corporate giants trying to foster their communities and to get the best support possible to the developers that will give them greater market share in the future. A: What is your audience: public web site or an intranet business app? Adoption rates are not relevant if you have a controlled audience who will install what is needed to run your app. However, if you need the largest possible audience to make your web startup a success then it may be critical. What is your goal? Building something for the lowest cost? Learning new technology? Can you leverage your existing skills? If you already know .NET then Silverlight gets a boost. Learning Flex may be interesting and useful, but is it more useful to you than more experience with .NET technologies? Remember to consider the opportunity cost of learning something totally new. I don't see a clear technology winner at this point, and likely there won't be one for a long time, so the choice will come down to fairly subjective factors. A: Other than what's already been mentioned here, another huge thing to consider is what your UI is going to be. If you're going to be using a lot of advanced UI controls like trees, lists, tab controls, etc then consider the following: * *JavaScript/HTML - No native support for anything beyond basic drop-down boxes, buttons, and text fields. 
If you want something like a tree control or tab control then you'll have to roll your own or find a third party library. *Adobe ActionScript - Native support for a wide array of advanced UI controls *Silverlight - 1.0 had very limited UI controls, but 2.0 will be adding many more and I'm sure we'll continue to see controls added in future releases.
{ "language": "en", "url": "https://stackoverflow.com/questions/48877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Fundeps and GADTs: When is type checking decidable? I was reading a research paper about Haskell and how HList is implemented and wondering when the techniques described are and are not decidable for the type checker. Also, because you can do similar things with GADTs, I was wondering if GADT type checking is always decidable. I would prefer citations if you have them so I can read/understand the explanations. Thanks! A: I believe GADT type checking is always decidable; it's inference which is undecidable, as it requires higher-order unification. But a GADT type checker is a restricted form of the proof checkers you see in e.g. Coq, where the constructors build up the proof term. For example, the classic example of embedding lambda calculus into GADTs has a constructor for each reduction rule, so if you want to find the normal form of a term, you have to tell it which constructors will get you to it. The halting problem has been moved into the user's hands :-) A: You've probably already seen this, but there is a collection of papers on this issue at Microsoft Research: Type Checking papers. The first one describes the decidable algorithm actually used in the Glasgow Haskell compiler.
{ "language": "en", "url": "https://stackoverflow.com/questions/48905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: How Do Sockets Work in C? I am a bit confused about socket programming in C. You create a socket, bind it to an interface and an IP address and get it to listen. I found a couple of web resources on that, and understood it fine. In particular, I found an article Network programming under Unix systems to be very informative. What confuses me is the timing of data arriving on the socket. How can you tell when packets arrive, and how big the packet is, do you have to do all the heavy lifting yourself? My basic assumption here is that packets can be of variable length, so once binary data starts appearing down the socket, how do you begin to construct packets from that? A: When you do a read on the socket, you tell it how many maximum bytes to read, but if it doesn't have that many, it gives you however many it's got. It's up to you to design the protocol so you know whether you've got a partial packet or not. For instance, in the past when sending variable length binary data, I would put an int at the beginning that said how many bytes to expect. I'd do a read requesting a number of bytes greater than the largest possible packet in my protocol, and then I'd compare the first int against however many bytes I'd received, and either process it or try more reads until I'd gotten the full packet, depending. A: Short answer is that you have to do all the heavy lifting yourself. You can be notified that there is data available to be read, but you won't know how many bytes are available. In most IP protocols that use variable length packets, there will be a header with a known fixed length prepended to the packet. This header will contain the length of the packet. You read the header, get the length of the packet, then read the packet. You repeat this pattern (read header, then read packet) until communication is complete. When reading data from a socket, you request a certain number of bytes. 
The read call may block until the requested number of bytes are read, but it can return fewer bytes than what was requested. When this happens, you simply retry the read, requesting the remaining bytes. Here's a typical C function for reading a set number of bytes from a socket: /* buffer points to memory block that is bigger than the number of bytes to be read */ /* socket is open socket that is connected to a sender */ /* bytesToRead is the number of bytes expected from the sender */ /* bytesRead is a pointer to a integer variable that will hold the number of bytes */ /* actually received from the sender. */ /* The function returns either the number of bytes read, */ /* 0 if the socket was closed by the sender, and */ /* -1 if an error occurred while reading from the socket */ int readBytes(int socket, char *buffer, int bytesToRead, int *bytesRead) { *bytesRead = 0; while(*bytesRead < bytesToRead) { int ret = read(socket, buffer + *bytesRead, bytesToRead - *bytesRead); if(ret <= 0) { /* either connection was closed or an error occurred */ return ret; } else { *bytesRead += ret; } } return *bytesRead; } A: So, the answer to your question depends a fair bit on whether you are using UDP or TCP as your transport. For UDP, life gets a lot simpler, in that you can call recv/recvfrom/recvmsg with the packet size you need (you'd likely send fixed-length packets from the source anyway), and make the assumption that if data is available, it's there in multiples of packet-length sizes. (I.E. You call recv* with the size of your sending side packet, and you're set.) For TCP, life gets a bit more interesting - for the purpose of this explanation, I will assume that you already know how to use socket(), bind(), listen() and accept() - the latter being how you get the file descriptor (FD) of your newly made connection. 
There are two ways of doing the I/O for a socket - blocking, in which you call read(fd, buf, N) and the read sits there and waits until there is data to read into buf - or non-blocking, in which you have to check (using select() or poll()) whether the FD is readable, and THEN do your read(). When dealing with TCP-based connections, the OS doesn't pay attention to the packet sizes, since it's considered a continual stream of data, not separate packet-sized chunks. If your application uses "packets" (packed or unpacked data structures that you're passing around), you ought to be able to call read() with the proper size argument, and read an entire data structure off the socket at a time. The only caveat you have to deal with is to remember to properly byte-order any data that you're sending, in case the source and destination system are of different byte endian-ness. This applies to both UDP and TCP. As far as *NIX socket programming is concerned, I highly recommend W. Richard Stevens' "Unix Network Programming, Vol. 1" (UNPv1) and "Advanced Programming in the Unix Environment" (APUE). The first is a tome regarding network-based programming, regardless of the transport, and the latter is a good all-around programming book as it applies to *NIX based programming. Also, look for "TCP/IP Illustrated", Volumes 1 and 2. A: Sockets operate at a higher level than raw packets - a socket is like a file you can read/write from. Also, when you try to read from a socket, the operating system will block (put on hold) your process until it has data to fulfill the request.
{ "language": "en", "url": "https://stackoverflow.com/questions/48908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Multi-threaded splash screen in C#? I want a splash screen to show while the application is loading. I have a form with a system tray control tied to it. I want the splash screen to display while this form loads, which takes a bit of time since it's accessing a web service API to populate some drop-downs. I also want to do some basic testing for dependencies before loading (that is, the web service is available, the configuration file is readable). As each phase of the startup goes, I want to update the splash screen with progress. I have been reading a lot on threading, but I am getting lost on where this should be controlled from (the main() method?). I am also missing how Application.Run() works; is this where the threads for this should be created from? Now, if the form with the system tray control is the "living" form, should the splash come from there? Wouldn't it not load until the form is completed anyway? I'm not looking for a code handout, more of an algorithm/approach so I can figure this out once and for all :) A: I recommend calling Activate(); directly after the last Show(); in the answer provided by aku. Quoting MSDN: Activating a form brings it to the front if this is the active application, or it flashes the window caption if this is not the active application. The form must be visible for this method to have any effect. If you don't activate your main form, it may be displayed behind any other open windows, making it look a bit silly. A: I think using some method like aku's or Guy's is the way to go, but a couple of things to take away from the specific examples: * *The basic premise would be to show your splash on a separate thread as soon as possible. That's the way I would lean, similar to what aku illustrated, since it's the way I'm most familiar with. I was not aware of the VB function Guy mentioned. And, even though it's a VB library, he is right -- it's all IL in the end. So, even if it feels dirty it's not all that bad! 
:) I think you'll want to be sure that either VB provides a separate thread in that override, or that you create one yourself -- definitely research that. *Assuming you create another thread to display this splash, you will want to be careful of cross-thread UI updates. I bring this up because you mentioned updating progress. Basically, to be safe, you need to call an update function (that you create) on the splash form using a delegate. You pass that delegate to the Invoke function on your splash screen's form object. In fact, if you call the splash form directly to update progress/UI elements on it, you'll get an exception, provided you are running on the .NET 2.0 CLR. As a rule of thumb, any UI element on a form must be updated by the thread that created it -- that's what Form.Invoke ensures. Finally, I would likely opt to create the splash (if not using the VB overload) in the main method of your code. To me this is better than having the main form perform creation of the object and be so tightly bound to it. If you take that approach, I'd suggest creating a simple interface that the splash screen implements -- something like IStartupProgressListener -- which receives start-up progress updates via a member function. This will allow you to easily swap in/out either class as needed, and nicely decouples the code. The splash form can also know when to close itself if you notify it when start-up is complete. A: One simple way is to use something like this as main(): <STAThread()> Public Shared Sub Main() splash = New frmSplash splash.Show() ' Your startup code goes here... UpdateSplashAndLogMessage("Startup part 1 done...") ' ... and more as needed... splash.Hide() Application.Run(myMainForm) End Sub When the .NET CLR starts your application, it creates a 'main' thread and starts executing your main() on that thread. 
The Application.Run(myMainForm) at the end does two things: * *Starts the Windows 'message pump', using the thread that has been executing main() as the GUI thread. *Designates your 'main form' as the 'shutdown form' for the application. If the user closes that form, then the Application.Run() terminates and control returns to your main(), where you can do any shutdown you want. There is no need to spawn a thread to take care of the splash window, and in fact this is a bad idea, because then you would have to use thread-safe techniques to update the splash contents from main(). If you need other threads to do background operations in your application, you can spawn them from main(). Just remember to set Thread.IsBackground to True, so that they will die when the main / GUI thread terminates. Otherwise you will have to arrange to terminate all your other threads yourself, or they will keep your application alive (but with no GUI) when the main thread terminates. A: The trick is to to create separate thread responsible for splash screen showing. When you run you app .net creates main thread and loads specified (main) form. To conceal hard work you can hide main form until loading is done. Assuming that Form1 - is your main form and SplashForm is top level, borderles nice splash form: private void Form1_Load(object sender, EventArgs e) { Hide(); bool done = false; ThreadPool.QueueUserWorkItem((x) => { using (var splashForm = new SplashForm()) { splashForm.Show(); while (!done) Application.DoEvents(); splashForm.Close(); } }); Thread.Sleep(3000); // Emulate hardwork done = true; Show(); } A: Well, for a ClickOnce app that I deployed in the past, we used the Microsoft.VisualBasic namespace to handle the splash screen threading. You can reference and use the Microsoft.VisualBasic assembly from C# in .NET 2.0 and it provides a lot of nice services. 
* *Have the main form inherit from Microsoft.VisualBasic.WindowsFormsApplicationBase *Override the "OnCreateSplashScreen" method like so: protected override void OnCreateSplashScreen() { this.SplashScreen = new SplashForm(); this.SplashScreen.TopMost = true; } Very straightforward, it shows your SplashForm (which you need to create) while loading is going on, then closes it automatically once the main form has completed loading. This really makes things simple, and the VisualBasic.WindowsFormsApplicationBase is of course well tested by Microsoft and has a lot of functionality that can make your life a lot easier in Winforms, even in an application that is 100% C#. At the end of the day, it's all IL and bytecode anyway, so why not use it? A: I posted an article on splash screen incorporation in the application at codeproject. It is multithreaded and might be of interest to you: Yet Another Splash Screen in C# A: private void MainForm_Load(object sender, EventArgs e) { FormSplash splash = new FormSplash(); splash.Show(); splash.Update(); System.Threading.Thread.Sleep(3000); splash.Hide(); } I got this from the Internet somewhere but cannot seem to find it again. Simple yet effective. A: I like Aku's answer a lot, but the code is for C# 3.0 and up since it uses a lambda function. For people needing to use the code in C# 2.0, here's the code using an anonymous delegate instead of the lambda function. You need a topmost winform called formSplash with FormBorderStyle = None. The TopMost = true property of the form is important because the splash screen might look like it appears then disappears quickly if it's not topmost. I also choose StartPosition=CenterScreen so it looks like what a professional app would do with a splash screen. If you want an even cooler effect, you can use the TransparencyKey property to make an irregularly shaped splash screen. 
private void formMain_Load(object sender, EventArgs e) { Hide(); bool done = false; ThreadPool.QueueUserWorkItem(delegate { using (formSplash splashForm = new formSplash()) { splashForm.Show(); while (!done) Application.DoEvents(); splashForm.Close(); } }, null); Thread.Sleep(2000); done = true; Show(); } A: I disagree with the other answers recommending WindowsFormsApplicationBase. In my experience, it can slow your app. To be precise, while it runs your form's constructor in parallel with the splash screen, it postpones your form's Shown event. Consider an app (without a splash screen) with a constructor that takes 1 second and an event handler on Shown that takes 2 seconds. This app is usable after 3 seconds. But suppose you install a splash screen using WindowsFormsApplicationBase. You might think a MinimumSplashScreenDisplayTime of 3 seconds is sensible and won't slow your app. But try it: your app will now take 5 seconds to load. class App : WindowsFormsApplicationBase { protected override void OnCreateSplashScreen() { this.MinimumSplashScreenDisplayTime = 3000; // milliseconds this.SplashScreen = new Splash(); } protected override void OnCreateMainForm() { this.MainForm = new Form1(); } } and public Form1() { InitializeComponent(); Shown += Form1_Shown; Thread.Sleep(TimeSpan.FromSeconds(1)); } void Form1_Shown(object sender, EventArgs e) { Thread.Sleep(TimeSpan.FromSeconds(2)); Program.watch.Stop(); this.textBox1.Text = Program.watch.ElapsedMilliseconds.ToString(); } Conclusion: don't use WindowsFormsApplicationBase if your app has a handler on the Shown event. You can write better code that runs the splash in parallel to both the constructor and the Shown event.
A: After looking all over Google and SO for solutions, this is my favorite: http://bytes.com/topic/c-sharp/answers/277446-winform-startup-splash-screen FormSplash.cs: public partial class FormSplash : Form { private static Thread _splashThread; private static FormSplash _splashForm; public FormSplash() { InitializeComponent(); } /// <summary> /// Show the Splash Screen (Loading...) /// </summary> public static void ShowSplash() { if (_splashThread == null) { // show the form in a new thread _splashThread = new Thread(new ThreadStart(DoShowSplash)); _splashThread.IsBackground = true; _splashThread.Start(); } } // called by the thread private static void DoShowSplash() { if (_splashForm == null) _splashForm = new FormSplash(); // create a new message pump on this thread (started from ShowSplash) Application.Run(_splashForm); } /// <summary> /// Close the splash (Loading...) screen /// </summary> public static void CloseSplash() { // need to call on the thread that launched this splash if (_splashForm.InvokeRequired) _splashForm.Invoke(new MethodInvoker(CloseSplash)); else Application.ExitThread(); } } Program.cs: static class Program { /// <summary> /// The main entry point for the application. /// </summary> [STAThread] static void Main(string[] args) { // splash screen, which is terminated in FormMain FormSplash.ShowSplash(); Application.EnableVisualStyles(); Application.SetCompatibleTextRenderingDefault(false); // this is probably where your heavy lifting is: Application.Run(new FormMain()); } } FormMain.cs ... 
public FormMain() { InitializeComponent(); // bunch of database access, form loading, etc // this is where you could do the heavy lifting of "loading" the app PullDataFromDatabase(); DoLoadingWork(); // ready to go, now close the splash FormSplash.CloseSplash(); } I had issues with the Microsoft.VisualBasic solution -- it worked fine on XP, but on Windows 2003 Terminal Server, the main application form would show up (after the splash screen) in the background, and the taskbar would blink. And bringing a window to foreground/focus in code is a whole other can of worms you can Google/SO for. A: This is an old question, but I kept coming across it when trying to find a threaded splash screen solution for WPF that could include animation. Here is what I ultimately pieced together: App.XAML: <Application Startup="ApplicationStart" … App.XAML.cs: void ApplicationStart(object sender, StartupEventArgs e) { var thread = new Thread(() => { Dispatcher.CurrentDispatcher.BeginInvoke ((Action)(() => new MySplashForm().Show())); Dispatcher.Run(); }); thread.SetApartmentState(ApartmentState.STA); thread.IsBackground = true; thread.Start(); // call synchronous configuration process // and declare/get reference to "main form" thread.Abort(); mainForm.Show(); mainForm.Activate(); } A: Actually, multithreading here is not necessary. Let your business logic generate an event whenever you want to update the splash screen. Then let your form update the splash screen accordingly in the method hooked to the event handler. To differentiate updates you can either fire different events or provide data in a class inherited from EventArgs. This way you can have a nice changing splash screen without any multithreading headache. Actually, with this you can even support, for example, a GIF image on a splash form.
In order for it to work, call Application.DoEvents() in your handler: private void SomethingChanged(object sender, MyEventArgs e) { formSplash.Update(e); Application.DoEvents(); //this will update any animation }
{ "language": "en", "url": "https://stackoverflow.com/questions/48916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "62" }
Q: Make the web browser scroll to the top? What is the JavaScript to scroll to the top when a button/link/etc. is clicked? A: <a href="javascript:scroll(0, 0)">Top</a> A: If you had an anchor link at the top of the page, you could do it with anchors too. <a name="top"></a> <a href="#top">top</a> It'll work in browsers with JavaScript disabled, but changes the URL. :( It also lets you jump to anywhere the name anchor is set. A: actually, this works by itself, no need to define it. <a href="#top">top</a> This is a "magic" hashname value that does not need to be defined in browsers. Just like this will "reload" the page. <a href="/">reload</a>
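A: A variant for pages that avoid inline JavaScript attaches a click handler that calls window.scrollTo(0, 0). This is only a sketch: the element id "backToTop" and the helper name are illustrative assumptions, and the window object is passed in explicitly so the logic can be exercised outside a browser.

```javascript
// Sketch: a scroll-to-top click handler instead of an inline href.
// Returning a closure over `win` keeps the handler testable with a
// stand-in window object outside the browser.
function makeScrollToTopHandler(win) {
  return function () {
    win.scrollTo(0, 0); // same call as href="javascript:scroll(0, 0)"
  };
}

// In a real page (assumed markup: <a id="backToTop">top</a>):
//   document.getElementById("backToTop")
//           .addEventListener("click", makeScrollToTopHandler(window));

// Outside a browser, exercise the handler with a stand-in window
// that records the requested coordinates.
const fakeWindow = {
  lastCall: null,
  scrollTo(x, y) { this.lastCall = [x, y]; }
};
makeScrollToTopHandler(fakeWindow)();
console.log(fakeWindow.lastCall); // → [ 0, 0 ]
```

window.scrollTo(0, 0) is the programmatic equivalent of the scroll(0, 0) call in the first answer; unlike the anchor approaches it does not change the URL.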
{ "language": "en", "url": "https://stackoverflow.com/questions/48919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to read bound hover callback functions in jQuery I used jQuery to set hover callbacks for elements on my page. I'm now writing a module which needs to temporarily set new hover behaviour for some elements. The new module has no access to the original code for the hover functions. I want to store the old hover functions before I set new ones so I can restore them when finished with the temporary hover behaviour. I think these can be stored using the jQuery.data() function: //save old hover behavior (somehow) $('#foo').data('oldhoverin',???) $('#foo').data('oldhoverout',???); //set new hover behavior $('#foo').hover(newhoverin,newhoverout); Do stuff with new hover behaviour... //restore old hover behaviour $('#foo').hover($('#foo').data('oldhoverin'),$('#foo').data('oldhoverout')); But how do I get the currently registered hover functions from jQuery? Shadow2531, I am trying to do this without modifying the code which originally registered the callbacks. Your suggestion would work fine otherwise. Thanks for the suggestion, and for helping clarify what I'm searching for. Maybe I have to go into the source of jquery and figure out how these callbacks are stored internally. Maybe I should change the question to "Is it possible to do this without modifying jquery?" A: Calling an event bind method (such as hover) does not delete old event handlers, only adds your new events, so your idea of 'restoring' the old event functions wouldn't work, as it wouldn't delete your events. 
To add your own events and later remove them without affecting any other events, use event namespacing: http://docs.jquery.com/Events_(Guide)#Namespacing_events A: Not sure if this will work, but you can try this: function setHover(obj, mouseenter, mouseleave) { obj.data("_mouseenter", mouseenter); obj.data("_mouseleave", mouseleave); obj.hover(obj.data("_mouseenter"), obj.data("_mouseleave")); } function removeHover(obj) { obj.unbind("mouseenter", obj.data("_mouseenter")); obj.unbind("mouseleave", obj.data("_mouseleave")); obj.data("_mouseenter", undefined); obj.data("_mouseleave", undefined); } $(document).ready(function() { var test = $("#test"); setHover(test, function(e) { alert("original " + e.type); }, function(e) { alert("original " + e.type); }); var saved_mouseenter = test.data("_mouseenter"); var saved_mouseleave = test.data("_mouseleave"); removeHover(test); setHover(test, function() { alert("zip"); }, function() { alert('zam'); }); removeHover(test); setHover(test, saved_mouseenter, saved_mouseleave); }); <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>Jquery - Get, change and restore hover handlers</title> <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> </head> <body> <p><a id="test" href="">test</a></p> </body> </html> If not, maybe it'll give you some ideas. A: I'm not sure if this is what you mean, but you can bind custom events and then trigger them. http://docs.jquery.com/Events/bind So add your hover event, script the functionality you need for that hover, then trigger your custom event. A: Maybe it would be easier to just hide the old element and create a clone with your event handlers attached? Then just swap back in the old element when you're done.
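A: The namespacing approach suggested above can be illustrated with a rough sketch of the bookkeeping jQuery does internally. In jQuery itself you would bind with something like $('#foo').bind('mouseenter.temp', fn) and later remove only your own handlers with $('#foo').unbind('.temp'); the plain-JavaScript sketch below (all function names invented for illustration, no jQuery required) shows why that removal cannot disturb handlers bound by other code.

```javascript
// Rough sketch of the idea behind jQuery event namespaces: handlers are
// stored with their namespace, so one namespace can be removed without
// touching handlers bound elsewhere.
const handlers = []; // each entry: { type, ns, fn }

function bind(typeWithNs, fn) {
  const [type, ns = ""] = typeWithNs.split(".");
  handlers.push({ type, ns, fn });
}

function unbindNamespace(ns) {
  // Iterate backwards so splicing does not skip entries.
  for (let i = handlers.length - 1; i >= 0; i--) {
    if (handlers[i].ns === ns) handlers.splice(i, 1);
  }
}

function fire(type) {
  handlers.filter(h => h.type === type).forEach(h => h.fn());
}

const log = [];
bind("mouseenter", () => log.push("original"));       // someone else's handler
bind("mouseenter.temp", () => log.push("temporary")); // our namespaced handler
fire("mouseenter");      // both handlers run
unbindNamespace("temp"); // removes only the ".temp" handlers
fire("mouseenter");      // only the original handler runs
console.log(log); // → [ 'original', 'temporary', 'original' ]
```

The real jQuery implementation stores handlers per element in its internal data cache, but the namespace filtering is the same idea: your temporary hover behaviour can come and go without ever reading or restoring the original callbacks.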
{ "language": "en", "url": "https://stackoverflow.com/questions/48931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I list loaded plugins in Vim? Does anybody know of a way to list the "loaded plugins" in Vim? I know I should be keeping track of this kind of stuff myself but it would always be nice to be able to check the current status. A: The problem with :scriptnames, :commands, :functions, and similar Vim commands, is that they display information in a large slab of text, which is very hard to visually parse. To get around this, I wrote Headlights, a plugin that adds a menu to Vim showing all loaded plugins, TextMate style. The added benefit is that it shows plugin commands, mappings, files, and other bits and pieces. A: Not a VIM user myself, so forgive me if this is totally off base. But according to what I gather from the following VIM Tips site: " where was an option set :scriptnames : list all plugins, _vimrcs loaded (super) :verbose set history? : reveals value of history and where set :function : list functions :func SearchCompl : List particular function A: :set runtimepath? This lists the path of all plugins loaded when a file is opened with Vim. A: If you use Vundle, :PluginList. A: :help local-additions Lists local plugins added. A: If you use vim-plug (Plug), " A minimalist Vim plugin manager.": :PlugStatus That will not only list your plugins but check their status.
{ "language": "en", "url": "https://stackoverflow.com/questions/48933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "329" }
Q: In Delphi 7, why can I assign a value to a const? I copied some Delphi code from one project to another, and found that it doesn't compile in the new project, though it did in the old one. The code looks something like this: procedure TForm1.CalculateGP(..) const Price : money = 0; begin ... Price := 1.0; ... end; So in the new project, Delphi complains that "left side cannot be assigned to" - understandable! But this code compiles in the old project. So my question is, why? Is there a compiler switch to allow consts to be reassigned? How does that even work? I thought consts were replaced by their values at compile time? A: You need to turn assignable typed constants on. Project -> Options -> Compiler -> Assignable typed Constants Also you can add {$J+} or {$WRITEABLECONST ON} to the pas file, which is probably better, since it'll work even if you move the file to another project. A: Type-inferred constants can only be scalar values - i.e. things like integers, doubles, etc. For these kinds of constants, the compiler does indeed replace the constant's symbol with the constant's value whenever it meets them in expressions. Typed constants, on the other hand, can be structured values - arrays and records. These guys need actual storage in the executable - i.e. they need to have storage allocated for them such that, when the OS loads the executable, the value of the typed constant is physically contained at some location in memory. To explain why, historically, typed constants in early Delphi and its predecessor, Turbo Pascal, are writable (and thus essentially initialized global variables), we need to go back to the days of DOS. DOS runs in real-mode, in x86 terms. This means that programs have direct access to physical memory without any MMU doing virtual-physical mappings. When programs have direct access to memory, no memory protection is in effect. In other words, if there is memory at any given address, it is both readable and writable in real-mode. 
So, in a Turbo Pascal program for DOS with a typed constant, whose value is allocated at an address in memory at runtime, that typed constant will be writable. There is no hardware MMU getting in the way and preventing the program from writing to it. Similarly, because Pascal has no notion of 'const'ness that C++ has, there is nothing in the type system to stop you. A lot of people took advantage of this, since Turbo Pascal and Delphi did not at that time have initialized global variables as a feature. Moving on to Windows, there is a layer between memory addresses and physical addresses: the memory management unit. This chip takes the page index (a shifted mask) of the memory address you're trying to access, and looks up the attributes of this page in its page table. These attributes include readable, writable, and for modern x86 chips, non-executable flags. With this support, it's possible to mark sections of the .EXE or .DLL with attributes such that when the Windows loader loads the executable image into memory, it assigns appropriate page attributes for memory pages that map to disk pages within these sections. When the 32-bit Windows version of the Delphi compiler came around, it thus made sense to make const-like things really const, as the OS also has this feature. A: Like Barry said, people took advantage of consts; One of the ways this was used, was for keeping track of singleton instances. If you look at a classic singleton implementation, you would see this : // Example implementation of the Singleton pattern. 
TSingleton = class(TObject) protected constructor CreateInstance; virtual; class function AccessInstance(Request: Integer): TSingleton; public constructor Create; virtual; destructor Destroy; override; class function Instance: TSingleton; class procedure ReleaseInstance; end; constructor TSingleton.Create; begin inherited Create; raise Exception.CreateFmt('Access class %s through Instance only', [ClassName]); end; constructor TSingleton.CreateInstance; begin inherited Create; // Do whatever you would normally place in Create, here. end; destructor TSingleton.Destroy; begin // Do normal destruction here if AccessInstance(0) = Self then AccessInstance(2); inherited Destroy; end; {$WRITEABLECONST ON} class function TSingleton.AccessInstance(Request: Integer): TSingleton; const FInstance: TSingleton = nil; begin case Request of 0 : ; 1 : if not Assigned(FInstance) then FInstance := CreateInstance; 2 : FInstance := nil; else raise Exception.CreateFmt('Illegal request %d in AccessInstance', [Request]); end; Result := FInstance; end; {$IFNDEF WRITEABLECONST_ON} {$WRITEABLECONST OFF} {$ENDIF} class function TSingleton.Instance: TSingleton; begin Result := AccessInstance(1); end; class procedure TSingleton.ReleaseInstance; begin AccessInstance(0).Free; end; A: * *Why: Because in previous versions of Delphi the typed constants were assignable by default to preserve compatibility with older versions where they were always writable (Delphi 1 up to early Pascal). The default has now been changed to make constants really constant… *Compiler switch: {$J+} or {$J-} {$WRITEABLECONST ON} or {$WRITEABLECONST OFF} Or in the project options for the compiler: check assignable typed Constants *How it works: If the compiler can compute the value at compile time, it replaces the const by its value everywhere in the code, otherwise it holds a pointer to a memory area holding the value, which can be made writeable or not. *see 3.
{ "language": "en", "url": "https://stackoverflow.com/questions/48934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: How can I register a global hot key to say CTRL+SHIFT+(LETTER) using WPF and .NET 3.5? I'm building an application in C# using WPF. How can I bind to some keys? Also, how can I bind to the Windows key? A: This is a full working solution, hope it helps... Usage: _hotKey = new HotKey(Key.F9, KeyModifier.Shift | KeyModifier.Win, OnHotKeyHandler); ... private void OnHotKeyHandler(HotKey hotKey) { SystemHelper.SetScreenSaverRunning(); } Class: using System; using System.Collections.Generic; using System.Diagnostics; using System.Linq; using System.Net.Mime; using System.Runtime.InteropServices; using System.Text; using System.Windows; using System.Windows.Input; using System.Windows.Interop; namespace UnManaged { public class HotKey : IDisposable { private static Dictionary<int, HotKey> _dictHotKeyToCalBackProc; [DllImport("user32.dll")] private static extern bool RegisterHotKey(IntPtr hWnd, int id, UInt32 fsModifiers, UInt32 vlc); [DllImport("user32.dll")] private static extern bool UnregisterHotKey(IntPtr hWnd, int id); public const int WmHotKey = 0x0312; private bool _disposed = false; public Key Key { get; private set; } public KeyModifier KeyModifiers { get; private set; } public Action<HotKey> Action { get; private set; } public int Id { get; set; } // ****************************************************************** public HotKey(Key k, KeyModifier keyModifiers, Action<HotKey> action, bool register = true) { Key = k; KeyModifiers = keyModifiers; Action = action; if (register) { Register(); } } // ****************************************************************** public bool Register() { int virtualKeyCode = KeyInterop.VirtualKeyFromKey(Key); Id = virtualKeyCode + ((int)KeyModifiers * 0x10000); bool result = RegisterHotKey(IntPtr.Zero, Id, (UInt32)KeyModifiers, (UInt32)virtualKeyCode); if (_dictHotKeyToCalBackProc == null) { _dictHotKeyToCalBackProc = new Dictionary<int, HotKey>(); ComponentDispatcher.ThreadFilterMessage += new 
ThreadMessageEventHandler(ComponentDispatcherThreadFilterMessage); } _dictHotKeyToCalBackProc.Add(Id, this); Debug.Print(result.ToString() + ", " + Id + ", " + virtualKeyCode); return result; } // ****************************************************************** public void Unregister() { HotKey hotKey; if (_dictHotKeyToCalBackProc.TryGetValue(Id, out hotKey)) { UnregisterHotKey(IntPtr.Zero, Id); } } // ****************************************************************** private static void ComponentDispatcherThreadFilterMessage(ref MSG msg, ref bool handled) { if (!handled) { if (msg.message == WmHotKey) { HotKey hotKey; if (_dictHotKeyToCalBackProc.TryGetValue((int)msg.wParam, out hotKey)) { if (hotKey.Action != null) { hotKey.Action.Invoke(hotKey); } handled = true; } } } } // ****************************************************************** // Implement IDisposable. // Do not make this method virtual. // A derived class should not be able to override this method. public void Dispose() { Dispose(true); // This object will be cleaned up by the Dispose method. // Therefore, you should call GC.SupressFinalize to // take this object off the finalization queue // and prevent finalization code for this object // from executing a second time. GC.SuppressFinalize(this); } // ****************************************************************** // Dispose(bool disposing) executes in two distinct scenarios. // If disposing equals true, the method has been called directly // or indirectly by a user's code. Managed and unmanaged resources // can be _disposed. // If disposing equals false, the method has been called by the // runtime from inside the finalizer and you should not reference // other objects. Only unmanaged resources can be _disposed. protected virtual void Dispose(bool disposing) { // Check to see if Dispose has already been called. if (!this._disposed) { // If disposing equals true, dispose all managed // and unmanaged resources. 
if (disposing) { // Dispose managed resources. Unregister(); } // Note disposing has been done. _disposed = true; } } } // ****************************************************************** [Flags] public enum KeyModifier { None = 0x0000, Alt = 0x0001, Ctrl = 0x0002, NoRepeat = 0x4000, Shift = 0x0004, Win = 0x0008 } // ****************************************************************** } A: I'm not sure of what you mean by "global" here, but here it goes (I'm assuming you mean a command at the application level, for example, Save All that can be triggered from anywhere by Ctrl + Shift + S.) You find the global UIElement of your choice, for example, the top level window which is the parent of all the controls where you need this binding. Due to "bubbling" of WPF events, events at child elements will bubble all the way up to the root of the control tree. Now, first you need * *to bind the Key-Combo with a Command using an InputBinding like this *you can then hook up the command to your handler (e.g. code that gets called by SaveAll) via a CommandBinding. For the Windows Key, you use the right Key enumerated member, Key.LWin or Key.RWin public WindowMain() { InitializeComponent(); // Bind Key var ib = new InputBinding( MyAppCommands.SaveAll, new KeyGesture(Key.S, ModifierKeys.Shift | ModifierKeys.Control)); this.InputBindings.Add(ib); // Bind handler var cb = new CommandBinding( MyAppCommands.SaveAll); cb.Executed += new ExecutedRoutedEventHandler( HandlerThatSavesEverything ); this.CommandBindings.Add (cb ); } private void HandlerThatSavesEverything (object obSender, ExecutedRoutedEventArgs e) { // Do the Save All thing here.
} A: This is similar to the answers already given, but I find it a bit cleaner: using System; using System.Windows.Forms; namespace GlobalHotkeyExampleForm { public partial class ExampleForm : Form { [System.Runtime.InteropServices.DllImport("user32.dll")] private static extern bool RegisterHotKey(IntPtr hWnd, int id, int fsModifiers, int vk); [System.Runtime.InteropServices.DllImport("user32.dll")] private static extern bool UnregisterHotKey(IntPtr hWnd, int id); enum KeyModifier { None = 0, Alt = 1, Control = 2, Shift = 4, WinKey = 8 } public ExampleForm() { InitializeComponent(); int id = 0; // The id of the hotkey. RegisterHotKey(this.Handle, id, (int)KeyModifier.Shift, Keys.A.GetHashCode()); // Register Shift + A as global hotkey. } protected override void WndProc(ref Message m) { base.WndProc(ref m); if (m.Msg == 0x0312) { /* Note that the three lines below are not needed if you only want to register one hotkey. * The below lines are useful in case you want to register multiple keys, which you can use a switch with the id as argument, or if you want to know which key/modifier was pressed for some particular reason. */ Keys key = (Keys)(((int)m.LParam >> 16) & 0xFFFF); // The key of the hotkey that was pressed. KeyModifier modifier = (KeyModifier)((int)m.LParam & 0xFFFF); // The modifier of the hotkey that was pressed. int id = m.WParam.ToInt32(); // The id of the hotkey that was pressed. MessageBox.Show("Hotkey has been pressed!"); // do something } } private void ExampleForm_FormClosing(object sender, FormClosingEventArgs e) { UnregisterHotKey(this.Handle, 0); // Unregister hotkey with id 0 before closing the form. You might want to call this more than once with different id values if you are planning to register more than one hotkey. } } } I've found it on fluxbytes.com. 
A: With the NHotKey package, you can make your hotkey global: * *https://github.com/thomaslevesque/NHotkey *https://thomaslevesque.com/2014/02/05/wpf-declare-global-hotkeys-in-xaml-with-nhotkey/ (use web.archive.org if the link is broken) In short, for XAML, all you need to do is to replace <KeyBinding Gesture="Ctrl+Alt+Add" Command="{Binding IncrementCommand}" /> by <KeyBinding Gesture="Ctrl+Alt+Add" Command="{Binding IncrementCommand}" HotkeyManager.RegisterGlobalHotkey="True" /> A: Registering OS level shortcuts is hardly ever a good thing: users don't want you to mess with their OS. That said, there is a much simpler and user friendly way of doing this in WPF, if you're ok with the hotkey working within the application only (i.e as long as your WPF app has the focus): In App.xaml.cs : protected override void OnStartup(StartupEventArgs e) { EventManager.RegisterClassHandler(typeof(Window), Window.PreviewKeyUpEvent, new KeyEventHandler(OnWindowKeyUp)); } private void OnWindowKeyUp(object source, KeyEventArgs e)) { //Do whatever you like with e.Key and Keyboard.Modifiers } It's that simple A: If you're going to mix Win32 and WPF, here's how I did it: using System; using System.Runtime.InteropServices; using System.Windows.Interop; using System.Windows.Media; using System.Threading; using System.Windows; using System.Windows.Input; namespace GlobalKeyboardHook { public class KeyboardHandler : IDisposable { public const int WM_HOTKEY = 0x0312; public const int VIRTUALKEYCODE_FOR_CAPS_LOCK = 0x14; [DllImport("user32.dll")] [return: MarshalAs(UnmanagedType.Bool)] public static extern bool RegisterHotKey(IntPtr hWnd, int id, int fsModifiers, int vlc); [DllImport("user32.dll")] [return: MarshalAs(UnmanagedType.Bool)] public static extern bool UnregisterHotKey(IntPtr hWnd, int id); private readonly Window _mainWindow; WindowInteropHelper _host; public KeyboardHandler(Window mainWindow) { _mainWindow = mainWindow; _host = new WindowInteropHelper(_mainWindow); 
SetupHotKey(_host.Handle); ComponentDispatcher.ThreadPreprocessMessage += ComponentDispatcher_ThreadPreprocessMessage; } void ComponentDispatcher_ThreadPreprocessMessage(ref MSG msg, ref bool handled) { if (msg.message == WM_HOTKEY) { //Handle hot key here } } private void SetupHotKey(IntPtr handle) { RegisterHotKey(handle, GetType().GetHashCode(), 0, VIRTUALKEYCODE_FOR_CAPS_LOCK); } public void Dispose() { UnregisterHotKey(_host.Handle, GetType().GetHashCode()); } } } You can get the virtual-key code for the hotkey you want to register here: http://msdn.microsoft.com/en-us/library/ms927178.aspx There may be a better way, but this is what I've got so far. Cheers! A: I'm not sure about WPF, but this may help. I used the solution described in RegisterHotKey (user32) (modified to my needs of course) for a C# Windows Forms application to assign a CTRL-KEY combination within Windows to bring up a C# form, and it worked beautifully (even on Windows Vista). I hope it helps and good luck! A: Although RegisterHotKey is sometimes precisely what you want, in most cases you probably do not want to use system-wide hotkeys. I ended up using code like the following: using System.Windows; using System.Windows.Interop; namespace WpfApp { public partial class MainWindow : Window { const int WM_KEYUP = 0x0101; const int VK_RETURN = 0x0D; const int VK_LEFT = 0x25; public MainWindow() { this.InitializeComponent(); ComponentDispatcher.ThreadPreprocessMessage += ComponentDispatcher_ThreadPreprocessMessage; } void ComponentDispatcher_ThreadPreprocessMessage( ref MSG msg, ref bool handled) { if (msg.message == WM_KEYUP) { if ((int)msg.wParam == VK_RETURN) MessageBox.Show("RETURN was pressed"); if ((int)msg.wParam == VK_LEFT) MessageBox.Show("LEFT was pressed"); } } } } A: I've found the Global Hotkeys in WPF project on codeproject.com which does the job for me.
It's relatively recent, does not need a reference to System.Windows.Forms and works "globally" in terms of reacting to the hotkey being pressed even if "your" application is not the active window. A: Baboon's solution works best because you may have multiple windows. I did tweak it so it uses the PreviewKeyDownEvent instead of the PreviewKeyUpEvent in order to handle repetition in keystrokes. I would advise against OS-level registration unless you are writing something like a snipping tool or an audio recording app as it will let you access functionality when the window is not focused. A: RegisterHotKey() suggested by John could work - the only catch is that it requires an HWND (using PresentationSource.FromVisual(), and casting the result to an HwndSource). However, you'll also need to respond to the WM_HOTKEY message - I'm not sure if there is a way to get access to the WndProc of a WPF window or not (which can be done for Windows Forms windows).
{ "language": "en", "url": "https://stackoverflow.com/questions/48935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53" }
Q: How do I implement a callback in PHP? How are callbacks written in PHP? A: With PHP 5.3, you can now do this: function doIt($callback) { $callback(); } doIt(function() { // this will be done }); Finally a nice way to do it. A great addition to PHP, because callbacks are awesome. A: well... with 5.3 on the horizon, all will be better, because with 5.3, we'll get closures and with them anonymous functions http://wiki.php.net/rfc/closures A: You will want to verify whatever your calling is valid. For example, in the case of a specific function, you will want to check and see if the function exists: function doIt($callback) { if(function_exists($callback)) { $callback(); } else { // some error handling } } A: create_function did not work for me inside a class. I had to use call_user_func. <?php class Dispatcher { //Added explicit callback declaration. var $callback; public function Dispatcher( $callback ){ $this->callback = $callback; } public function asynchronous_method(){ //do asynch stuff, like fwrite...then, fire callback. if ( isset( $this->callback ) ) { if (function_exists( $this->callback )) call_user_func( $this->callback, "File done!" ); } } } Then, to use: <?php include_once('Dispatcher.php'); $d = new Dispatcher( 'do_callback' ); $d->asynchronous_method(); function do_callback( $data ){ print 'Data is: ' . $data . "\n"; } ?> [Edit] Added a missing parenthesis. Also, added the callback declaration, I prefer it that way. A: For those who don't care about breaking compatibility with PHP < 5.4, I'd suggest using type hinting to make a cleaner implementation. function call_with_hello_and_append_world( callable $callback ) { // No need to check $closure because of the type hint return $callback( "hello" )."world"; } function append_space( $string ) { return $string." "; } $output1 = call_with_hello_and_append_world( function( $string ) { return $string." 
"; } ); var_dump( $output1 ); // string(11) "hello world" $output2 = call_with_hello_and_append_world( "append_space" ); var_dump( $output2 ); // string(11) "hello world" $old_lambda = create_function( '$string', 'return $string." ";' ); $output3 = call_with_hello_and_append_world( $old_lambda ); var_dump( $output3 ); // string(11) "hello world" A: Implementation of a callback is done like so // This function uses a callback function. function doIt($callback) { $data = "this is my data"; $callback($data); } // This is a sample callback function for doIt(). function myCallback($data) { print 'Data is: ' . $data . "\n"; } // Call doIt() and pass our sample callback function's name. doIt('myCallback'); Displays: Data is: this is my data A: I cringe every time I use create_function() in php. Parameters are a coma separated string, the whole function body in a string... Argh... I think they could not have made it uglier even if they tried. Unfortunately, it is the only choice when creating a named function is not worth the trouble. A: The manual uses the terms "callback" and "callable" interchangeably, however, "callback" traditionally refers to a string or array value that acts like a function pointer, referencing a function or class method for future invocation. This has allowed some elements of functional programming since PHP 4. The flavors are: $cb1 = 'someGlobalFunction'; $cb2 = ['ClassName', 'someStaticMethod']; $cb3 = [$object, 'somePublicMethod']; // this syntax is callable since PHP 5.2.3 but a string containing it // cannot be called directly $cb2 = 'ClassName::someStaticMethod'; $cb2(); // fatal error // legacy syntax for PHP 4 $cb3 = array(&$object, 'somePublicMethod'); This is a safe way to use callable values in general: if (is_callable($cb2)) { // Autoloading will be invoked to load the class "ClassName" if it's not // yet defined, and PHP will check that the class has a method // "someStaticMethod". 
Note that is_callable() will NOT verify that the // method can safely be executed in static context. $returnValue = call_user_func($cb2, $arg1, $arg2); } Modern PHP versions allow the first three formats above to be invoked directly as $cb(). call_user_func and call_user_func_array support all the above. See: http://php.net/manual/en/language.types.callable.php Notes/Caveats: * *If the function/class is namespaced, the string must contain the fully-qualified name. E.g. ['Vendor\Package\Foo', 'method'] *call_user_func does not support passing non-objects by reference, so you can either use call_user_func_array or, in later PHP versions, save the callback to a var and use the direct syntax: $cb(); *Objects with an __invoke() method (including anonymous functions) fall under the category "callable" and can be used the same way, but I personally don't associate these with the legacy "callback" term. *The legacy create_function() creates a global function and returns its name. It's a wrapper for eval() and anonymous functions should be used instead. A: One nifty trick that I've recently found is to use PHP's create_function() to create an anonymous/lambda function for one-shot use. It's useful for PHP functions like array_map(), preg_replace_callback(), or usort() that use callbacks for custom processing. It looks pretty much like it does an eval() under the covers, but it's still a nice functional-style way to use PHP.
{ "language": "en", "url": "https://stackoverflow.com/questions/48947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "191" }
Q: Is there a way to make WatiN click a link before the page finishes loading We're using WatiN for testing our UI, but one page (which is unfortunately not under our team's control) takes forever to finish loading. Is there a way to get WatiN to click a link on the page before the page finishes rendering completely? A: Here's the code we found to work: IE browser = new IE(....); browser.Button("SlowPageLoadingButton").ClickNoWait(); Link continueLink = browser.Link(Find.ByText("linktext")); continueLink.WaitUntilExists(); continueLink.Click(); A: You should be able to leave out the call to WaitUntilExists() since WatiN does this internally when you call a method or property on an element (like the link.Click() in your example). HTH, Jeroen van Menen Lead dev WatiN
{ "language": "en", "url": "https://stackoverflow.com/questions/48984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Linking combo box (JQuery preferably) I am wondering if anyone has any experience using a JQuery plugin that converts an html <select> <option> Blah </option> </select> combo box into something (probably a div) where selecting an item acts the same as clicking a link. I guess you could probably use javascript to handle a selection event (my javascript knowledge is a little in disrepair at the moment) and 'switch' on the value of the combo box but this seems like more of a hack. Your advice, experience and recommendations are appreciated. A: The simple solution is to use $("#mySelect").change(function() { document.location = this.value; }); This creates an onchange event on the select box that redirects you to the url stored in the value field of the selected option. A: I'm not sure where you want to link to when you click the Div, but something like this perhaps would work: <select id="mySelect"> <option value="1">Option 1</option> <option value="2">Option 2</option> </select> <div id="myDiv"/> and the following JQuery creates a list of <div> elements, each of which goes to a URL based on the value of its option: $("#mySelect option").each(function() { $("<div>" + $(this).text() + "</div>").appendTo($("#myDiv")).bind("click", $(this).val(), function(event) { location.href = "goto.php?id=" + event.data; }); }); $("#mySelect").remove(); Does this do what you want? A: If you're going to have a lot of select boxes that you want to allow to use as redirects, without having to define them independently, try something similar to: $("[id*='COMMON_NAME']").change(function() { document.location = this.value; }); And have your select boxes be named accordingly: <select id="COMMON_NAME_001">...</select> <select id="COMMON_NAME_002">...</select> This creates an onchange event for all IDs containing "COMMON_NAME" to do a redirect of the <option> value.
A: Put this bit of javascript in the 'select': onchange="if(this.options[this.selectedIndex].value!=''){this.form.submit()}" It's not ideal (because form submissions in ASP.NET MVC which I'm using don't appear to use the routing engine for URLs) but it does its job.
{ "language": "en", "url": "https://stackoverflow.com/questions/48993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Getting DIV id based on x & y position The problem I'm trying to solve is "What's at this position?" It's fairly trivial to get the x/y position (offset) of a DIV, but what about the reverse? How do I get the id of a DIV (or any element) given an x/y position? A: Unfortunately, triggering a manufactured/simulated mouse event won't work, since when you dispatch it, you have to provide a target element. Since that element is the one you're trying to figure out, all you could do is dispatch it on the body, as if it had already bubbled. You really are left to do it on your own, that is manually walk through the elements you're interested in, and compare their position/size/zIndex to your x/y point and see if they overlap. Except in IE and more recently FF3, where you can use var el = document.elementFromPoint(x, y); See http://developer.mozilla.org/En/DOM:document.elementFromPoint http://msdn.microsoft.com/en-us/library/ms536417(VS.85).aspx A: function getDivByXY(x,y) { var alldivs = document.getElementsByTagName('div'); for(var d = 0; d < alldivs.length; d++) { if((alldivs[d].offsetLeft == x) && (alldivs[d].offsetTop == y)) { return alldivs[d]; } } return false; } A: Use a JQuery selector to filter the list of all DIVs for one that matches your position criteria? A: Create a mouse event listener, then trigger a mouse event at that location. This should give you the entire stack of elements at that location. Or, look at the source of Firebug. A: If all you have is the X and Y position, (and you can't track mouse movement like you mentioned) then you will have to traverse the DOM, looping through every DIV. For each DIV you will need to compare its X and Y coordinates against those you have. This is an expensive operation, but it is the only way. I suggest you might be better off rethinking your problem instead of coming up with a solution for it. A: One option is to build an array of "div-dimension" objects. (Not to be confused with the divs themselves... 
IE7 perf is frustrating when you read dimensions off of an object.) These objects consist of a pointer to the div, their dimensions (four points... say top, left, bottom, and right), and possibly a dirty bit. (A dirty bit is only really needed if the sizes change.) You could then iterate through the array and check dimensions. It requires O(n) to do that on each mouse move. You might be able to do slightly better with a binary search style approach... maybe. If you do a binary search style approach, one way is to store 4 arrays, each with a single point of the dimension, and then binary search on all four. O(4logn) = O(logn). I'm not saying I recommend any of these, but they MIGHT work. A: I think what John is saying is that you can use document.createEvent() to simulate a mousemove at the location you want. If you capture that event, by adding an event listener to the body, you can look at the event.target and see what element was at that position. I'm unsure as to what degree IE supports this method, maybe someone else knows? http://developer.mozilla.org/en/DOM/document.createEvent Update: Here's a jquery plugin that simulates events: http://jquery-ui.googlecode.com/svn/trunk/tests/simulate/jquery.simulate.js A: This might be a little too processor intensive, but you could go over the whole list of div elements on a page, finding their positions and sizes, then testing if they're under the mouse. I don't think I'd want to do that to a browser though. A: You might find it's more efficient to traverse the DOM tree once when the page is loaded, get all elements' positions and sizes, and store them in an array/hash/etc. If you design the data structure well, you should be able to find an element at the given coordinates fairly quickly when you need it later. Consider how often you will need to detect an element, and compare that to how often the elements on the page will change.
You would be balancing the number of times you have to re-compute all the element locations (an expensive computation) against the number of times you'd actually use the computed information (relatively cheap, I hope).
{ "language": "en", "url": "https://stackoverflow.com/questions/48999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Prefer composition over inheritance? Why prefer composition over inheritance? What trade-offs are there for each approach? When should you choose inheritance over composition? A: When you want to "copy"/Expose the base class' API, you use inheritance. When you only want to "copy" functionality, use delegation. One example of this: You want to create a Stack out of a List. Stack only has pop, push and peek. You shouldn't use inheritance given that you don't want push_back, push_front, removeAt, et al.-kind of functionality in a Stack. A: Didn't find a satisfactory answer here, so I wrote a new one. To understand why "prefer composition over inheritance", we first need to recover the assumption omitted in this shortened idiom. There are two benefits of inheritance: subtyping and subclassing * *Subtyping means conforming to a type (interface) signature, i.e. a set of APIs, and one can override part of the signature to achieve subtyping polymorphism. *Subclassing means implicit reuse of method implementations. With these two benefits come two different purposes for doing inheritance: subtyping oriented and code reuse oriented. If code reuse is the sole purpose, subclassing may give you more than you need, i.e. some public methods of the parent class don't make much sense for the child class. In this case, instead of favoring composition over inheritance, composition is demanded. This is also where the "is-a" vs. "has-a" notion comes from. So only when subtyping is purposed, i.e. to use the new class later in a polymorphic manner, do we face the problem of choosing inheritance or composition. This is the assumption that gets omitted in the shortened idiom under discussion. To subtype is to conform to a type signature; this means composition always has to expose no fewer APIs than the type itself.
Now the trade-offs kick in: * *Inheritance provides straightforward code reuse if not overridden, while composition has to re-code every API, even if it's just a simple job of delegation. *Inheritance provides straightforward open recursion via the internal polymorphic site this, i.e. invoking an overriding method (or even type) in another member function, either public or private (though discouraged). Open recursion can be simulated via composition, but it requires extra effort and may not always be viable. This answer to a duplicated question says something similar. *Inheritance exposes protected members. This breaks encapsulation of the parent class, and if used by the subclass, another dependency between the child and its parent is introduced. *Composition has the benefit of inversion of control, and its dependency can be injected dynamically, as is shown in the decorator pattern and proxy pattern. *Composition has the benefit of combinator-oriented programming, i.e. working in a way like the composite pattern. *Composition immediately follows programming to an interface. *Composition has the benefit of easy multiple inheritance. With the above trade-offs in mind, we hence prefer composition over inheritance. Yet for tightly related classes, i.e. when implicit code reuse really brings benefits, or the magic power of open recursion is desired, inheritance shall be the choice.
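The Stack-from-List example in the first answer can be sketched in Java; this is a minimal illustration, and InheritedStack/ComposedStack are made-up names:

```java
import java.util.ArrayList;
import java.util.List;

// Inheritance: the stack inherits the ENTIRE List API, including methods
// such as add(int, e) and remove(int) that can corrupt the LIFO discipline.
class InheritedStack<E> extends ArrayList<E> {
    public void push(E e) { add(e); }
    public E pop() { return remove(size() - 1); }
    public E peek() { return get(size() - 1); }
}

// Composition: the list is a private implementation detail, so clients
// only ever see push/pop/peek (plus whatever we choose to expose).
class ComposedStack<E> {
    private final List<E> items = new ArrayList<>();
    public void push(E e) { items.add(e); }
    public E pop() { return items.remove(items.size() - 1); }
    public E peek() { return items.get(items.size() - 1); }
    public boolean isEmpty() { return items.isEmpty(); }
}
```

Incidentally, java.util.Stack itself extends Vector and exposes exactly this kind of surplus API, which is one reason the Deque interface is now recommended in its place.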
The parent class can just call its own abstract "foo()" which is overwritten by the subclass and then it can give the value to the abstract base. It looks like a nice idea, but in many cases it's better to just give the class an object which implements the foo() (or even to set the value provided by foo() manually) than to inherit the new class from some base class which requires the function foo() to be specified. Why? Because inheritance is a poor way of moving information. The composition has a real edge here: the relationship can be reversed: the "parent class" or "abstract worker" can aggregate any specific "child" objects implementing a certain interface + any child can be set inside any other type of parent, which accepts its type. And there can be any number of objects, for example MergeSort or QuickSort could sort any list of objects implementing an abstract Compare -interface. Or to put it another way: any group of objects which implement "foo()" and another group of objects which can make use of objects having "foo()" can play together. I can think of three real reasons for using inheritance: * *You have many classes with the same interface and you want to save time writing them *You have to use the same Base Class for each object *You need to modify the private variables, which can not be public in any case If these are true, then it is probably necessary to use inheritance. There is nothing bad in using reason 1, it is a very good thing to have a solid interface on your objects. This can be done using composition or with inheritance, no problem - if this interface is simple and does not change. Usually inheritance is quite effective here. If the reason is number 2 it gets a bit tricky. Do you really only need to use the same base class? In general, just using the same base class is not good enough, but it may be a requirement of your framework, a design consideration which can not be avoided.
However, if you want to use the private variables, the case 3, then you may be in trouble. If you consider global variables unsafe, then you should consider using inheritance to get access to private variables also unsafe. Mind you, global variables are not all THAT bad - databases are essentially a big set of global variables. But if you can handle it, then it's quite fine. A: Inheritance is pretty enticing especially coming from procedural-land and it often looks deceptively elegant. I mean all I need to do is add this one bit of functionality to some other class, right? Well, one of the problems is that inheritance is probably the worst form of coupling you can have. Your base class breaks encapsulation by exposing implementation details to subclasses in the form of protected members. This makes your system rigid and fragile. The more tragic flaw however is the new subclass brings with it all the baggage and opinion of the inheritance chain. The article, Inheritance is Evil: The Epic Fail of the DataAnnotationsModelBinder, walks through an example of this in C#. It shows the use of inheritance when composition should have been used and how it could be refactored. A: Aside from is a/has a considerations, one must also consider the "depth" of inheritance your object has to go through. Anything beyond five or six levels of inheritance deep might cause unexpected casting and boxing/unboxing problems, and in those cases it might be wise to compose your object instead. A: When you have an is-a relation between two classes (example dog is a canine), you go for inheritance. On the other hand when you have has-a or some adjective relationship between two classes (student has courses) or (teacher studies courses), you choose composition. A: When can you use composition? You can always use composition. In some cases, inheritance is also possible and may lead to a more powerful and/or intuitive API, but composition is always an option. When can you use inheritance?
It is often said that if "a bar is a foo", then the class Bar can inherit the class Foo. Unfortunately, this test alone is not reliable; use the following instead: * *a bar is a foo, AND *bars can do everything that foos can do. The first test ensures that all getters of Foo make sense in Bar (= shared properties), while the second test makes sure that all setters of Foo make sense in Bar (= shared functionality). Example: Dog/Animal A dog is an animal AND dogs can do everything that animals can do (such as breathing, moving, etc.). Therefore, the class Dog can inherit the class Animal. Counter-example: Circle/Ellipse A circle is an ellipse BUT circles can't do everything that ellipses can do. For example, circles can't stretch, while ellipses can. Therefore, the class Circle cannot inherit the class Ellipse. This is called the Circle-Ellipse problem, which isn't really a problem, but more an indication that "a bar is a foo" isn't a reliable test by itself. In particular, this example highlights that derived classes should extend the functionality of base classes, never restrict it. Otherwise, the base class couldn't be used polymorphically. Adding the test "bars can do everything that foos can do" ensures that polymorphic use is possible, and is equivalent to the Liskov Substitution Principle: Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it. When should you use inheritance? Even if you can use inheritance, that doesn't mean you should: using composition is always an option. Inheritance is a powerful tool allowing implicit code reuse and dynamic dispatch, but it does come with a few disadvantages, which is why composition is often preferred. The trade-offs between inheritance and composition aren't obvious, and in my opinion are best explained in lcn's answer.
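The Circle/Ellipse counter-example above can be made concrete with a small sketch; the method stretchHorizontally is a made-up stand-in for "stretch":

```java
class Ellipse {
    protected double width;
    protected double height;
    Ellipse(double width, double height) { this.width = width; this.height = height; }
    // Stretching in one direction is part of the Ellipse contract.
    void stretchHorizontally(double factor) { width *= factor; }
    double area() { return Math.PI * (width / 2) * (height / 2); }
}

// "A circle is an ellipse" passes the is-a test, but circles can NOT do
// everything ellipses can do: the inherited stretch silently breaks the
// width == height invariant, so Circle is not substitutable for Ellipse.
class Circle extends Ellipse {
    Circle(double diameter) { super(diameter, diameter); }
    boolean isStillACircle() { return width == height; }
}
```

A composition-based Circle that merely holds an Ellipse (or just a radius) would never expose stretchHorizontally in the first place.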
As a rule of thumb, I tend to choose inheritance over composition when polymorphic use is expected to be very common, in which case the power of dynamic dispatch can lead to a much more readable and elegant API. For example, having a polymorphic class Widget in GUI frameworks, or a polymorphic class Node in XML libraries, allows you to have an API that is much more readable and intuitive to use than what you would have with a solution purely based on composition. A: Think of containment as a has a relationship. A car "has an" engine, a person "has a" name, etc. Think of inheritance as an is a relationship. A car "is a" vehicle, a person "is a" mammal, etc. I take no credit for this approach. I took it straight from the Second Edition of Code Complete by Steve McConnell, Section 6.3. A: In Java or C#, an object cannot change its type once it has been instantiated. So, if your object needs to appear as a different object or behave differently depending on an object state or conditions, then use Composition: Refer to State and Strategy Design Patterns. If the object needs to be of the same type, then use Inheritance or implement interfaces. A: A simple way to make sense of this would be that inheritance should be used when you need an object of your class to have the same interface as its parent class, so that it can thereby be treated as an object of the parent class (upcasting). Moreover, function calls on a derived class object would remain the same everywhere in code, but the specific method to call would be determined at runtime (i.e. the low-level implementation differs, the high-level interface remains the same). Composition should be used when you do not need the new class to have the same interface, i.e. you wish to conceal certain aspects of the class' implementation which the user of that class need not know about. So composition is more in the way of supporting encapsulation (i.e.
concealing the implementation) while inheritance is meant to support abstraction (i.e. providing a simplified representation of something, in this case the same interface for a range of types with different internals). A: Subtyping is appropriate and more powerful where the invariants can be enumerated, else use function composition for extensibility. A: I agree with @Pavel when he says there are places for composition and there are places for inheritance. I think inheritance should be used if your answer is an affirmative to any of these questions. * *Is your class part of a structure that benefits from polymorphism? For example, if you had a Shape class, which declares a method called draw(), then we clearly need Circle and Square classes to be subclasses of Shape, so that their client classes would depend on Shape and not on specific subclasses. *Does your class need to re-use any high level interactions defined in another class? The template method design pattern would be impossible to implement without inheritance. I believe all extensible frameworks use this pattern. However, if your intention is purely that of code re-use, then composition most likely is a better design choice. A: Inheritance is a very powerful mechanism for code reuse. But it needs to be used properly. I would say that inheritance is used correctly if the subclass is also a subtype of the parent class. As mentioned above, the Liskov Substitution Principle is the key point here. Subclass is not the same as subtype. You might create subclasses that are not subtypes (and this is when you should use composition). To understand what a subtype is, let's start by giving an explanation of what a type is. When we say that the number 5 is of type integer, we are stating that 5 belongs to a set of possible values (as an example, see the possible values for the Java primitive types). We are also stating that there is a valid set of methods I can perform on the value, like addition and subtraction.
And finally we are stating that there are a set of properties that are always satisfied, for example, if I add the values 3 and 5, I will get 8 as a result. To give another example, think about the abstract data types, Set of integers and List of integers, the values they can hold are restricted to integers. They both support a set of methods, like add(newValue) and size(). And they both have different properties (class invariant): Sets do not allow duplicates while Lists do allow duplicates (of course there are other properties that they both satisfy). A subtype is also a type, which has a relation to another type, called parent type (or supertype). The subtype must satisfy the features (values, methods and properties) of the parent type. The relation means that in any context where the supertype is expected, it can be substituted by a subtype, without affecting the behaviour of the execution. Let’s look at some code to exemplify what I’m saying. Suppose I write a List of integers (in some sort of pseudo language): class List { data = new Array(); Integer size() { return data.length; } add(Integer anInteger) { data[data.length] = anInteger; } } Then, I write the Set of integers as a subclass of the List of integers: class Set, inheriting from: List { add(Integer anInteger) { if (data.notContains(anInteger)) { super.add(anInteger); } } } Our Set of integers class is a subclass of List of Integers, but is not a subtype, because it does not satisfy all the features of the List class. The values, and the signature of the methods are satisfied but the properties are not. The behaviour of the add(Integer) method has been clearly changed, not preserving the properties of the parent type. Think from the point of view of the client of your classes. They might receive a Set of integers where a List of integers is expected. The client might want to add a value and get that value added to the List even if that value already exists in the List.
But she won't get that behaviour if the value already exists. A big surprise for her! This is a classic example of an improper use of inheritance. Use composition in this case. (a fragment from: use inheritance properly). A: Even though Composition is preferred, I would like to highlight pros of Inheritance and cons of Composition. Pros of Inheritance: * *It establishes a logical "IS A" relation. If Car and Truck are two types of Vehicle (base class), child class IS A base class. i.e. Car is a Vehicle Truck is a Vehicle *With inheritance, you can define/modify/extend a capability * *Base class provides no implementation and sub-class has to override complete method (abstract) => You can implement a contract *Base class provides default implementation and sub-class can change the behaviour => You can re-define contract *Sub-class adds extension to base class implementation by calling super.methodName() as first statement => You can extend a contract *Base class defines structure of the algorithm and sub-class will override a part of algorithm => You can implement Template_method without change in base class skeleton Cons of Composition: * *In inheritance, subclass can directly invoke base class method even though it's not implementing base class method because of IS A relation. If you use composition, you have to add methods in container class to expose contained class API e.g. If Car contains Vehicle and if you have to get price of the Car, which has been defined in Vehicle, your code will be like this: class Vehicle{ protected double getPrice(){ // return price } } class Car{ Vehicle vehicle; protected double getPrice(){ return vehicle.getPrice(); } } A: Personally I learned to always prefer composition over inheritance. There is no programmatic problem you can solve with inheritance which you cannot solve with composition; though you may have to use Interfaces (Java) or Protocols (Obj-C) in some cases.
Since C++ doesn't know any such thing, you'll have to use abstract base classes, which means you cannot get entirely rid of inheritance in C++. Composition is often more logical, it provides better abstraction, better encapsulation, better code reuse (especially in very large projects) and is less likely to break anything at a distance just because you made an isolated change anywhere in your code. It also makes it easier to uphold the "Single Responsibility Principle", which is often summarized as "There should never be more than one reason for a class to change.", and it means that every class exists for a specific purpose and it should only have methods that are directly related to its purpose. Also having a very shallow inheritance tree makes it much easier to keep the overview even when your project starts to get really large. Many people think that inheritance represents our real world pretty well, but that isn't the truth. The real world uses much more composition than inheritance. Pretty much every real world object you can hold in your hand has been composed out of other, smaller real world objects. There are downsides of composition, though. If you skip inheritance altogether and only focus on composition, you will notice that you often have to write a couple of extra code lines that weren't necessary if you had used inheritance. You are also sometimes forced to repeat yourself and this violates the DRY Principle (DRY = Don't Repeat Yourself). Also composition often requires delegation, and a method is just calling another method of another object with no other code surrounding this call. Such "double method calls" (which may easily extend to triple or quadruple method calls and even farther than that) have much worse performance than inheritance, where you simply inherit a method of your parent. 
Calling an inherited method may be equally fast as calling a non-inherited one, or it may be slightly slower, but is usually still faster than two consecutive method calls. You may have noticed that most OO languages don't allow multiple inheritance. There are a couple of cases where multiple inheritance can really buy you something, but those are rather the exception than the rule. Whenever you run into a situation where you think "multiple inheritance would be a really cool feature to solve this problem", you are usually at a point where you should re-think inheritance altogether, since even if it may require a couple of extra code lines, a solution based on composition will usually turn out to be much more elegant, flexible and future proof. Inheritance is really a cool feature, but I'm afraid it has been overused the last couple of years. People treated inheritance as the one hammer that can nail it all, regardless of whether it was actually a nail, a screw, or maybe something completely different. A: A rule of thumb I have heard is inheritance should be used when it's an "is-a" relationship and composition when it's a "has-a". Even with that I feel that you should always lean towards composition because it eliminates a lot of complexity. A: If you understand the difference, it's easier to explain. Procedural Code An example of this is PHP without the use of classes (particularly before PHP5). All logic is encoded in a set of functions. You may include other files containing helper functions and so on and conduct your business logic by passing data around in functions. This can be very hard to manage as the application grows. PHP5 tries to remedy this by offering a more object-oriented design. Inheritance This encourages the use of classes. Inheritance is one of the three tenets of OO design (inheritance, polymorphism, encapsulation). class Person { String Title; String Name; Int Age } class Employee : Person { Int Salary; String Title; } This is inheritance at work.
The Employee "is a" Person or inherits from Person. All inheritance relationships are "is-a" relationships. Employee also shadows the Title property from Person, meaning Employee.Title will return the Title for the Employee and not the Person. Composition Composition is favoured over inheritance. To put it very simply you would have: class Person { String Title; String Name; Int Age; public Person(String title, String name, String age) { this.Title = title; this.Name = name; this.Age = age; } } class Employee { Int Salary; private Person person; public Employee(Person p, Int salary) { this.person = p; this.Salary = salary; } } Person johnny = new Person ("Mr.", "John", 25); Employee john = new Employee (johnny, 50000); Composition is typically "has a" or "uses a" relationship. Here the Employee class has a Person. It does not inherit from Person but instead gets the Person object passed to it, which is why it "has a" Person. Composition over Inheritance Now say you want to create a Manager type so you end up with: class Manager : Person, Employee { ... } This example will work fine, however, what if Person and Employee both declared Title? Should Manager.Title return "Manager of Operations" or "Mr."? Under composition this ambiguity is better handled: Class Manager { public string Title; public Manager(Person p, Employee e) { this.Title = e.Title; } } The Manager object is composed of an Employee and a Person. The Title behaviour is taken from Employee. This explicit composition removes ambiguity among other things and you'll encounter fewer bugs. A: My general rule of thumb: Before using inheritance, consider if composition makes more sense. Reason: Subclassing usually means more complexity and connectedness, i.e. harder to change, maintain, and scale without making mistakes. 
A much more complete and concrete answer from Tim Boudreau of Sun: Common problems with the use of inheritance as I see it are: * *Innocent acts can have unexpected results - The classic example of this is calls to overridable methods from the superclass constructor, before the subclass's instance fields have been initialized. In a perfect world, nobody would ever do that. This is not a perfect world. *It offers perverse temptations for subclassers to make assumptions about the order of method calls and such - such assumptions tend not to be stable if the superclass may evolve over time. See also my toaster and coffee pot analogy. *Classes get heavier - you don't necessarily know what work your superclass is doing in its constructor, or how much memory it's going to use. So constructing some innocent would-be lightweight object can be far more expensive than you think, and this may change over time if the superclass evolves. *It encourages an explosion of subclasses. Class loading costs time; more classes cost memory. This may be a non-issue until you're dealing with an app on the scale of NetBeans, but there, we had real issues with, for example, menus being slow because the first display of a menu triggered massive class loading. We fixed this by moving to more declarative syntax and other techniques, but that cost time to fix as well. *It makes it harder to change things later - if you've made a class public, swapping the superclass is going to break subclasses - it's a choice which, once you've made the code public, you're married to. So if you're not altering the real functionality of your superclass, you get much more freedom to change things later if you use, rather than extend, the thing you need. Take, for example, subclassing JPanel - this is usually wrong; and if the subclass is public somewhere, you never get a chance to revisit that decision. 
If it's accessed as JComponent getThePanel(), you can still do it (hint: expose models for the components within as your API). *Object hierarchies don't scale (or making them scale later is much harder than planning ahead) - this is the classic "too many layers" problem. I'll go into this below, and how the AskTheOracle pattern can solve it (though it may offend OOP purists). ... My take on what to do, if you do allow for inheritance, which you may take with a grain of salt is: * *Expose no fields, ever, except constants *Methods shall be either abstract or final *Call no methods from the superclass constructor ... all of this applies less to small projects than large ones, and less to private classes than public ones A: With all the undeniable benefits provided by inheritance, here are some of its disadvantages. Disadvantages of Inheritance: * *You can't change the implementation inherited from superclasses at runtime (obviously because inheritance is defined at compile time). *Inheritance exposes a subclass to the details of its parent class's implementation; that's why it's often said that inheritance breaks encapsulation (in the sense that you really need to focus on interfaces only, not implementation, so reusing by subclassing is not always preferred). *The tight coupling provided by inheritance makes the implementation of a subclass so bound up with the implementation of a superclass that any change in the parent implementation will force the subclass to change. *Excessive reuse by subclassing can make the inheritance stack very deep and very confusing too. On the other hand, object composition is defined at runtime through objects acquiring references to other objects. In such a case these objects will never be able to reach each other's protected data (no encapsulation break) and will be forced to respect each other's interface. And in this case also, implementation dependencies will be a lot less than in the case of inheritance. 
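The "composition is defined at runtime" point above can be made concrete with a small C++ sketch. All the names here are hypothetical; the idea is that the composed part can be swapped through a setter after construction, which no inheritance relationship can do, since a base class is fixed at compile time.

```cpp
#include <memory>
#include <string>

// An interface that components depend on via composition.
struct Logger {
    virtual ~Logger() = default;
    virtual std::string log(const std::string& msg) const = 0;
};

struct PlainLogger : Logger {
    std::string log(const std::string& msg) const override { return msg; }
};

struct LoudLogger : Logger {
    std::string log(const std::string& msg) const override { return msg + "!!!"; }
};

class Service {
public:
    explicit Service(std::unique_ptr<Logger> l) : logger_(std::move(l)) {}

    // The composed part can be replaced at runtime.
    void setLogger(std::unique_ptr<Logger> l) { logger_ = std::move(l); }

    std::string report() const { return logger_->log("status ok"); }

private:
    std::unique_ptr<Logger> logger_;
};
```

Because Service only sees the Logger interface, it can never reach a concrete logger's private data, which is exactly the "no encapsulation break" property the answer describes.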
A: Inheritance is very powerful, but you can't force it (see: the circle-ellipse problem). If you really can't be completely sure of a true "is-a" subtype relationship, then it's best to go with composition. A: As many people have said, I will first check whether there exists an "is-a" relationship. If it exists I usually check the following: whether the base class can be instantiated. That is, whether the base class can be non-abstract. If it can be non-abstract, I usually prefer composition. E.g. 1. An Accountant is an Employee. But I will not use inheritance, because an Employee object can be instantiated. E.g. 2. A Book is a SellingItem. A SellingItem cannot be instantiated - it is an abstract concept. Hence I will use inheritance. The SellingItem is an abstract base class (or an interface in C#). What do you think about this approach? Also, I support @anon's answer in Why use inheritance at all? The main reason for using inheritance is not as a form of composition - it is so you can get polymorphic behaviour. If you don't need polymorphism, you probably should not be using inheritance. @MatthieuM. says in https://softwareengineering.stackexchange.com/questions/12439/code-smell-inheritance-abuse/12448#comment303759_12448 The issue with inheritance is that it can be used for two orthogonal purposes: interface (for polymorphism) and implementation (for code reuse) REFERENCE * *Which class design is better? *Inheritance vs. Aggregation A: Composition vs. inheritance is a wide subject. There is no real answer for which is better, as I think it all depends on the design of the system. Generally, the type of relationship between objects provides better information for choosing one of them. If the relationship is an "IS-A" relation, then inheritance is the better approach; otherwise, if the relationship is a "HAS-A" relation, then composition is the better approach. It totally depends on the entity relationship. A: I see no one has mentioned the diamond problem, which might arise with inheritance. 
At a glance, if classes B and C inherit from A and both override method X, and a fourth class D inherits from both B and C and does not override X, which implementation of X is D supposed to use? Wikipedia offers a nice overview of the topic being discussed in this question. A: Inheritance creates a strong relationship between a subclass and its superclass; the subclass must be aware of the superclass's implementation details. Creating the superclass is much harder when you have to think about how it can be extended. You have to document class invariants carefully, and state which other methods the overridable methods use internally. Inheritance is sometimes useful, if the hierarchy really represents an is-a relationship. It relates to the Open-Closed Principle, which states that classes should be closed for modification but open to extension. That way you can have polymorphism: a generic method that deals with the supertype and its methods, while via dynamic dispatch the method of the subclass is invoked. This is flexible, and helps to create indirection, which is essential in software (to know less about implementation details). Inheritance is easily overused, though, and creates additional complexity, with hard dependencies between classes. Also, understanding what happens during the execution of a program gets pretty hard due to layers and dynamic selection of method calls. I would suggest using composition as the default. It is more modular, and gives the benefit of late binding (you can change the component dynamically). Also, it's easier to test things separately. And if you need to use a method from a class, you are not forced to be of a certain form (Liskov Substitution Principle). A: Suppose an aircraft has only two parts: an engine and wings. Then there are two ways to design an aircraft class. class Aircraft extends Engine { var wings; } Now your aircraft can start with having fixed wings and change them to rotary wings on the fly. It's essentially an engine with wings. 
But what if I wanted to change the engine on the fly as well? Either the base class Engine exposes a mutator to change its properties, or I redesign Aircraft as: class Aircraft { var wings; var engine; } Now, I can replace my engine on the fly as well. A: If you want the canonical, textbook answer people have been giving since the rise of OOP (which you see many people giving in these answers), then apply the following rule: "if you have an is-a relationship, use inheritance. If you have a has-a relationship, use composition". This is the traditional advice, and if that satisfies you, you can stop reading here and go on your merry way. For everyone else... is-a/has-a comparisons have problems For example: * *A square is-a rectangle, but if your rectangle class has setWidth()/setHeight() methods, then there's no reasonable way to make a Square inherit from Rectangle without breaking Liskov's substitution principle. *An is-a relationship can often be rephrased to sound like a has-a relationship. For example, an employee is-a person, but a person also has-an employment status of "employed". *is-a relationships can lead to nasty multiple inheritance hierarchies if you're not careful. After all, there's no rule in English that states that an object is exactly one thing. *People are quick to pass this "rule" around, but has anyone ever tried to back it up, or explain why it's a good heuristic to follow? Sure, it fits nicely into the idea that OOP is supposed to model the real world, but that's not in and of itself a reason to adopt a principle. See this StackOverflow question for more reading on this subject. To know when to use inheritance vs composition, we first need to understand the pros and cons of each. The problems with implementation inheritance Other answers have done a wonderful job at explaining the issues with inheritance, so I'll try not to delve into too much detail here. 
But, here's a brief list: * *It can be difficult to follow logic that weaves between base- and subclass methods. *Carelessly implementing one method in your class by calling another overridable method will cause you to leak implementation details and break encapsulation, as the end-user could override your method and detect when you internally call it. (See "Effective Java" item 18). *The fragile base class problem, which simply states that your end-user's code will break if they happen to depend on the leakage of implementation details when you attempt to change them. To make matters worse, most OOP languages allow inheritance by default - API designers who aren't proactively preventing people from inheriting from their public classes need to be extra cautious whenever they refactor their base classes. Unfortunately, the fragile base class problem is often misunderstood, causing many to not understand what it takes to maintain a class that anyone can inherit from. *The deadly diamond of death The problems with composition * *It can sometimes be a little verbose. That's it. I'm serious. This is still a real issue and can sometimes create conflict with the DRY principle, but it's generally not that bad, at least compared to the myriad of pitfalls associated with inheritance. When should inheritance be used? Next time you're drawing out your fancy UML diagrams for a project (if you do that), and you're thinking about adding in some inheritance, please adhere to the following advice: don't. At least, not yet. Inheritance is sold as a tool to achieve polymorphism, but bundled with it is a powerful code-reuse system that, frankly, most code doesn't need. The problem is, as soon as you publicly expose your inheritance hierarchy, you're locked into this particular style of code reuse, even if it's overkill to solve your particular problem. To avoid this, my two cents would be to never expose your base classes publicly. * *If you need polymorphism, use an interface. 
*If you need to allow people to customize the behavior of your class, provide explicit hook-in points via the strategy pattern; it's a more readable way to accomplish this, and it's easier to keep this sort of API stable since you're in full control over which behaviors they can and cannot change. *If you're trying to follow the open-closed principle by using inheritance to avoid adding a much-needed update to a class, just don't. Update the class. Your codebase will be much cleaner if you actually take ownership of the code you're hired to maintain instead of trying to tack stuff onto the side of it. If you're scared about introducing bugs, then get the existing code under test. *If you need to reuse code, start out by trying to use composition or helper functions. Finally, if you've decided that there's no other good option, and you must use inheritance to achieve the code reuse that you need, then you can use it, but follow these four P.A.I.L. rules of restricted inheritance to keep it sane. * *Use inheritance as a private implementation detail. Don't expose your base class publicly; use interfaces for that. This lets you freely add or remove inheritance as you see fit without making a breaking change. *Keep your base class abstract. It makes it easier to divide out the logic that needs to be shared from the logic that doesn't. *Isolate your base and child classes. Don't let your subclass override base class methods (use the strategy pattern for that), and avoid having them expect properties/methods to exist on each other; use other forms of code sharing to achieve that. Use appropriate language features to force all methods on the base class to be non-overridable ("final" in Java, or non-virtual in C#). *Inheritance is a last resort. The Isolate rule in particular may sound a little rough to follow, but if you discipline yourself, you'll get some pretty nice benefits. 
In particular, it gives you the freedom to avoid all of the main nasty pitfalls associated with inheritance that were mentioned above. * *It's much easier to follow the code because it doesn't weave in and out of base/sub classes. *You cannot accidentally leak when your methods are internally calling other overridable methods if you never make any of your methods overridable. In other words, you won't accidentally break encapsulation. *The fragile base class problem stems from the ability to depend on accidentally leaked implementation details. Since the base class is now isolated, it will be no more fragile than a class depending on another via composition. *The deadly diamond of death isn't an issue anymore, since there's simply no need to have multiple layers of inheritance. If you have the abstract base classes B and C, which both share a lot of functionality, just move that functionality out of B and C and into a new abstract base class D. Anyone who inherited from B should update to inherit from both B and D, and anyone who inherited from C should inherit from C and D. Since your base classes are all private implementation details, it shouldn't be too difficult to figure out who's inheriting from what, to make these changes. Conclusion My primary suggestion would be to use your brain on this matter. What's far more important than a list of dos and don'ts about when to use inheritance is an intuitive understanding of inheritance and its associated pros and cons, along with a good understanding of the other tools out there that can be used instead of inheritance (composition isn't the only alternative; for example, the strategy pattern is an amazing tool that's forgotten far too often). Perhaps when you have a good, solid understanding of all of these tools, you'll choose to use inheritance more often than I would recommend, and that's completely fine. 
At least, you're making an informed decision, and aren't just using inheritance because that's the only way you know how to do it. Further reading: * *An article I wrote on this subject, that dives even deeper and provides examples. *A webpage talking about three different jobs that inheritance does, and how those jobs can be done via other means in the Go language. *A list of reasons why it can be good to declare your class as non-inheritable (e.g. "final" in Java). *The "Effective Java" book by Joshua Bloch, item 18, which discusses composition over inheritance, and some of the dangers of inheritance. A: Prefer composition over inheritance as it is more malleable / easy to modify later, but do not use a compose-always approach. With composition, it's easy to change behavior on the fly with Dependency Injection / Setters. Inheritance is more rigid as most languages do not allow you to derive from more than one type. So the goose is more or less cooked once you derive from TypeA. My acid test for the above is: * *Does TypeB want to expose the complete interface (all public methods no less) of TypeA such that TypeB can be used where TypeA is expected? Indicates Inheritance. * *e.g. A Cessna biplane will expose the complete interface of an airplane, if not more. So that makes it fit to derive from Airplane. *Does TypeB want only some/part of the behavior exposed by TypeA? Indicates need for Composition. * *e.g. A Bird may need only the fly behavior of an Airplane. In this case, it makes sense to extract it out as an interface / class / both and make it a member of both classes. Update: Just came back to my answer and it seems now that it is incomplete without a specific mention of Barbara Liskov's Liskov Substitution Principle as a test for 'Should I be inheriting from this type?' A: You need to have a look at The Liskov Substitution Principle in Uncle Bob's SOLID principles of class design. 
:) A: Another, very pragmatic reason to prefer composition over inheritance has to do with your domain model, and mapping it to a relational database. It's really hard to map inheritance to the SQL model (you end up with all sorts of hacky workarounds, like creating columns that aren't always used, using views, etc). Some ORMs try to deal with this, but it always gets complicated quickly. Composition can be easily modeled through a foreign-key relationship between two tables, but inheritance is much harder. A: While in short words I would agree with "Prefer composition over inheritance", very often for me it sounds like "prefer potatoes over coca-cola". There are places for inheritance and places for composition. You need to understand the difference; then this question will disappear. What it really means for me is "if you are going to use inheritance - think again, chances are you need composition". You should prefer potatoes over coca-cola when you want to eat, and coca-cola over potatoes when you want to drink. Creating a subclass should mean more than just a convenient way to call superclass methods. You should use inheritance when the subclass "is-a" superclass both structurally and functionally, when it can be used as the superclass and you are going to use it that way. If that is not the case - it is not inheritance, but something else. Composition is when your objects consist of other objects, or have some relationship to them. So for me it looks like if someone does not know whether he needs inheritance or composition, the real problem is that he does not know whether he wants to drink or to eat. Think about your problem domain more, understand it better. A: To address this question from a different perspective for newer programmers: Inheritance is often taught early when we learn object-oriented programming, so it's seen as an easy solution to a common problem. I have three classes that all need some common functionality. 
So if I write a base class and have them all inherit from it, then they will all have that functionality and I'll only need to maintain it in one place. It sounds great, but in practice it almost never, ever works, for one of several reasons: * *We discover that there are some other functions that we want our classes to have. If the way that we add functionality to classes is through inheritance, we have to decide - do we add it to the existing base class, even though not every class that inherits from it needs that functionality? Do we create another base class? But what about classes that already inherit from the other base class? *We discover that for just one of the classes that inherits from our base class we want the base class to behave a little differently. So now we go back and tinker with our base class, maybe adding some virtual methods, or even worse, some code that says, "If I'm of inherited type A, do this, but if I'm of inherited type B, do that." That's bad for lots of reasons. One is that every time we change the base class, we're effectively changing every inherited class. So we're really changing class A, B, C, and D because we need a slightly different behavior in class A. As careful as we think we are, we might break one of those classes for reasons that have nothing to do with those classes. *We might know why we decided to make all of these classes inherit from each other, but it might not (probably won't) make sense to someone else who has to maintain our code. We might force them into a difficult choice - do I do something really ugly and messy to make the change I need (see the previous bullet point), or do I just rewrite a bunch of this? In the end, we tie our code in some difficult knots and get no benefit whatsoever from it except that we get to say, "Cool, I learned about inheritance and now I used it." That's not meant to be condescending because we've all done it. But we all did it because no one told us not to. 
As soon as someone explained "favor composition over inheritance" to me, I thought back over every time I tried to share functionality between classes using inheritance and realized that most of the time it didn't really work well. The antidote is the Single Responsibility Principle. Think of it as a constraint. My class must do one thing. I must be able to give my class a name that somehow describes that one thing it does. (There are exceptions to everything, but absolute rules are sometimes better when we're learning.) It follows that I cannot write a base class called ObjectBaseThatContainsVariousFunctionsNeededByDifferentClasses. Whatever distinct functionality I need must be in its own class, and then other classes that need that functionality can depend on that class, not inherit from it. At the risk of oversimplifying, that's composition - composing multiple classes to work together. And once we form that habit we find that it's much more flexible, maintainable, and testable than using inheritance. A: This rule is complete nonsense. Why? The reason is that in every case it is possible to tell whether to use composition or inheritance. This is determined by the answer to a question: "IS something A something else?" or "HAS something A something else?" You cannot "prefer" to make something be something else or to have something else. Strict logical rules apply. Also there are no "contrived examples", because in every situation an answer to this question can be given. If you cannot answer this question, there is something else wrong. This includes overlapping responsibilities of classes, which are usually the result of wrong use of interfaces, and less often of rewriting the same code in different classes. To avoid these situations I also recommend using good names for classes that fully reflect their responsibilities. 
A: Just my opinion: inheritance should only be used when: * *Both classes are in the same logical domain *The subclass is a proper subtype of the superclass *The superclass's implementation is necessary or appropriate for the subclass *The enhancements made by the subclass are primarily additive. There are times when all of these things converge: * *Higher-level domain modeling *Frameworks and framework extensions *Differential programming A: What do you want to force yourself (or another programmer) to adhere to, and when do you want to allow yourself (or another programmer) more freedom? It has been argued that inheritance is helpful when you want to force someone into a way of dealing with/solving a particular problem so they can't head off in the wrong direction. Is-a versus has-a is a helpful rule of thumb. A: Simply put, implementing an interface is like rules you (the class) should follow, but extending is like using common code for a lot of classes, where maybe there is just one method you need to override.
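The "implementing is rules to follow" idea from the last answer, and the abstract-base-class test from an earlier one (only inherit when the base is an abstract concept like SellingItem), can be sketched together in C++. The names and members here are illustrative, not from any of the answers.

```cpp
#include <string>
#include <utility>

// "Rules to follow": a pure interface. Every implementer must provide these.
struct SellingItem {
    virtual ~SellingItem() = default;
    virtual std::string describe() const = 0;  // rule 1
    virtual double price() const = 0;          // rule 2
};

// Book agrees to the rules; it inherits no code, only the contract.
class Book : public SellingItem {
public:
    Book(std::string title, double price)
        : title_(std::move(title)), price_(price) {}

    std::string describe() const override { return "Book: " + title_; }
    double price() const override { return price_; }

private:
    std::string title_;
    double price_;
};
```

A Book can then be used anywhere a SellingItem is expected, which is the polymorphism that several answers give as the one good reason to inherit.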
{ "language": "en", "url": "https://stackoverflow.com/questions/49002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1940" }
Q: REST: how to handle query parameters when PUT to a resource? I have a REST data service where I want to allow the users to create new items with HTTP PUT using different formats like json, xml, or csv. I'm unsure how to best handle the format specification in the URL: PUT /resource/ID/json PUT /resource/ID/xml or PUT /resource/ID?format=json PUT /resource/ID?format=xml So what is the best way to specify a format indicator? If I specify the format with a query parameter and want to do a PUT, how can I do this with curl? curl -T test/data.json -d "format=json" http://localhost:5000/resource/33 does not work. curl -T test/data.json http://localhost:5000/update?format=json works, but I would rather let curl build the query parameters instead of adding them myself. A: A general principle of RESTful web services is to use the features built into HTTP, when applicable. In this case, you can indicate the format of your PUT request's content by setting the Content-Type header to application/json or application/xml.
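Building on that answer, the question's own curl calls can set this header directly with curl's -H flag (the URLs and file paths are the hypothetical ones from the question):

```shell
# Send the body with an explicit Content-Type instead of a format parameter.
curl -T test/data.json -H "Content-Type: application/json" http://localhost:5000/resource/33
curl -T test/data.xml  -H "Content-Type: application/xml"  http://localhost:5000/resource/33
```

Note that -T/--upload-file already makes curl issue a PUT with the file as the request body, so no method flag or query parameter is needed at all.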
{ "language": "en", "url": "https://stackoverflow.com/questions/49011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Programmatically retrieve database table creation script in .NET I need to be able to retrieve a CREATE TABLE script to recreate a specific table in a SQL Server (2000) database, programmatically (I'm using C#). Obviously you can do this with Enterprise Manager, but I would like to automate it. I've been playing around with SQLDMO, which offers a Backup command, but I'm not sure if this will give a SQL CREATE TABLE script, and it seems to require the creation of a SQL Service device; I would prefer to avoid modifying the database. A: Take a look at my solution. It's a SQL script which uses the INFORMATION_SCHEMA tables to get the necessary information. It's basic, but might work for you. A: You can use SMO to generate the scripts. For more info see: http://www.sqlteam.com/article/scripting-database-objects-using-smo-updated
{ "language": "en", "url": "https://stackoverflow.com/questions/49014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: What does the const operator mean when used with a method in C++? Given a declaration like this: class A { public: void Foo() const; }; What does it mean? Google turns up this: Member functions should be declared with the const keyword after them if they can operate on a const (this) object. If the function is not declared const, it cannot be applied to a const object, and the compiler will give an error message. But I find that somewhat confusing; can anyone out there put it in better terms? Thanks. A: This is not an answer, just a side comment. It is highly recommended to declare variables and constants const as much as possible. * *This communicates your intent to users of your class (even/especially yourself). *The compiler will keep you honest to those intentions. -- i.e., it's like compiler-checked documentation. *By definition, this prevents state changes you weren't expecting and can, possibly, allow you to make reasonable assumptions while in your methods. *const has a funny way of propagating through your code. Thus, it's a really good idea to start using const as early and as often as possible. Deciding to start const-ifying your code late in the game can be painful (easy, but annoying). If you're using a language with static, compile time checks it's a great idea to make as much use of them as possible... it's just another kind of testing really. A: Functions with the const qualifier are not allowed to modify any member variables. For example: class A { int x; mutable int y; void f() const { x = 1; // error y = 1; // ok because y is mutable } }; A: C++ objects can be declared to be const: const A obj; When an object is const, the only member functions that can be called on that object are functions declared to be const. Making an object const can be interpreted as making the object read-only. A const object cannot be changed, i.e. no data members of the object can be changed. 
Declaring a member function const means that the function is not allowed to make any changes to the data members of the object. A: Two suggested best practices from experience: (1) Declare const functions whenever possible. At first, I found this to be just extra work, but then I started passing my objects to functions with signatures like f(const Object& o), and suddenly the compiler barfed on a line in f such as o.GetAValue(), because I hadn't marked GetAValue as a const function. This can surprise you especially when you subclass something and don't mark your version of the virtual methods as const - in that case the compilation could fail on some function you've never heard of before that was written for the base class. (2) Avoid mutable variables when it's practical. A tempting trap can be to allow read operations to alter state, such as if you're building a "smart" object that does lazy or asynchronous I/O operations. If you can manage this with only one small mutable variable (like a bool), then, in my experience, this makes sense. However, if you find yourself marking every member variable as mutable in order to keep some operations const, you're defeating the purpose of the const keyword. What can go wrong is that a function which thinks it's not altering your class (since it only calls const methods) may invoke a bug in your code, and it could take a lot of effort to even realize this bug is in your class, since the other coder (rightly) assumes your data is const because he or she is only calling const methods. A: const has a funny way of propagating through your code. Thus, it's a really good idea to start using const as early and as often as possible. Deciding to start const-ifying your code late in the game can be painful (easy, but annoying). Additionally, you will easily run into problems if methods that should be const aren't! This will creep through the code as well, and make it worse and worse. A: Consider a variation of your class A. 
class A { public: void Foo() const; void Moo(); private: int m_nState; // Could add mutable keyword if desired int GetState() const { return m_nState; } void SetState(int val) { m_nState = val; } }; const A *A1 = new A(); A *A2 = new A(); A1->Foo(); // OK A2->Foo(); // OK A1->Moo(); // Error - Not allowed to call non-const function on const object instance A2->Moo(); // OK The const keyword on a function declaration indicates to the compiler that the function is contractually obligated not to modify the state of A. Thus you are unable to call non-const functions within A::Foo nor change the value of member variables. To illustrate, Foo() may not invoke A::SetState as it is declared non-const, A::GetState however is ok because it is explicitly declared const. The member m_nState may not be changed either unless declared with the keyword mutable. One example of this usage of const is for 'getter' functions to obtain the value of member variables. @1800 Information: I forgot about mutable! The mutable keyword instructs the compiler to accept modifications to the member variable which would otherwise cause a compiler error. It is used when the function needs to modify state but the object is considered logically consistent (constant) regardless of the modification. A: that will cause the method to not be able to alter any member variables of the object
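Pulling the answers together, here is a small self-contained sketch (the class and members are invented for illustration) showing all three pieces: a const member function, the compile-time restriction on const objects, and mutable bookkeeping that a const function may still touch.

```cpp
class Counter {
public:
    // const member function: callable on const objects; promises not to
    // modify the object's logical state.
    int value() const { return value_; }

    // Non-const: mutates state, so it is not callable on a const Counter.
    void increment() { ++value_; }

    // mutable lets a const function update bookkeeping that is not part
    // of the object's logical state (here, a count of reads).
    int reads() const { ++reads_; return reads_; }

private:
    int value_ = 0;
    mutable int reads_ = 0;
};
```

Given `const Counter& rc = c;`, calling `rc.value()` and `rc.reads()` compiles, while `rc.increment()` is rejected by the compiler, which is exactly the behavior the quoted definition describes.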
{ "language": "en", "url": "https://stackoverflow.com/questions/49035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Different sizeof results Why does n not equal 8 in the following function? void foo(char cvalue[8]) { int n = sizeof (cvalue); } But n does equal 8 in this version of the function: void bar() { char cvalue[8]; int n = sizeof (cvalue); } A: Because you can't pass entire arrays as function parameters in C. You're actually passing a pointer to the array; the brackets are syntactic sugar. There are no guarantees the array you're pointing to has size 8, since you could pass this function any character pointer you want. // These all do the same thing void foo(char cvalue[8]) void foo(char cvalue[]) void foo(char *cvalue) A: C and C++ arrays are not first-class objects; you cannot pass arrays to functions, they always decay to pointers. You can, however, pass pointers and references to arrays. This prevents the array bounds from decaying. So this is legal: template<typename T, size_t N> void foo(const T(&arr)[N]) { int n = sizeof(arr); } A: In the first example, cvalue, as a passed parameter, is really just a pointer to a character array, and when you take the sizeof() of it, you get the size of the pointer. In the second case, where you've declared it as a local variable, you get the size of the entire array. A: The size of the parameter on 32-bit systems will be 4, and on 64-bit systems compiled with -m64 it will be 8. This is because arrays are passed as pointers in functions. The pointer is merely a memory address.
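Both claims above can be checked directly. The sketch below contrasts the decayed parameter with the array-reference template from the second answer; the function names are illustrative.

```cpp
#include <cstddef>

// Despite the [8], this parameter is really a char* (the brackets are sugar),
// so sizeof yields the pointer size, not 8.
std::size_t decayed_size(char cvalue[8]) {
    return sizeof cvalue;  // sizeof(char*)
}

// Passing the array by reference keeps the bound N visible to sizeof.
template <std::size_t N>
std::size_t real_size(const char (&arr)[N]) {
    return sizeof arr;  // N * sizeof(char) == N
}
```

For a `char buf[8]`, `real_size(buf)` returns 8, while `decayed_size(buf)` returns `sizeof(char*)` — 4 or 8 depending on the target, as the last answer notes.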
{ "language": "en", "url": "https://stackoverflow.com/questions/49046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Renaming the containing project folder in VS.net under TFS I have a vs.net project, and after some refactoring, have modified the name of the project. How can I easily rename the underlying windows folder name to match this new project name under a TFS controlled project and solution? Note, I used to be able to do this by fiddling with things in the background using SourceSafe ... A: Just right click on the folder in TFS, and select Rename. Once you commit the rename, TFS will make the changes on disk for you. As Kevin pointed out, you will want to make sure that everything is checked in, because TFS will remove the old folder and everything in it, and pull down the renamed folder with the current version of the files in it. One final note: You can't rename a folder that you haven't mapped, or that you haven't done a "Get" from. I don't know why, but TFS will disable the Rename option in these cases. At least that's what happened to me, if I remember correctly. A: * *Check in all pending changes within the folder and ensure that all other team members do the same. *Ensure that you have a copy of the folder in your working directory (otherwise, you will not have the option to rename the folder in the Source Control Explorer in the next step). Get latest version on the folder to get a copy if you don't already have one. *Close the solution. *Rename the folder within the Source Control Explorer. This will move all of the files that are tracked in source control from the original folder on your file system to the new one. Note that files not tracked by source control will remain in the original folder - you will probably want to remove this folder once you have confirmed that there are no files there that you need. *Open the solution and select 'No' when prompted to get projects that were newly added to the solution from source control. You will get a warning that one of the projects in the solution could not be loaded. *Select the project within Solution Explorer.
Note that it will be grayed out and marked as 'Unavailable'. *Open the Properties pane. *Edit the 'File Path' property either directly or using the '...' button. Note also that this property is only editable in Visual Studio 2010. In newer versions of Visual Studio, you will need to manually edit the project paths within the solution file. *Right-click on the project in the Solution Explorer and select Reload Project from the context menu. If you get an error message saying that the project cannot be loaded from the original folder, try closing the solution, deleting the suo file in the same folder as the solution file, then reopening the solution. *Check in the changes as a single changeset. *Have other team members 'Get latest version' for the solution (right click on the solution within Solution Explorer and select 'Get Latest Version' from the context menu). Note: Other suggested solutions that involve removing and then re-adding the project to the solution will break project references. If you perform these steps then you might also consider renaming the following to suit. * *Project File *Default/Root Namespace *Assembly Also, consider modifying the values of the following assembly attributes. * *AssemblyProductAttribute *AssemblyDescriptionAttribute *AssemblyTitleAttribute A: You could just rename the project (.Xproj file and project folder) in TFS, delete the local folder structure and all of its contents, then do a get latest for the project. All of this depends on your source repository being completely up to date and compilable. A: Here are steps that worked for me in Visual Studio 2008 with TFS: * *Close solution. *Rename project folders in Source Control Explorer (right-click -> rename). This duplicates code into newly named folders.
*Open the solution, and in Solution Explorer, remove the old folders/projects and add the new, properly named, duplicates, (on old projects, right-click -> remove, then on the solution, right-click->Add->Existing Project...) OR: After step 2, you can open the solution's .sln file in a text editor, and manually update the project folder names. If you do this, you might need to manually check-out the .sln file to be sure your changes will be checked in (<- important!). A: My particular configuration is VS2010 connecting to TFS2008. I tried some of the other solutions here, but had problems. I found the following worked for me:- * *Remove project in folder to be renamed from solution *Save solution *Rename folder containing removed project in TFS source control (this renames locally on your hard disk) *Add project back to the solution from the new location *Save solution *Commit to source control Now, you'll have the folder rename and the solution re-map all committed under one changeset. A: For Visual Studio 2019 and ASP.NET Core 2.2 I had to modify Scott Munro's answer slightly. Here's what worked for me: Before you start: Check in all changes and make a backup of your solution. * *Rename the project using the Solution Explorer (i.e. right click the project and click rename). *Rename the project folder in Source Control Explorer. *Open your solution's .sln file in a text editor, and fix the file path. (Replace any references to "OldProjectName" with "NewProjectName"). *Reload your solution in Visual Studio. *Find and replace all the old namespaces with the new project name. *Done. Check in changes. I had to do step 1 first in order to get the project to load after fixing the file path in step 3.
{ "language": "en", "url": "https://stackoverflow.com/questions/49066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "68" }
Q: Adding command recall to a Unix command line application I'm working on a command line application for Solaris, written in Java6. I'd like to be able to scroll through a history of previous commands using the up and down arrows like many Unix tools allow (shells, VIM command mode prompt, etc). Is there any standard way of achieving this, or do I have to roll my own? A: Yes, use the GNU readline library. A: I think you are looking for something like JLine, but I've never used it so cannot attest to its quality. It can apparently deal with autocompletion and command line history, and the last release was recent (February this year) so it's by no means dead. A: ledit is great on linux for that sort of thing. It's probably easily compiled on solaris. Clarification: ledit wraps the call to your other command line app, and can even be passed a file to persistently store your history. Here's the homepage: http://cristal.inria.fr/~ddr/ledit/ A: There is a SourceForge project, http://java-readline.sourceforge.net/, that provides JNI-based bindings to GNU readline. I've played around with it (not used in an actual project), and it certainly covers all of the functionality. A: warning: GNU readline is subject to GPL licensing terms: Readline is free software, distributed under the terms of the GNU General Public License, version 2. This means that if you want to use Readline in a program that you release or distribute to anyone, the program must be free software and have a GPL-compatible license. If you would like advice on making your license GPL-compatible, contact [email protected]. In other words, use of Readline spreads the GPL-ness from a library to the entire program. (Contrast with LGPL, which allows runtime linking to a library, and requires open-sourcing only for improvements to the library itself.) For those of us in the commercial world, even if we're not developing commercial applications, this is a show-stopper.
Anyway, the Wikipedia page lists several alternatives, including JLine, which sounds promising. Just as an aside: I work for a company that designs medical products. We make zero (0) dollars off of PC software. Nearly all our software runs on the embedded systems that we design (and we don't make any money off sales/upgrades of this software, only the products themselves); sometimes we do have software diagnostic tools that can run on the end-users' PCs. (For design/manufacture/test software that's not released to customers, I would think it might be possible to use GPL libraries, but I'm not sure.) Medical products have fairly tight controls; you basically have to prove to the FDA that it's safe for users. It's not like the end user can decide "oh, I don't like this software, I'll just tweak it or use company XYZ's aftermarket replacement" -- that would leave device manufacturers open to a huge liability.
{ "language": "en", "url": "https://stackoverflow.com/questions/49075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Where WCF and ADO.Net Data services stand? I am a bit confused about ADO.Net Data Services. Is it just meant for creating RESTful web services? I know WCF started in the SOAP world but now I hear it has good support for REST. Same goes for ADO.Net data services where you can make it work in an RPC model if you cannot look at everything from a resource oriented view. At least from the demos I saw recently, it looks like ADO.Net Data Services is built on the WCF stack on the server. Please correct me if I am wrong. I am not intending to start a REST vs SOAP debate but I guess things are not that crystal clear anymore. Any suggestions or guidelines on what to use where? A: In my view ADO.Net data services is for creating RESTful services that are closely aligned with your domain model, that is, the models themselves are published rather than, say, some form of DTO etc. Using it for RPC style services seems like a bad fit, though unfortunately even some very basic features like being able to perform filtered counts etc. aren't available, which often means you'll end up using some RPC just to meet the requirements of your customers, i.e. so you can display a paged grid etc. WCF 3.5 pre-SP1 was a fairly weak RESTful platform; with SP1 things have improved in both Uri templates and with the availability of AtomPub support, such that it's becoming more capable, but they don't really provide any elegant solution for supporting say JSON, XML, ATOM or even a more esoteric payload like CSV simultaneously, short of having to make use of URL rewriting and different extensions, method name munging etc. - rather than just selecting a serializer/deserializer based on the headers of the request. With WCF it's still difficult to create services that work in a more natural RESTful manner, i.e. where resources include urls, and you can transition state by navigating through them - it's a little clunky - ADO.Net data services does this quite well with its AtomPub support though.
My recommendation would be to use web services where there naturally are services and strong service boundaries are being enforced; use ADO.Net Data services for rich web-style clients (websites, ajax, silverlight) where the composability of the url queries can save a lot of plumbing and your domain model is pretty basic... and roll your own REST layer (perhaps using an MVC framework as a starting point) if you need complete control over the information, i.e. if you're publishing an API for other developers to consume on a social platform etc. My 2¢ worth! A: Using WCF's rest binding is very valid when working with code that doesn't interact with a database at all. The HTTP verbs don't always have to go against a data provider. A: Actually, there are options to filter and skip to get the page-like feature among others. See here:
{ "language": "en", "url": "https://stackoverflow.com/questions/49089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Online Interactive Consoles Where can I find an online interactive console for programming language or api? * *Ruby *Python *Groovy *PHP? *Perl? *Scheme *Java *C? A: @kuszi Put a great answer as a comment to the question, but I almost missed it because it was a comment rather than the answer. This link goes to a huuuge list of REPs and REPLs for tons of languages. A: * *Ruby *Python A: repl.it supports multiple languages including Python, Ruby, Lua, Scheme, CoffeeScript, QBasic, Forth,...the list goes on. A: Google has an online interactive shell for Python. A: Skulpt is a Python implementation in JavaScript. Pretty cool. A: _Why made one for Ruby A: http://www.codepad.org/? It has support for a few languages, including perl, scheme, c/c++, python, lua and more. A: For Java you could try http://www.javarepl.com (or console version at https://github.com/albertlatacz/java-repl) A: You can play around with jsScheme for Scheme, but it's a toy and shouldn't replace a console-based interpreter. A: Google Chrome Python shell https://chrome.google.com/extensions/detail/gdiimmpmdoofmahingpgabiikimjgcia A: You can try this http://doc.pyschools.com/console. It is actually an editor, and is good for testing your python code online when you do not have it installed on your computer. A: python web console, and I was able to run the code below # Script text here import itertools g = itertools.chain("AB", range(2)) print g.next() print g.next() print g.next() print g.next() A: http://repl.it/ is a Python in a browser without Java or Silverlight (as well as dozen of other languages compiled to JavaScript). A: Firebug Lite for Javascript. And, Rainbow 9 was one of the first examples of online REPLs. 
A: http://lotrepls.appspot.com/ is a console that works reasonably well for all these scripting languages: * *python *ruby *groovy *beanshell *clojure *javascript *scala *scheme Just hit CTRL+SPACE to switch languages or use the metacommand '/switch', for example '/switch clojure' to start coding in clojure.
{ "language": "en", "url": "https://stackoverflow.com/questions/49092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: Can cout alter variables somehow? So I have a function that looks something like this: float function(){ float x = SomeValue; return x / SomeOtherValue; } At some point, this function overflows and returns a really large negative value. To try and track down exactly where this was happening, I added a cout statement so that the function looked like this: float function(){ float x = SomeValue; cout << x; return x / SomeOtherValue; } and it worked! Of course, I solved the problem altogether by using a double. But I'm curious as to why the function worked properly when I couted it. Is this typical, or could there be a bug somewhere else that I'm missing? (If it's any help, the value stored in the float is just an integer value, and not a particularly big one. I just put it in a float to avoid casting.) A: Printing a value to cout should not change the value of the parameter in any way at all. However, I have seen similar behaviour, where adding debugging statements causes a change in the value. In those cases, and probably this one as well, my guess was that the additional statements were causing the compiler's optimizer to behave differently, and so generate different code for your function. Adding the cout statement means that the value of x is used directly. Without it the optimizer could remove the variable, so changing the order of the calculation and therefore changing the answer. A: As an aside, it's always a good idea to declare immutable variables using const: float function(){ const float x = SomeValue; cout << x; return x / SomeOtherValue; } Among other things this will prevent you from unintentionally passing your variables to functions that may modify them via non-const references. A: Welcome to the wonderful world of floating point. The answer you get will likely depend on the floating point model you compiled the code with. This happens because of the difference between the IEEE spec and the hardware the code is running on.
Your CPU likely has 80 bit floating point registers that get used to hold the 32-bit float value. This means that there is far more precision while the value stays in a register than when it is forced to a memory address (also known as 'homing' the register). When you passed the value to cout the compiler had to write the floating point to memory, and this results in a loss of precision and interesting behaviour WRT overflow cases. See the MSDN documentation on VC++ floating point switches. You could try compiling with /fp:strict and seeing what happens. A: cout causes a reference to the variable, which often will cause the compiler to spill it to the stack. Because it is a float, this likely causes its value to be truncated from the double or long double representation it would normally have. Calling any function (non-inlined) that takes a pointer or reference to x should end up causing the same behavior, but if the compiler later gets smarter and learns to inline it, you'll be equally screwed :) A: I don't think the cout has any effect on the variable; the problem would have to be somewhere else.
{ "language": "en", "url": "https://stackoverflow.com/questions/49098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What's the cleanest way to simulate pass-by-reference in Actionscript 3.0? Actionscript 3.0 (and I assume Javascript and ECMAScript in general) lacks pass-by-reference for native types like ints. As a result I'm finding getting values back from a function really clunky. What's the normal pattern to work around this? For example, is there a clean way to implement swap( intA, intB ) in Actionscript? A: I believe the best you can do is pass a container object as an argument to a function and change the values of some properties in that object: function swapAB(aValuesContainer:Object):void { if (!(aValuesContainer.hasOwnProperty("a") && aValuesContainer.hasOwnProperty("b"))) throw new ArgumentError("aValuesContainer must have properties a and b"); var tempValue:int = aValuesContainer["a"]; aValuesContainer["a"] = aValuesContainer["b"]; aValuesContainer["b"] = tempValue; } var ints:Object = {a:13, b:25}; swapAB(ints); A: I suppose an alternative would be somewhere defining this sort of thing ... public class Reference { public var value:*; } Then use functions that take some number of Reference arguments to act as "pointers" if you're really just looking for "out" parameters and either initialize them on the way in or not, and your swap would become: function swap(a:Reference, b:Reference):void { var tmp:* = a.value; a.value = b.value; b.value = tmp; } And you could always go nuts and define specific IntReference, StringReference, etc. A: This is nitpicking, but int, String, Number and the others are passed by reference, it's just that they are immutable. Of course, the effect is the same as if they were passed by value. A: You could also use a wrapper instead of int: public class Integer { public var value:int; public function Integer(value:int) { this.value = value; } } Of course, this would be more useful if you could use operator overloading... A: Just look at some Java code.
Java has had the convention that reference types are passed by reference and primitive types are passed by value since its inception. It's a very good model in many ways. But talking about swap, the best and easiest way to do a swap in Java/AS3 is with the following three lines: var temp:int = array[i]; array[i] = array[j]; array[j] = temp; There's not really any reason to use a function to do a simple swap, when you can do it faster with just 3 lines. A: It is annoying. But if you use different idioms than in e.g. C#, you can get reasonable-quality results. If you need to pass a lot of parameters back and forth, pass in an object filled with the needed data, and change the object's parameters when you return. The Object class is for just this sort of thing. If you just need to return a bunch of data, return an Object. This is more in keeping with the ECMAScript style than pass-by-ref semantics. A: Destructuring assignment (e.g. [a,b] = [b,a]) isn't defined in the ECMA-262 3 specification, and it's not implemented in JavaScript 1.5, which is the version equivalent to the JScript implementation in IE. I've seen this syntax in the AS4 specifications preview though, and I believe it's part of JavaScript 1.7. A: If ActionScript works like Javascript, [a,b] = [b,a]
{ "language": "en", "url": "https://stackoverflow.com/questions/49107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How do I write a 'for' loop in Bash? I'm looking for the basic loop like: for(int i = 0; i < MAX; i++) { doSomething(i); } but for Bash. A: for ((i = 0 ; i < max ; i++ )); do echo "$i"; done A: #! /bin/bash function do_something { echo value=${1} } MAX=4 for (( i=0; i<MAX; i++ )) ; { do_something ${i} } Here's an example that can also work in older shells, while still being efficient for large counts: Z=$(date) awk 'BEGIN { for ( i=0; i<4; i++ ) { print i,"hello",ENVIRON["Z"]; } }' But good luck doing useful things inside of awk: How do I use shell variables in an awk script? A: The Bash for consists on a variable (the iterator) and a list of words where the iterator will, well, iterate. So, if you have a limited list of words, just put them in the following syntax: for w in word1 word2 word3 do doSomething($w) done Probably you want to iterate along some numbers, so you can use the seq command to generate a list of numbers for you: (from 1 to 100 for example) seq 1 100 and use it in the for loop: for n in $(seq 1 100) do doSomething($n) done Note the $(...) syntax. It's a Bash behaviour, and it allows you to pass the output from one command (in our case from seq) to another (the for). This is really useful when you have to iterate over all directories in some path, for example: for d in $(find $somepath -type d) do doSomething($d) done The possibilities are infinite to generate the lists. A: Bash 3.0+ can use this syntax: for i in {1..10} ; do ... ; done ...which avoids spawning an external program to expand the sequence (such as seq 1 10). Of course, this has the same problem as the for(()) solution, being tied to Bash and even a particular version (if this matters to you). A: I commonly like to use a slight variant on the standard for loop. I often use this to run a command on a series of remote hosts. I take advantage of Bash's brace expansion to create for loops that allow me to create non-numerical for loops. 
Example: I want to run the uptime command on frontend hosts 1-5 and backend hosts 1-3: % for host in {frontend{1..5},backend{1..3}}.mycompany.com do ssh $host "echo -n $host; uptime" done I typically run this as a single-line command with semicolons on the ends of the lines instead of the more readable version above. The key usage considerations are that braces allow you to specify multiple values to be inserted into a string (e.g. pre{foo,bar}post results in prefoopost, prebarpost) and allow counting/sequences by using the double periods (you can use a..z, etc.). However, the double period syntax is a new feature of Bash 3.0; earlier versions will not support this. A: Try the Bash built-in help: help for Output for: for NAME [in WORDS ... ;] do COMMANDS; done The `for' loop executes a sequence of commands for each member in a list of items. If `in WORDS ...;' is not present, then `in "$@"' is assumed. For each element in WORDS, NAME is set to that element, and the COMMANDS are executed. for ((: for (( exp1; exp2; exp3 )); do COMMANDS; done Equivalent to (( EXP1 )) while (( EXP2 )); do COMMANDS (( EXP3 )) done EXP1, EXP2, and EXP3 are arithmetic expressions. If any expression is omitted, it behaves as if it evaluates to 1. A: From this site: for i in $(seq 1 10); do echo $i done A: If you're interested only in Bash, the "for(( ... ))" solution presented above is the best, but if you want something POSIX SH compliant that will work on all Unices, you'll have to use "expr" and "while", and that's because "(())" or "seq" or "i=i+1" are not that portable among various shells. A: I use variations of this all the time to process files... for files in *.log; do echo "Do stuff with: $files"; echo "Do more stuff with: $files"; done; If processing lists of files is what you're interested in, look into the -execdir option for find.
{ "language": "en", "url": "https://stackoverflow.com/questions/49110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "118" }
Q: What's the best .NET library for OpenID and ASP.NET MVC? I'm looking at using OpenID for my authentication scheme and wanted to know what the best .NET library is to use for MVC specific applications? thx A: There's another library called ExtremeSwank. This article by Andrew Arnott, a dotnetopenid developer, might also help you: "Why DotNetOpenID as your C# OpenID library of choice". A: .Net OpenID project is the best library to use right now that I know of. I think SO used it also. The source includes a sample ASP.NET MVC project using the library. Scott Hanselman did a post on how to use the .Net OpenID project inside of ASP.NET MVC. A: We have been using the .Net Open Id project and are pretty happy with it so far. Andrew Arnott does a great job of answering the queries and suggesting workarounds if you are stuck. Give it a try and you will love it :) A: The project has moved to http://www.dotnetopenauth.net/
{ "language": "en", "url": "https://stackoverflow.com/questions/49134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Calling python from a c++ program for distribution I would like to call python script files from my c++ program. I am not sure that the people I will distribute to will have python installed. A: Use system call to run a python script from C++ #include<iostream> #include <cstdlib> using namespace std; int main () { int result = system("/usr/bin/python3 testGen1.py 1"); cout << result; } A: I would like to call python script files from my c++ program. This means that you want to embed Python in your C++ application. As mentioned in Embedding Python in Another Application: Embedding Python is similar to extending it, but not quite. The difference is that when you extend Python, the main program of the application is still the Python interpreter, while if you embed Python, the main program may have nothing to do with Python — instead, some parts of the application occasionally call the Python interpreter to run some Python code. I suggest that you first go through Embedding Python in Another Application. Then refer the following examples * *Embedding Python in C/C++: Part I *Embedding Python in C/C++: Part II *Embedding Python in Multi-Threaded C/C++ Applications If you like Boost.Python, you may visit the following links: * *Embedding Python with Boost.Python Part 1 A: Boost has a python interface library which could help you. Boost.Python A: Interestingly, nobody has mentioned pybind11, yet. From their documentation: pybind11 is a lightweight header-only library that exposes C++ types in Python and vice versa, mainly to create Python bindings of existing C++ code. Its goals and syntax are similar to the excellent Boost.Python library by David Abrahams: to minimize boilerplate code in traditional extension modules by inferring type information using compile-time introspection. [...] Since its creation, this library has grown beyond Boost.Python in many ways, leading to dramatically simpler binding code in many common situations. 
Concretely, calling into a Python function (called embedding) is as simple as this (taken from the documentation): #include <pybind11/embed.h> // everything needed for embedding namespace py = pybind11; int main() { py::scoped_interpreter guard{}; // start the interpreter and keep it alive py::print("Hello, World!"); // use the Python API } A: Embedding the Python interpreter inside your C++ app will let your application run Python scripts. It will also make it possible for those scripts to call C++ functions in your application. If this is what you want then the Boost library mentioned previously may be what you want to make it easier to create the link. In the past I have used SWIG to generate Python interfaces to C++ code. It was not clear from your question whether you wanted the Python scripts to call your C++ program or whether you just wanted the C++ to call Python. Many of the Python functions use modules which are not built into the Python interpreter. If your Python scripts call these functions then you will either need to have your users install Python or include the python runtime files with your application. It will depend on what modules you import in your Python scripts. A: Boost is probably the best choice, however if you're wanting something that's more standalone, and if this is for use with Windows (which seems feasible given that they are the people least likely to have Python installed), then you can use py2exe to create a DLL with entry points suitable for COM objects. You can then interface with the library via COM. (Obviously this is not at all useful as a cross-platform solution). A: Using Inter Process Communication (IPC) over a socket can be a possible solution. Use a local network socket to listen/transfer commands between both.
{ "language": "en", "url": "https://stackoverflow.com/questions/49137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "65" }
Q: Storing MD5 Hash in SQL Server In SQL Server would a varbinary(16) be the most efficient way of storing an MD5 hash? Won't be doing anything with it except returning it in a linq query. A: Null values change things: A null varbinary(16) is 2 bytes. A null binary(16) is 16 bytes. 16 bytes stored in varbinary(16) takes 18 bytes. 16 bytes in binary(16) takes 16 bytes. https://stackoverflow.com/a/3731195 A: Based on the documentation on MSDN and my experience, binary is better, since the md5 hash does not vary in size. The size for a binary data type is n bytes, so the size of the data. The size of a varbinary data type is n bytes + 2 bytes on top of the size of the data.
{ "language": "en", "url": "https://stackoverflow.com/questions/49138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How can I make an EXE file from a Python program? I've used several modules to make EXEs for Python, but I'm not sure if I'm doing it right. How should I go about this, and why? Please base your answers on personal experience, and provide references where necessary. A: Auto PY to EXE - A .py to .exe converter using a simple graphical interface built using Eel and PyInstaller in Python. py2exe is probably what you want, but it only works on Windows. PyInstaller works on Windows and Linux. Py2app works on the Mac. A: Also known as Frozen Binaries, but not the same as the output of a true compiler - they run byte code through a virtual machine (PVM). They run the same as a compiled program, just larger, because the program is bundled along with the PVM. Py2exe can freeze standalone programs that use the tkinter, PMW, wxPython, and PyGTK GUI libraries; programs that use the pygame game programming toolkit; win32com client programs; and more. The Stackless Python system is a standard CPython implementation variant that does not save state on the C language call stack. This makes Python easier to port to small stack architectures, provides efficient multiprocessing options, and fosters novel programming structures such as coroutines. Other systems of study that are working on future development: Pyrex is working on the Cython system, the Parrot project, and PyPy is working on replacing the PVM altogether, and of course the founder of Python is working with Google to get Python to run 5 times faster than CPython with the Unladen Swallow project. In short, py2exe is the easiest and Cython is more efficient for now until these projects improve the Python Virtual Machine (PVM) for standalone files. A: py2exe: py2exe is a Python Distutils extension which converts Python scripts into executable Windows programs, able to run without requiring a Python installation.
A: Not on the freehackers list is gui2exe, which can be used to build standalone Windows executables, Linux applications and Mac OS application bundles and plugins starting from Python scripts. A: Use cx_Freeze to make an EXE of your Python program. A: See a short list of python packaging tools on FreeHackers.org. A: I found this presentation to be very helpful: How I Distribute Python applications on Windows - py2exe & InnoSetup. From the site: There are many deployment options for Python code. I'll share what has worked well for me on Windows, packaging command line tools and services using py2exe and InnoSetup. I'll demonstrate a simple build script which creates windows binaries and an InnoSetup installer in one step. In addition, I'll go over common errors which come up when using py2exe and hints on troubleshooting them. This is a short talk, so there will be a follow-up Open Space session to share experience and help each other solve distribution problems.
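For reference, the py2exe workflow mentioned in several answers above is driven by a small setup.py. A minimal sketch might look like the following (the entry-point name hello.py is a placeholder for your own script, and console= would become windows= for a GUI app):

```python
# setup.py -- minimal py2exe configuration (sketch; "hello.py" is a
# placeholder for your own entry-point script)
from distutils.core import setup
import py2exe  # importing this registers the "py2exe" command with distutils

setup(console=["hello.py"])
```

Running python setup.py py2exe then builds the executable, plus its supporting DLLs and library archive, into the dist directory. This is a build-configuration fragment rather than a runnable program, so treat the exact option names as things to check against the py2exe docs for your version.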
{ "language": "en", "url": "https://stackoverflow.com/questions/49146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "117" }
Q: How do I create a MessageBox in C#? I have just installed C# for the first time, and at first glance it appears to be very similar to VB6. I decided to start off by trying to make a 'Hello, World!' UI Edition. I started in the Form Designer and made a button named "Click Me!", proceeded to double-click it and typed in MessageBox("Hello, World!"); I received the following error: MessageBox is a 'type' but used as a 'variable' Fair enough, it seems in C# MessageBox is an Object. I tried the following MessageBox a = new MessageBox("Hello, World!"); I received the following error: MessageBox does not contain a constructor that takes '1' arguments Now I am stumped. Please help. A: It is a static function on the MessageBox class; the simple way to do this is using MessageBox.Show("my message"); in the System.Windows.Forms namespace. You can find more on the MSDN page for this here. Among other things you can control the message box text, title, default button, and icons. Since you didn't specify, if you are trying to do this in a webpage you should look at triggering the javascript alert("my message"); or confirm("my question"); functions. A: Try below code: MessageBox.Show("Test Information Message", "Caption", MessageBoxButtons.OK, MessageBoxIcon.Information); A: MessageBox.Show also returns a DialogResult, so if you put some buttons on there, you can have it return what the user clicked. Most of the time I write something like if (MessageBox.Show("Do you want to continue?", "Question", MessageBoxButtons.YesNo) == DialogResult.Yes) { //some interesting behaviour here } which I guess is a bit unwieldy but it gets the job done. See https://learn.microsoft.com/en-us/dotnet/api/system.windows.forms.dialogresult for additional enum options you can use here. A: Code summary: using System.Windows.Forms; ...
MessageBox.Show( "hello world" ); Also (as per this other stack post): In Visual Studio expand the project in Solution Tree, right click on References, Add Reference, Select System.Windows.Forms on Framework tab. This will get the MessageBox working in conjunction with the using System.Windows.Forms reference from above. A: Also you can use a MessageBox with OKCancel options, but it requires more code. The if block is for OK, the else block is for Cancel. Here is the code: if (MessageBox.Show("Are you sure you want to do this?", "Question", MessageBoxButtons.OKCancel, MessageBoxIcon.Question) == DialogResult.OK) { MessageBox.Show("You pressed OK!"); } else { MessageBox.Show("You pressed Cancel!"); } You can also use a MessageBox with YesNo options: if (MessageBox.Show("Are you sure you want to do this?", "Question", MessageBoxButtons.YesNo, MessageBoxIcon.Question) == DialogResult.Yes) { MessageBox.Show("You pressed Yes!"); } else { MessageBox.Show("You pressed No!"); } A: These are some of the things you can put into a message box. Enjoy MessageBox.Show("Enter the text for the message box", "Enter the name of the message box", (Enter the button names e.g. MessageBoxButtons.YesNo), (Enter the icon e.g. MessageBoxIcon.Question), (Enter the default button e.g.
MessageBoxDefaultButton.Button1) More information can be found here A: I got the same error 'System.Windows.Forms.MessageBox' is a 'type' but is used like a 'variable', even if using: MessageBox.Show("Hello, World!"); I guess my initial attempts with invalid syntax caused some kind of bug and I ended up fixing it by adding a space between "MessageBox.Show" and the brackets (): MessageBox.Show ("Hello, World!"); Now using the original syntax without the extra space works again: MessageBox.Show("Hello, World!");
{ "language": "en", "url": "https://stackoverflow.com/questions/49147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Importing JavaScript in JSP tags I have a .tag file that requires a JavaScript library (as in a .js file). Currently I am just remembering to import the .js file in every JSP that uses the tag but this is a bit cumbersome and prone to error. Is there a way to do the importing of the .js inside the JSP tag? (for caching reasons I would want the .js to be a script import) A: There is no reason you cannot have a script tag in the body, even though it is preferable for it to be in the head. Just emit the script tag before you emit your tag's markup. The only thing to consider is that you do not want to include the script more than once if you use the jsp tag on the page more than once. The way to solve that is to remember that you have already included the script, by adding an attribute to the request object. A: Short of just including the js in every page automatically, I do not think so. It really would not be something that tags are designed to do. Without knowing what your tag is actually doing (presumably it's outputting something in the body section) then there is no way that it will be able to get at the head to put the declaration there. One solution that might (in my head) work would be to have an include that copies verbatim what you have in the head after the place in the head to import tags right up to where you want to use the tag. This is really not something that you would want to do. You would have to have multiple 'header' files to import depending on the content and where you want to use the tag. Maintenance nightmare. Just a bad idea all round. Any solution I can think of would require more work than manually just adding in the declaration. I think you are out of luck and stuck with manually putting it in. edit: Just import it in every page. It will be cached and then this problem goes away.
{ "language": "en", "url": "https://stackoverflow.com/questions/49156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: GreaseMonkey script to auto login using HTTP authentication I've got quite a few GreaseMonkey scripts that I wrote at my work which automatically log me into the internal sites we have here. I've managed to write a script for nearly each one of these sites except for our time sheet application, which uses HTTP authentication. Is there a way I can use GreaseMonkey to log me into this site automatically? Edit: I am aware of the store password functionality in browsers, but my scripts go a step further by checking if I'm logged into the site when it loads (by traversing HTML) and then submitting a post to the login page. This removes the step of having to load up the site, entering the login page, entering my credentials, then hitting submit. A: It is possible to log in using HTTP authentication by setting the "Authorization" HTTP header, with the value of this header set to the string "Basic username:password", but with the "username:password" portion of the string Base 64 encoded. http://frontier.userland.com/stories/storyReader$2159 A bit of researching found that GreaseMonkey has a function built into it where you can send GET / POST requests to the server called GM_xmlhttpRequest http://diveintogreasemonkey.org/api/gm_xmlhttprequest.html So putting it all together (and also using this JavaScript code to convert strings into Base64: http://www.webtoolkit.info/javascript-base64.html) I get the following: var loggedInText = document.getElementById('metanav').firstChild.firstChild.innerHTML; if (loggedInText != "logged in as jklp") { var username = 'jklp'; var password = 'jklpPass'; var base64string = Base64.encode(username + ":" + password); GM_xmlhttpRequest({ method: 'GET', url: 'http://foo.com/trac/login', headers: { 'User-agent': 'Mozilla/4.0 (compatible) Greasemonkey/0.3', 'Accept': 'application/atom+xml,application/xml,text/xml', 'Authorization':'Basic ' + base64string, } }); } So when I now visit the site, it traverses the DOM and if I'm not logged in, it
automagically logs me in. A: HTTP authentication information is sent on every request, not just to log in. The browser will cache the login information for the session after you log in the first time. So, you don't really save anything by trying to check if you are already logged in. You could also forget about greasemonkey altogether and just put your login info in the URL like so: http://username:password@host/ Of course, saving this in a bookmark may be a security risk, but no more so than saving your password in the browser. A: Why don't you use Firefox (I assume you're using Firefox) to remember your credentials using the Password Manager? I found this link: HTTP Authentication with HTML Forms. Looks like you can use javascript to do HTTP authentication. I don't think you can have Greasemonkey interrupt when you are first navigating to a URL though. You might have to set up some sort of launching point that you can use to have greasemonkey automatically redirect + login. For example, you can have a local page that takes the destination URL in the query string, and have Greasemonkey automatically do the authentication + redirect. The only problem is that you'll have to wrap the site bookmarks with your launching page for the bookmarks you use as entry points. A: "http://username:password@host/" doesn't work on IE, FireFox works ok.
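The crux of the accepted approach above is the value of the Authorization header: the literal string "Basic " followed by the Base64 encoding of username:password. It can be handy to sanity-check the encoded value outside the browser; here is a small sketch using Python's standard library (the jklp credentials are the placeholders from the script above):

```python
import base64


def basic_auth_header(username, password):
    """Build the value of an HTTP Basic 'Authorization' header."""
    token = base64.b64encode(f"{username}:{password}".encode("ascii")).decode("ascii")
    return "Basic " + token


# Placeholder credentials matching the example script above.
print(basic_auth_header("jklp", "jklpPass"))  # → Basic amtscDpqa2xwUGFzcw==
```

Comparing this output against what Base64.encode produces in the userscript is a quick way to confirm the header is being built correctly.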
{ "language": "en", "url": "https://stackoverflow.com/questions/49158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I turn a python program into an .egg file? How do I turn a python program into an .egg file? A: Also, if you need to get an .egg package off a single .py file app, check this link: EasyInstall - Packaging others projects as eggs. A: Setuptools is the software that creates .egg files. It's an extension of the distutils package in the standard library. The process involves creating a setup.py file, then python setup.py bdist_egg creates an .egg package. A: Python has its own package for creating distributions that is called distutils. However instead of using Python’s distutils’ setup function, we’re using setuptools’ setup. We’re also using setuptools’ find_packages function which will automatically look for any packages in the current directory and add them to the egg. To create said egg, you’ll need to run the following from the command line: c:\Python34\python.exe setup.py bdist_egg
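The bdist_egg recipe above assumes a setup.py alongside your code; a minimal sketch using setuptools might look like this (the name and version are placeholders):

```python
# setup.py -- minimal sketch for building an .egg with setuptools
# ("mypackage" and "0.1" are placeholder values)
from setuptools import setup, find_packages

setup(
    name="mypackage",
    version="0.1",
    packages=find_packages(),  # auto-discover packages in the current directory
)
```

python setup.py bdist_egg then writes the egg into the dist directory. This is a packaging-configuration fragment, so the exact keyword set to use should be checked against the setuptools documentation for your version.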
{ "language": "en", "url": "https://stackoverflow.com/questions/49164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Connecting to registry remotely, and getting exceptions I've inherited a hoary old piece of code (by hoary, I mean warty with lots of undocumented bug fixes than WTF-y) and there's one part that's giving me a bit of trouble. Here's how it connects to the remote registry to get the add/remove programs key: try { remoteKey = RegistryKey.OpenRemoteBaseKey( RegistryHive.LocalMachine, addr.Value).OpenSubKey( "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall"); return 1; } catch (IOException e) { IOException myexception = e; //Console.WriteLine("{0}: {1}: {2}", // e.GetType().Name, e.Message, addr.Value); return 2; } catch (UnauthorizedAccessException e) { UnauthorizedAccessException myexception = e; //Console.WriteLine("{0}: {1}: {2}", // e.GetType().Name, e.Message, addr.Value); return 3; } catch (System.Security.SecurityException e) { System.Security.SecurityException myexception = e; //Console.WriteLine("{0}: {1}: {2}", // e.GetType().Name, e.Message, addr.Value); return 4; } Now, I have two problems: * *I know why the IOException - if it's a non-Windows machine it'll throw that. The difference between UnauthorizedAccessException and SecurityException I'm not so clear on. Anyone got any ideas? *This entire bit of code was designed before anyone had thought you might not use your local logon for everything. I can't work out how you do authentication for remotely connecting to the registry, anyway, and this code looks like it's only used in one case, when it can't get this information from WMI. Any help with either would be great. A: You probably have to use impersonation to change the credentials of the thread that calls the remote registry methods. See here (linky) for some information on MSDN. Basically, your thread has a security context that is used to make managed and unmanaged calls. A: According to MSDN, UnauthorizedAccessException is not thrown by OpenSubKey. So I think it's not needed. 
A: John's pointer to MSDN answered what UnauthorizedAccessException is for - it only appears when you try to access a key remotely, using OpenRemoteBaseKey. We're a little wary about changing the security context on the computer - I've found a reference here about using WMI (which we're already using for the vast majority of the heavy lifting) to access the registry, so I might try that instead.
{ "language": "en", "url": "https://stackoverflow.com/questions/49166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I embed Perl inside a C++ application? I would like to call Perl script files from my c++ program. I am not sure that the people I will distribute to will have Perl installed. Basically I'm looking for a .lib file that I can use that has an Apache-like distribution license. A: I'm currently writing a library for embedding Perl in C++, but it's not finished yet. In any case I would recommend against using the EP library. Not only has it not been maintained for years, but it also has some severe architectural deficiencies and is rather limited in its scope. If you are interested in alpha software you can contact me about it, otherwise I'd advise you to use the raw API. A: You can embed perl into your app. * *Perl Embedding by John Quillan *C++ wrapper around Perl C API A: To call perl from C++ you need to use the API, as someone else mentioned; the basic tutorial is available in the perlembed documentation. Note that you will most probably need more than just a ".lib", because you'll need a lot of tiny modules which are located in the "lib" directory of the perl distrib: strict.pm, etc. That's not a big deal though, I guess; the apache example you mentioned has the same constraint of delivering some default configuration files etc. However, to distribute Perl, on Windows (I guess you're on Windows since you mentioned a .lib file), the ActiveState distribution which everyone uses might cause some licensing headaches. It's not really clear to me, but it seems like you cannot redistribute ActivePerl in a commercial product. Note that, if you want to embed Perl in a C++ program, you might have to recompile it anyway, to have the same compilation flags on Perl and on your program.
{ "language": "en", "url": "https://stackoverflow.com/questions/49168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: MFC: MessageBox during a Drag-Drop I need to display an error message on rejecting a drop in my application. I tried this in the OnDrop() but then the source application hangs until my message box is dismissed. How can I do that? A: You can always call PostMessage with a private message in the WM_APP range and in the message handler show the error. That way you show the error after the drag and drop operation is really over and there is no danger of messing up anything. A: You're right. But all the data I need to report in the message box is in the OnDrop. A: If you need data you can copy it in the OnDrop, store it in some temporary location, then in the WM_APP range message pass the index to the data in temporary location. The handler for the WM_APP message can clean up the temporary data after showing the message box.
{ "language": "en", "url": "https://stackoverflow.com/questions/49183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: ASP.NET MVC Preview 4 - Stop Url.RouteUrl() etc. using existing parameters I have an action like this: public class News : System.Web.Mvc.Controller { public ActionResult Archive(int year) { /* ... */ } } With a route like this: routes.MapRoute( "News-Archive", "News.mvc/Archive/{year}", new { controller = "News", action = "Archive" } ); The URL that I am on is: News.mvc/Archive/2008 I have a form on this page like this: <form> <select name="year"> <option value="2007">2007</option> </select> </form> Submitting the form should go to News.mvc/Archive/2007 if '2007' is selected in the form. This requires the form 'action' attribute to be "News.mvc/Archive". However, if I declare a form like this: <form method="get" action="<%=Url.RouteUrl("News-Archive")%>"> it renders as: <form method="get" action="/News.mvc/Archive/2008"> Can someone please let me know what I'm missing? A: You have a couple problems, I think. First, your route doesn't have a default value for "year", so the URL "/News.mvc/Archive" is actually not valid for routing purposes. Second, you expect form values to show up as route parameters, but that's not how HTML works. If you use a plain form with a select and a submit, your URLs will end up having "?year=2007" on the end of them. This is just how GET-method forms are designed to work in HTML. So you need to come to some conclusion about what's important. * *If you want the user to be able to select something from the dropdown and that changes the submission URL, then you're going to have to use Javascript to achieve this (by intercepting the form submit and formulating the correct URL). *If you're okay with /News.mvc/Archive?year=2007 as your URL, then you should remove the {year} designator from the route entirely. You can still leave the "int year" parameter on your action, since form values will also populate action method parameters. A: I think I've worked out why - the route includes {year} so the generated routes always will too...
If anyone can confirm this? A: Solution Okay, here is the solution (thanks to Brad for leading me there). 1) Require a default value in the route: routes.MapRoute( "News-Archive", "News.mvc/Archive/{year}", new { controller = "News", action = "Archive", year = 0 } ); 2) Add a redirect to parse GET parameters as though they are URL segments. public ActionResult Archive(int year) { if (!String.IsNullOrEmpty(Request["year"])) { return RedirectToAction("Archive", new { year = Request["year"] }); } } 3) Make sure you have your redirect code for Request params before any code for redirecting "default" year entries. i.e. public ActionResult Archive(int year) { if (!String.IsNullOrEmpty(Request["year"])) { return RedirectToAction("Archive", new { year = Request["year"] }); } if (year == 0) { /* ... */ } /* ... */ } 4) Explicitly specify the default value for year in the Url.RouteUrl() call: Url.RouteUrl("News-Archive", new { year = 0 })
{ "language": "en", "url": "https://stackoverflow.com/questions/49194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What language should I learn as a bridge to C (and derivatives) The first language I learnt was PHP, but I have more recently picked up Python. As these are all 'high-level' languages, I have found them a bit difficult to pick up. I also tried to learn Objective-C but I gave up. So, what language should I learn to bridge between Python to C A: The best place to start learning C is the book "The C Programming Language" by Kernighan and Ritchie. You will recognise a lot of things from PHP, and you will be surprised how much PHP (and Perl, Python etc) do for you. Oh and you also will need a C compiler, but i guess you knew that. A: I generally agree with most of the others - There's not really a good stepping stone language. It is, however, useful to understand what is difficult about learning C, which might help you understand what's making it difficult for you. I'd say the things that would prove difficult in C for someone coming from PHP would be : * *Pointers and memory management This is pretty much the reason you're learning C I imagine, so there's not really any getting around it. Learning lower level assembly type languages might make this easier, but C is probably a bridge to do that, not the other way around. *Lack of built in data structures PHP and co all have native String types, and useful things like hash tables built in, which is not the case in C. In C, a String is just an array of characters, which means you'll need to do a lot more work, or look seriously at libraries which add the features you're used to. *Lack of built in libraries Languages like PHP nowadays almost always come with stacks of libraries for things like database connections, image manipulation and stacks of other things. In C, this is not the case other than a very thin standard library which revolves mostly around file reading, writing and basic string manipulation. There are almost always good choices available to fill these needs, but you need to include them yourself. 
*Suitability for high level tasks If you try to implement the same type of application in C as you might in PHP, you'll find it very slow going. Generating a web page, for example, isn't really something plain C is suited for, so if you're trying to do that, you'll find it very slow going. *Preprocessor and compilation Most languages these days don't have a preprocessor, and if you're coming from PHP, the compilation cycle will seem painful. Both of these are performance trade offs in a way - Scripting languages make the trade off in terms of developer efficiency, whereas C prefers performance. I'm sure there are more that aren't springing to mind for me right now. The moral of the story is that trying to understand what you're finding difficult in C may help you proceed. If you're trying to generate web pages with it, try doing something lower level. If you're missing hash tables, try writing your own, or find a library. If you're struggling with pointers, stick with it :) A: It's not clear why you need a bridge language. Why don't you start working with C directly? C is a very simple language itself. I think the hardest part for a C learner is pointers and everything else related to memory management. Also, C is oriented toward structured programming, so you will need to learn how to implement data structures and algorithms without OOP goodness. Actually, your question is pretty hard; usually people go from low level langs to high level, and I can understand the frustration of those who go in the other direction. A: Learning any language takes time, I always ensure I have a measurable goal; I set myself an objective, then start learning the language to achieve this objective, as opposed to trying to learn every nook and cranny of the language and syntax. C is not easy, pointers can be hard to comprehend if you're not coming from assembler roots. I first learned C++, then retrofit C to my repertoire but I started with x86 and 68000 assembler.
A: Python is about as close to C as you're going to get. It is in fact a very thin wrapper around C in a lot of places. However, C does require that you know a little more about how the computer works on a low level. Thus, you may benefit from trying an assembly language. LC-3 is a simple assembly language with a simulated machine. Alternatively, you could try playing with an interactive C interpreter like CINT. Finally, toughing it out and reading K&R's book is usually the best approach. A: Forget Java - it is not going to bring you anywhere closer to C (you have already proved that you don't have a problem learning new syntax). Either read K&R or go one lower: Learn about the machine itself. The only tricky part in C is pointers and memory management (which is closely related to pointers, but also has a bit to do with how functions are called). Learning a (simple, maybe even "fake" assembly) language should help you out here. Then, start reading up on the standard library provided by C. It will be your daily bread and butter. Oh: another tip! If you really do want to bridge, try FORTH. It helped me get into pointers. Also, using the win32 api from Visual Basic 6.0 can teach you some stuff about pointers ;) A: C is a bridge unto itself. K&R is the only programming language book you can read in one sitting and almost never pick it up again ... A: My suggestion is to get a good C-book that is relevant to what you want to do. I agree that K & R is considered to be "The book" on C, but I found "UNIX Systems Programming" by Kay A. Robbins and Steven Robbins to be more practical and hands on. The book is full of clean and short code snippets you can type in, compile and try in just a few minutes each. There is a preview at http://books.google.com/books?id=tdsZHyH9bQEC&printsec=frontcover (Hyperlinking it didn't work.)
A: I'm feeling your pain. I also learned PHP first and I'm trying to learn C++; it's not easy, and I am really struggling. It's been 2 years since I started on C++ and still the extent of what I can do is cout, cin, and math. If anyone reads this and wonders where to start, START LOWER. A: Java might actually be a good option here, believe it or not. It is strongly based on C/C++, so if you can get the syntax and the strong typing, picking up C might be easier. The benefit is you can learn the lower level syntax without having to learn pointers (since memory is managed for you just like in Python and PHP). You will, however, learn a similar concept... references (or objects in general). Also, it is strongly Object Oriented, so it may be difficult to pick up on that if you haven't dealt with OOP yet.... you might be better off just digging in with C like others suggested, but it is an option. A: I think C++ is a good "bridge" to C. I learned C++ first at University, and since it's based on C you'll learn a lot of the same concepts - perhaps most notably pointers - but also Object Oriented Design. OO can be applied to all kinds of modern languages, so it's worth learning. After learning C++, I found it wasn't too hard to pick up the differences between C++ and C as required (for example, when working on devices that didn't support C++). A: Try to learn a language which you are comfortable with, try different approaches, and learn the basics. A: Languages are easy to learn (especially one like C)... the hard part is learning the libraries and/or coding style of the language. For instance, I know C++ fairly well, but most C/C++ code I see confuses me because the naming conventions are so different from what I work with on a daily basis. Anyway, I guess what I'm trying to say is don't worry too much about the syntax, focus on said language's library. This isn't specific to C, you can say the same about c#, vb.net, java and just about every other language out there. A: Pascal!
Close enough syntax, still requires you to do some memory management, but not as rough for beginners.
{ "language": "en", "url": "https://stackoverflow.com/questions/49195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Storing third-party libraries in source control Should libraries that the application relies on be stored in source control? One part of me says it should and another part says no. It feels wrong to add a 20mb library that dwarfs the entire app just because you rely on a couple of functions from it (albeit rather heavily). Should you just store the jar/dll or maybe even the distributed zip/tar of the project? What do other people do? A: I generally store them in the repository, but I do sympathise with your desire to keep the size down. If you don't store them in the repository, they absolutely do need to be archived and versioned somehow, and your build system needs to know how to get them. Lots of people in the Java world seem to use Maven for fetching dependencies automatically, but I've not used it, so I can't really recommend for or against it. One good option might be to keep a separate repository of third party systems. If you're on Subversion, you could then use subversion's externals support to automatically check out the libraries from the other repository. Otherwise, I'd suggest keeping an internal Anonymous FTP (or similar) server which your build system can automatically fetch requirements from. Obviously you'll want to make sure you keep all the old versions of libraries, and have everything there backed up along with your repository. A: store everything you will need to build the project 10 years from now. I store the entire zip distribution of any library, just in case. Edit for 2017: This answer did not age well :-). If you are still using something old like ant or make, the above still applies. If you use something more modern like maven or Gradle (or Nuget on .net for example), with dependency management, you should be running a dependency management server, in addition to your version control server. As long as you have good backups of both, and your dependency management server does not delete old dependencies, you should be ok.
For an example of a dependency management server, see for example Sonatype Nexus or JFrog Artifactory, among many others. A: What I have is an intranet Maven-like repository where all 3rd party libraries are stored (not only the libraries, but their respective source distribution with documentation, Javadoc and everything). The reasons are the following: * *why store files that don't change in a system specifically designed to manage files that change? *it dramatically speeds up the check-outs *each time I see "something.jar" stored under source control I ask "and which version is it?" A: I put everything except the JDK and IDE in source control. Tony's philosophy is sound. Don't forget database creation scripts and data structure update scripts. Before wikis came out, I used to even store our documentation in source control. A: My preference is to store third party libraries in a dependency repository (Artifactory with Maven for example) rather than keeping them in Subversion. Since third party libraries aren't managed or versioned like source code, it doesn't make a lot of sense to intermingle them. Remote developers also appreciate not having to download large libraries over a slow VPN link when they can get them more easily from any number of public repositories. A: As well as having third party libraries in your repository, it's worth doing it in such a way that makes it easy to track and merge in future updates to the library (for example, security fixes etc.). If you are using Subversion, using a proper vendor branch is worthwhile. If you know that it'd be a cold day in hell before you'll be modifying your third party's code then (as @Matt Sheppard said) an external makes sense and gives you the added benefit that it becomes very easy to switch up to the latest version of the library should security updates or a must-have new feature make that desirable.
Also, you can skip externals when updating your code base, saving on the long slow load process should you need to. @Stu Thompson mentions storing documentation etc. in source control. In bigger projects I've stored our entire "clients" folder in source control, including invoices / bills / meeting minutes / technical specifications etc. The whole shooting match. Although, ahem, do remember to store these in a SEPARATE repository from the one you'll be making available to: other developers; the client; your "browser source view"...cough... :) A: Don't store the libraries; they're not strictly speaking part of your project and uselessly take up room in your revision control system. Do, however, use Maven (or Ivy for Ant builds) to keep track of what versions of external libraries your project uses. You should run a mirror of the repo within your organisation (that is backed up) to ensure you always have the dependencies under your control. This ought to give you the best of both worlds; external jars outside your project, but still reliably available and centrally accessible. A: At a previous employer we stored everything necessary to build the application(s) in source control. Spinning up a new build machine was a matter of syncing with the source control and installing the necessary software. A: We store the libraries in source control because we want to be able to build a project by simply checking out the source code and running the build script. If you aren't able to get latest and build in one step then you're only going to run into problems later on. A: Never store your 3rd party binaries in source control. Source control systems are platforms that support concurrent file sharing, parallel work, merging efforts, and change history. Source control is not an FTP site for binaries. 3rd party assemblies are NOT source code; they change maybe twice per SDLC.
The desire to be able to wipe your workspace clean, pull everything down from source control and build does not mean 3rd party assemblies need to be stuck in source control. You can use build scripts to control pulling 3rd party assemblies from a distribution server. If you are worried about controlling what branch/version of your application uses a particular 3rd party component, then you can control that through build scripts as well. People have mentioned Maven for Java, and you can do something similar with MSBuild for .NET. A: Store third party libraries in source control so they are available if you check your code out to a new development environment. Any "includes" or build commands that you may have in build scripts should also reference these "local" copies. As well as ensuring that third party code or libraries that you depend on are always available to you, it should also mean that code is (almost) ready to build on a fresh PC or user account when new developers join the team. A: Store the libraries! The repository should be a snapshot of what is required to build a project at any moment in time. As the project requires different versions of external libraries you will want to update / check in the newer versions of these libraries. That way you will be able to get all the right versions to go with an old snapshot if you have to patch an older release etc. A: Personally I have a dependencies folder as part of my projects and store referenced libraries in there. I find this makes life easier as I work on a number of different projects, often with inter-depending parts that need the same version of a library, meaning it's not always feasible to update to the latest version of a given library. Having all dependencies used at compile time for each project means that a few years down the line when things have moved on, I can still build any part of a project without worrying about breaking other parts.
Upgrading to a new version of a library is simply a case of replacing the file and rebuilding related components, not too difficult to manage if need be. Having said that, I find most of the libraries I reference are relatively small, weighing in at around a few hundred KB, rarely bigger, which makes it less of an issue for me to just stick them in source control. A: Use git submodules, and either reference the 3rd party library's main git repository, or (if it doesn't have one) create a new git repository for each required library. There's no reason why you're limited to just one git repository, and I don't recommend you use somebody else's project as merely a directory in your own. A: Store everything you'll need to build the project, so you can check it out and build without doing anything. (and, as someone who has experienced the pain - please keep a copy of everything needed to get the controls installed and working on a dev platform. I once got a project that could build - but without an installation file and reg keys, you couldn't make any alterations to the third-party control layout. That was a fun rewrite) A: You have to store everything you need in order to build the project. Furthermore, different versions of your code may have different dependencies on 3rd parties. You'll want to branch your code into maintenance versions together with their 3rd party dependencies... A: Personally, what I have done and have so far liked the results of is storing libraries in a separate repository and then linking to each library that I need in my other repositories through the use of the Subversion svn:externals feature. This works nicely because I can keep versioned copies of most of our libraries (mainly managed .NET assemblies) in source control without them bulking up the size of our main source code repository at all. Having the assemblies stored in the repository in this fashion makes it so that the build server doesn't have to have them installed to make a build.
I will say that getting a build to succeed in the absence of Visual Studio being installed was quite a chore, but now that we've got it working we are happy with it. Note that we don't currently use many commercial third-party control suites or that sort of thing, so we haven't run into licensing issues where it may be required to actually install an SDK on the build server, but I can see where that could easily become a problem. Unfortunately I don't have a solution for that and will plan on addressing it when I first run into it.
Q: Is there a FLASH editor that supports autocomplete & step-into debugging? I'm considering using Flash but I'm pretty addicted to autocomplete and step-at-a-time debugging. A: By using Eclipse with the ActionScript plugin you get full code hinting in the same format that you do with intellisense. Or you can use FlashDevelop, which has intellisense and can debug (trace) your code. A: If you mean ActionScript, I once heard that PrimalScript will do Intellisense. Never tested it though. As for debugging, MAYBE, PrimalScope will have that too. I'd recommend you try before you buy, though. (They both have trials.) A: FDT is a plugin for Eclipse which many say (including myself) is the best editor when it comes to writing ActionScript. FDT supports AS2 & AS3, including the new APIs from Flash Player 10. I haven't used Visual Studio myself but I'm guessing it's pretty much the same regarding intellisense (all versions of FDT). FDT Enterprise also supports debugging, including breakpoints and stepping through your code (= not just trace). http://fdt.powerflasher.com/ A: Both FlexBuilder and Flash CS3 have autocompletion and debuggers that can step through code one line at a time. Both can be used to develop in pure ActionScript (i.e. FlexBuilder isn't just for writing Flex applications). A: It would be cool to have autocompletion in Flash CS3 like the one that comes with Eclipse for Java (with hints, hierarchies, implementation, auto-imports for classes etc.). I don't know why everyone wants to separate the code from the graphical editor, when that's the best part of Flash. Anyway, I always recommend staying with Eclipse since it is very robust and it's available for a lot of languages, so in case you need to learn another language you already know the IDE and how to use autocompletion. A: FlashDevelop is a lightweight Flash editor but it doesn't have a debugger. FlexBuilder, though, is much more popular and full-featured.
A: Adobe has said that the next version of Flash (CS5) will have a lot of coding improvements, including code hinting, auto-complete, and auto-import for custom classes. But whether it will pull even with Eclipse and FlashDevelop is still an open question.
Q: Cycle count measurement I have an MS Visual Studio 2005 application solution. All the code is in C. I want to measure the number of cycles taken for execution by particular functions. Is there any Win32 API which I can use to get the cycle count? I have used gettimeofday() to get time in microseconds, but I want to know the cycles consumed. A: Both Intel and AMD offer Windows libraries and tools to access the performance counters on their CPUs. These give access not only to cycle counts, but also to cache line hits and misses and TLB flush counts. The Intel tools are marketed under the name VTune, while AMD calls theirs CodeAnalyst.
Q: How can I use a key blob generated from Win32 CryptoAPI in my .NET application? I have an existing application that is written in C++ for Windows. This application uses the Win32 CryptoAPI to generate a TripleDES session key for encrypting/decrypting data. We're using the exponent of one trick to export the session key out as a blob, which allows the blob to be stored somewhere in a decrypted format. The question is how can we use this in our .NET application (C#). The framework encapsulates/wraps much of what the CryptoAPI is doing. Part of the problem is the CryptoAPI states that the TripleDES algorithm for the Microsoft Enhanced Cryptographic Provider is 168 bits (3 keys of 56 bits). However, the .NET framework states their keys are 192 bits (3 keys of 64 bits). Apparently, the 3 extra bytes in the .NET framework are for parity? Anyway, we need to read the key portion out of the blob and somehow be able to use that in our .NET application. Currently we are not getting the expected results when attempting to use the key in .NET. The decryption is failing miserably. Any help would be greatly appreciated. Update: I've been working on ways to resolve this and have come up with a solution that I will post in time. However, I would still appreciate any feedback from others. A: Intro I'm finally getting around to posting the solution. I hope it provides some help to others out there that might be doing similar type things. There really isn't much reference to doing this elsewhere. Prerequisites In order for a lot of this to make sense it's necessary to read the exponent of one trick, which allows you to export a session key out to a blob (a well-known byte structure). One can then do what they wish with this byte stream, but it holds the all-important key. MSDN Documentation is Confusing In this particular example, I'm using the Microsoft Enhanced Cryptographic Provider, with the Triple DES (CALG_3DES) algorithm.
The first thing that threw me for a loop was the fact that the key length is listed at 168 bits, with a block length of 64 bits. How can the key length be 168? Three keys of 56 bits? What happens to the other byte? So with that information I started to read elsewhere how the last byte is really parity and for whatever reason CryptoAPI strips that off. Is that really the case? Seems kind of crazy that they would do that, but OK. Consumption of Key in .NET Using the TripleDESCryptoServiceProvider, I noticed the remarks in the docs indicated that: This algorithm supports key lengths from 128 bits to 192 bits in increments of 64 bits. So if CryptoAPI has key lengths of 168, how will I get that into .NET, which only supports multiples of 64? Therefore, the .NET side of the API takes parity into account, where the CryptoAPI does not. As one could imagine... confused was I. So with all of this, I'm trying to figure out how to reconstruct the key on the .NET side with the proper parity information. Doable, but not very fun... let's just leave it at that. Once I got all of this in place, everything ended up failing with a CAPITAL F. Still with me? Good, because I just fell off my horse again. Light Bulbs and Fireworks Lo and behold, as I'm scraping MSDN for every last bit of information I find a conflicting piece in the Win32 CryptExportKey function. Lo and behold I find this piece of invaluable information: For any of the DES key permutations that use a PLAINTEXTKEYBLOB, only the full key size, including parity bit, may be exported. The following key sizes are supported: CALG_DES: 64 bits, CALG_3DES_112: 128 bits, CALG_3DES: 192 bits. So it does export a key that is a multiple of 64 bits! Woohoo! Now to fix the code on the .NET side. .NET Import Code Tweak The byte order is important to keep in mind when importing a byte stream that contains a key that was exported as a blob from the CryptoAPI.
The two APIs do not use the same byte order, therefore, as @nic-strong indicates, reversing the byte array is essential before actually trying to use the key. Other than that, things work as expected. Simply solved: Array.Reverse( keyByteArray ); Conclusion I hope this helps somebody out there. I spent way too much time trying to track this down. Leave any comments if you have further questions and I can attempt to help fill in any details. Happy Crypto! A: Ok, forget the last answer, I can't read :) You are working with 3DES keys, not RSA keys. I worked on a bunch of code to share keys between .NET, CryptoAPI and OpenSSL. Found a lot of good example code here for doing the key conversions: http://www.jensign.com/JavaScience/cryptoutils/index.html There is some 3DES stuff in some of those examples, but it was related to OpenSSL -> .NET, IIRC. I also just looked back over the RSA key code and one thing I notice I am doing is using Array.Reverse() on all the key parts of the RSA key (D, DP, DQ, InverseQ, Modulus, P, Q), I guess to convert endianness. I remember that being non-obvious when first tackling the problem. Hope some of that helps. Good luck.
Q: Populating a list of integers in .NET I need a list of integers from 1 to x where x is set by the user. I could build it with a for loop, e.g. assuming x is an integer set previously: List<int> iList = new List<int>(); for (int i = 1; i <= x; i++) { iList.Add(i); } This seems dumb, surely there's a more elegant way to do this, something like the PHP range method A: LINQ to the rescue: // Adding values to an existing list var list = new List<int>(); list.AddRange(Enumerable.Range(1, x)); // Creating a new list var list = Enumerable.Range(1, x).ToList(); See Generation Operators on LINQ 101 A: I'm one of many who have blogged about a Ruby-esque To extension method that you can write if you're using C# 3.0: public static class IntegerExtensions { public static IEnumerable<int> To(this int first, int last) { for (int i = first; i <= last; i++) { yield return i; } } } Then you can create your list of integers like this List<int> list = first.To(last).ToList(); or List<int> list = 1.To(x).ToList(); A: If you're using .NET 3.5, Enumerable.Range is what you need. Generates a sequence of integral numbers within a specified range. A: Here is a short method that returns a List of integers. public static List<int> MakeSequence(int startingValue, int sequenceLength) { return Enumerable.Range(startingValue, sequenceLength).ToList<int>(); }
Q: How can I map a list of strings to my entity using NHibernate? I've got two tables in my database: Articles and Tags The Tags table consists of ArticleID (foreign key) and a Tag (varchar). Now I need to map an article's tags into a read-only collection on the Article entity, either using IList Tags or ReadOnlyCollection Tags. I've consulted the NHibernate reference material, but I can't seem to figure out when to use Set, Bag and the other NHibernate collections. I've seen examples using the ISet collection, but I really don't like to tie my entity classes to an NHibernate type. How can I do the mapping in NHibernate? edit: I ended up using a <bag> instead, as it doesn't require an index: <bag name="Tags" table="Tag" access="nosetter.camelcase" lazy="false"> <key column="ArticleId" /> <element column="Tag" type="System.String" /> </bag> A: The type of collection to use in your mapping depends on how you want to represent the collection in code. The settings map like so: * *The <list> maps directly to an IList. *The <map> maps directly to an IDictionary. *The <bag> maps to an IList. A <bag> does not completely comply with the IList interface because the Add() method is not guaranteed to return the correct index. An object can be added to a <bag> without initializing the IList. Make sure to either hide the IList from the consumers of your API or make it well documented. *The <set> maps to an Iesi.Collections.ISet. That interface is part of the Iesi.Collections assembly distributed with NHibernate. So if you want an IList to be returned, then you would use the <list> mapping. In your case, I'd probably map using the <list> mapping.
Q: 128 bit data encryption using Java I need to store some sensitive data by encrypting it with at least a 128-bit key. I investigated the javax.crypto package and found that there are certain Cipher names, like PBEWithMD5AndDES or PBEWithSHA1AndDESede, which provide encryption up to 56-bit and 80-bit (http://en.wikipedia.org/wiki/DESede). I referred to other posts, but those mainly use RSA, and in my understanding RSA is generally suitable for encrypting communication data (with a private-public key pair). My need is different, I just want to store the data and retrieve it back by decrypting it. Therefore I don't need any private-public key pairs. Please let me know if you have any idea about this. A: Use the Advanced Encryption Standard (AES). It supports key lengths of 128, 192, or 256 bits. The algorithm is simple. The Sun Java website has a section explaining how to do AES encryption in Java. From Wikipedia... ... the Advanced Encryption Standard (AES), also known as Rijndael, is a block cipher adopted as an encryption standard by the U.S. government. It has been analyzed extensively and is now used worldwide, as was the case with its predecessor, the Data Encryption Standard (DES)... So as a rule of thumb you are not supposed to use DES or its variants because it is being phased out. As of now, it is better to use AES. There are other options like Twofish, Blowfish etc. also. Note that Twofish can be considered an advanced version of Blowfish. A: I have had good success in the past with http://www.bouncycastle.org/ (they have a C# version as well). A: You need to download and install the unlimited strength JCE policy file for your JDK. For JDK 6, it is on http://java.sun.com/javase/downloads/index.jsp at the very bottom. A: Combining 3 different replies gives what I think is the correct answer.
Download the encryption libraries from Bouncy Castle, then download the "Unlimited Strength Jurisdiction Policy" files from Oracle (the files are at the bottom of the download page). Make sure you read the readme file on how to install them. Once you have done this, using the sample code supplied with the Bouncy Castle package you should be able to encrypt your data. You can go with a triple DES implementation, which will give you a 112-bit key (often referred to as 128-bit, but only 112 of those bits are actually secure), or as previously stated, you can use AES. My money would be on AES. A: I'm not a crypto expert by any means (so take this suggestion with a grain of salt), but I have used Blowfish before, and I think you can use it for what you need. There is also a newer algorithm by the same guy called Twofish. Here is a website with a Java implementation, but be careful of the license (it says free for non-commercial use). You can find that link also from Bruce Schneier's website (the creator of both algorithms). A: Thanks Michael, after trying out many things in JCE, I finally settled on Bouncy Castle. JCE supports AES for encryption and PBE for password-based encryption, but it does not support a combination of both. I wanted the same thing, and that I found in Bouncy Castle. The example is at: http://forums.sun.com/thread.jspa?messageID=4164916
Q: Crash reporting in C for Linux Following this question: Good crash reporting library in C# Is there any library like CrashRpt.dll that does the same on Linux? That is, generate a failure report including a core dump and any necessary environment and notify the developer about it? Edit: This seems to be a duplicate of this question A: See Getting stack traces on Unix systems, automatically on Stack Overflow. A: Compile your code with debug symbols, enter unlimit coredumpsize (csh) or ulimit -c unlimited (bash) in your shell and you'll get a coredump in the same folder as the binary. Use gdb/ddd - open the program first and then open the core dump. You can check this out for additional info. A: @Ionut This handles generating the core dump, but it doesn't handle notifying the developer when other users have crashes. A: Note: there are two interesting registers in an x86 seg-fault crash. The first, EIP, specifies the code address at which the exception occurred. In RichQ's answer, he uses addr2line to show the source line that corresponds to the crash address. But EIP can be invalid; if you call a function pointer that is null, it can be 0x00000000, and if you corrupt your call stack, the return can pop any random value into EIP. The second, CR2, specifies the data address which caused the segmentation fault. In RichQ's example, he is setting up i as a null pointer, then accessing it. In this case, CR2 would be 0x00000000. But if you change: int j = *i to: int j = i[2]; Then you are trying to access address 0x00000008, and that's what would be found in CR2. A: Nathan, under what circumstances is a segment base non-zero? I've never seen that occur in my 5 years of Linux application development. Thanks. A: @Martin I do architectural validation for x86, so I'm very familiar with the architecture the processor provides, but very unfamiliar with how it's used. That's what I based my comment on. If CR2 can be counted on to give the correct answer, then I stand corrected.
A: Nathan, I wasn't insisting that you were incorrect; I was just saying that in my (limited) experience with Linux, the segment base is always zero. Maybe that's a good question for me to ask...
Q: ruby method names For a project I am working on in Ruby I am overriding the method_missing method so that I can set variables using a method call like this, similar to setting variables in an ActiveRecord object: Object.variable_name= 'new value' However, after implementing this I found out that many of the variable names have periods (.) in them. I have found this workaround: Object.send('variable.name=', 'new value') However, I am wondering: is there a way to escape the period so that I can use Object.variable.name= 'new value' A: Don't do it! Trying to create identifiers that are not valid in your language is not a good idea. If you really want to set variables like that, use attribute macros: attr_writer :bar attr_reader :baz attr_accessor :foo Okay, now that you have been warned, here's how to do it. Just return another instance of the same class every time you get a regular accessor, and collect the needed information as you go. class SillySetter def initialize path=nil @path = path end def method_missing name, value=nil new_path = @path ? "#{@path}.#{name}" : name if name.to_s[-1] == ?= puts "setting #{new_path} #{value}" else return self.class.new(new_path) end end end s = SillySetter.new s.foo = 5 # -> setting foo= 5 s.foo.bar.baz = 4 # -> setting foo.bar.baz= 4 I didn't want to encourage Ruby silliness, but I just couldn't help myself! A: The only reason I can think of to do this, is if you really REALLY hate the person who is going to be maintaining this code after you. And I don't mean 'he ran over my dog' hatred. I mean real steaming pulsing vein in temple hatred. So, in short, don't. :-) A: If there's no hope of changing the canonical names, you could alias the getters and setters manually: def variable_name send 'variable.name' end def variable_name=(value) send 'variable.name=', value end
Q: What is the cleanest way to direct wxWidgets to always use wxFileConfig? I am writing my first serious wxWidgets program. I'd like to use the wxConfig facility to make the program's user options persistent. However I don't want wxConfigBase to automatically use the Windows registry. Even though I'm initially targeting Windows, I'd prefer to use a configuration (eg .ini) file. Does anyone know a clean and simple way of doing this? Thanks. A: According to the source of the wx/config.h file, all you need to do is define the wxUSE_CONFIG_NATIVE symbol as 0 in your project and then it will always use wxFileConfig. A: The cleanest and simplest way is to use the wxFileConfig class.
Q: Deploying a project using LINQ to SQL I am working on a WinForms application using LINQ to SQL - and am building the app using a SQL Express instance on my workstation. The final installation of the project will be on a proper SQL Server 2005. The database has the same name, and all tables are identical but the hostname is different. The only way I have found to make my app work from one machine to the next is to re-open the code in Visual Studio, delete all of the objects referring to the SQL Express instance from my .dbml, save the project, connect to the other server, drag all of the references back on, and rebuild the application for release once more. An answer here suggested that one can simply modify the app.config/web.config file - which would work for ASP.NET, but this is a WinForms application. What's the correct way of making LINQ to SQL apps use a new database without having to re-open the app in Visual Studio? A: If I understand your problem correctly, you simply change the database's connection string in your app.config / web.config. Edit, post clarification: You have the connection strings stored somewhere. They might be in the app.config of your server. Still, you get them from somewhere and that somewhere may be in an app.config. Use that then :) A: One good solution is to add another connection to the dbml file itself. You can get to this by right-clicking on the field of the design surface and selecting properties. From there, you can add another connection string. Instead of deleting everything and redragging, just change the string and recompile. But if you want to get fancy-schmancy, you can have the program auto-detect whether it is being run locally or not, using this neat utility function: detect local And go from there to set the appropriate connection string based on the results. A: A more useful answer... app.config ends up as appname.exe.config when it has been built.
Rather than opening Visual Studio and modifying app.config, you can simply edit the appname.exe.config file and restart the app. A: I believe you can store the connection information in an app.config file and retrieve it from there. Here is a post about doing that with LINQ to SQL. Once you deploy it to a production server, you can just edit the XML to change the data source.
Q: Approximate string matching algorithms Here at work, we often need to find a string from a list of strings that is the closest match to some other input string. Currently, we are using the Needleman-Wunsch algorithm. The algorithm often returns a lot of false positives (if we set the minimum score too low), sometimes it doesn't find a match when it should (when the minimum score is too high) and, most of the time, we need to check the results by hand. We thought we should try other alternatives. Do you have any experience with the algorithms? Do you know how the algorithms compare to one another? I'd really appreciate some advice. PS: We're coding in C#, but you shouldn't care about it - I'm asking about the algorithms in general. Oh, I'm sorry I forgot to mention that. No, we're not using it to match duplicate data. We have a list of strings that we are looking for - we call it the search-list. And then we need to process texts from various sources (like RSS feeds, web-sites, forums, etc.) - we extract parts of those texts (there are entire sets of rules for that, but that's irrelevant) and we need to match those against the search-list. If the string matches one of the strings in the search-list, we need to do some further processing of the thing (which is also irrelevant). We cannot perform a normal comparison, because the strings extracted from the outside sources, most of the time, include some extra words etc. Anyway, it's not for duplicate detection. A: Related to the Levenshtein distance: you might wish to normalize it by dividing the result by the length of the longer string, so that you always get a number between 0 and 1 and so that you can compare the distance of pairs of strings in a meaningful way (the expression L(A, B) > L(A, C) - for example - is meaningless unless you normalize the distance). A: We are using the Levenshtein distance method to check for duplicate customers in our database. It works quite well.
A: Alternative algorithms to look at are agrep (Wikipedia entry on agrep), and the FASTA and BLAST biological sequence matching algorithms. These are special cases of approximate string matching, also in the Stony Brook algorithm repository. If you can specify the ways the strings differ from each other, you could probably focus on a tailored algorithm. For example, aspell uses some variant of "soundslike" (soundex-metaphone) distance in combination with a "keyboard" distance to accommodate bad spellers and bad typists alike. A: OK, Needleman-Wunsch (NW) is a classic end-to-end ("global") aligner from the bioinformatics literature. It was long ago available as "align" and "align0" in the FASTA package. The difference was that the "0" version wasn't as biased about avoiding end-gapping, which often allowed favoring high-quality internal matches more easily. Smith-Waterman, I suspect you're aware, is a local aligner and is the original basis of BLAST. FASTA had its own local aligner as well that was slightly different. All of these are essentially heuristic methods for estimating Levenshtein distance relevant to a scoring metric for individual character pairs (in bioinformatics, often given by Dayhoff/"PAM", Henikoff&Henikoff, or other matrices and usually replaced with something simpler and more reasonably reflective of replacements in linguistic word morphology when applied to natural language). Let's not be precious about labels: Levenshtein distance, as referenced in practice at least, is basically edit distance and you have to estimate it because it's not feasible to compute it generally, and it's expensive to compute exactly even in interesting special cases: the water gets deep quickly there, and thus we have heuristic methods of long and good repute. Now as to your own problem: several years ago, I had to check the accuracy of short DNA reads against a reference sequence known to be correct and I came up with something I called "anchored alignments".
The idea is to take your reference string set and "digest" it by finding all locations where a given N-character substring occurs. Choose N so that the table you build is not too big, but also so that substrings of length N are not too common. For small alphabets like DNA bases, it's possible to come up with a perfect hash on strings of N characters, make a table, and chain the matches in a linked list from each bin. The list entries must identify the sequence and start position of the substring that maps to the bin in whose list they occur. These are "anchors" in the list of strings to be searched, at which an NW alignment is likely to be useful. When processing a query string, you take the N characters starting at some offset K in the query string, hash them, look up their bin, and if the list for that bin is nonempty then you go through all the list records and perform alignments between the query string and the search string referenced in the record. When doing these alignments, you line up the query string and the search string at the anchor and extract a substring of the search string that is the same length as the query string and which contains that anchor at the same offset, K. If you choose a long enough anchor length N, and a reasonable set of values of offset K (they can be spread across the query string or be restricted to low offsets), you should get a subset of possible alignments and often will get clearer winners. Typically you will want to use the less end-biased, align0-like NW aligner. This method tries to boost NW a bit by restricting its input, and this has a performance gain because you do fewer alignments and they are more often between similar sequences. Another good thing to do with your NW aligner is to allow it to give up after some amount or length of gapping occurs, to cut costs, especially if you know you're not going to see or be interested in middling-quality matches.
Finally, this method was used on a system with small alphabets, with K restricted to the first 100 or so positions in the query string, and with search strings much larger than the queries (the DNA reads were around 1000 bases and the search strings were on the order of 10000, so I was looking for approximate substring matches justified by an estimate of edit distance specifically). Adapting this methodology to natural language will require some careful thought: you lose on alphabet size but you gain if your query strings and search strings are of similar length. Either way, allowing more than one anchor from different ends of the query string to be used simultaneously might be helpful in further filtering data fed to NW. If you do this, be prepared to possibly send overlapping strings, each containing one of the two anchors, to the aligner and then reconcile the alignments... or possibly further modify NW to emphasize keeping your anchors mostly intact during an alignment, using penalty modification during the algorithm's execution. Hope this is helpful or at least interesting. A: Use an FM-index with backtracking, similar to the one in the Bowtie fuzzy aligner. A: In order to minimize mismatches due to slight variations or errors in spelling, I've used the Metaphone algorithm, then Levenshtein distance (scaled to 0-100 as a percentage match) on the Metaphone encodings for a measure of closeness. That seems to have worked fairly well. A: To expand on Cd-MaN's answer, it sounds like you're facing a normalization problem. It isn't obvious how to handle scores between alignments with varying lengths. Given what you are interested in, you may want to obtain p-values for your alignments. If you are using Needleman-Wunsch, you can obtain these p-values using Karlin-Altschul statistics: http://www.ncbi.nlm.nih.gov/BLAST/tutorial/Altschul-1.html BLAST can perform local alignments and evaluate them using these statistics. If you are concerned about speed, this would be a good tool to use.
Another option is to use HMMER. HMMER uses Profile Hidden Markov Models to align sequences. Personally, I think this is a more powerful approach since it also provides positional information. http://hmmer.janelia.org/
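The anchor-table scheme from the "anchored alignments" answer above can be sketched as follows (a hypothetical Python illustration with invented names; in the real method each candidate window would then be handed to an align0-style NW aligner rather than compared directly):

```python
from collections import defaultdict

def build_anchor_index(search_strings, n):
    """Map every length-n substring to the (string_id, position) pairs where it occurs."""
    index = defaultdict(list)
    for sid, s in enumerate(search_strings):
        for pos in range(len(s) - n + 1):
            index[s[pos:pos + n]].append((sid, pos))
    return index

def candidate_windows(query, index, search_strings, n, offsets):
    """For each anchor hit, extract a window of the search string, positioned so the
    anchor sits at the same offset as in the query; these windows are what you would
    hand to an NW-style aligner."""
    candidates = []
    for k in offsets:
        anchor = query[k:k + n]
        if len(anchor) < n:
            continue  # query too short for an anchor at this offset
        for sid, pos in index.get(anchor, ()):
            start = max(0, pos - k)  # line the anchor up at offset k
            window = search_strings[sid][start:start + len(query)]
            candidates.append((sid, window))
    return candidates
```

Only the windows that share an exact N-character anchor with the query ever reach the expensive aligner, which is where the speedup comes from.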
{ "language": "en", "url": "https://stackoverflow.com/questions/49263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46" }
Q: Embedded custom-tag in dynamic content (nested tag) not rendering I have a page that pulls dynamic content from a javabean and passes the list of objects to a custom tag for processing into html. Within each object is a bunch of html to be output that contains a second custom tag that I would like to also be rendered. The problem is that the tag invocation is rendered as plain text. An example might serve me better. 1. Pull information from a database and return it to the page via a javabean. Send this info to a custom tag for outputting. <jsp:useBean id="ImportantNoticeBean" scope="page" class="com.mysite.beans.ImportantNoticeProcessBean"/> <%-- Declare the bean --%> <c:forEach var="noticeBean" items="${ImportantNoticeBean.importantNotices}"> <%-- Get the info --%> <mysite:notice importantNotice="${noticeBean}"/> <%-- Give it to the tag for processing --%> </c:forEach> This tag should output a box div like so: *SNIP* class for custom tag def and method setup etc out.println("<div class=\"importantNotice\">"); out.println(" " + importantNotice.getMessage()); out.println(" <div class=\"importantnoticedates\">Posted: " + importantNotice.getDateFrom() + " End: " + importantNotice.getDateTo() + "</div>"); out.println(" <div class=\"noticeAuthor\">- " + importantNotice.getAuthor() + "</div>"); out.println("</div>"); *SNIP* This renders fine and as expected: <div class="importantNotice"> <p>This is a very important message. Everyone should pay attention to it.</p> <div class="importantnoticedates">Posted: 2008-09-08 End: 2008-09-08</div> <div class="noticeAuthor">- The author</div> </div> 2. If, in the above example, I were to have a custom tag in the importantNotice.getMessage() String: *SNIP* "This is a very important message. Everyone should pay attention to it.
<mysite:quote author="Some Guy">Quote this</mysite:quote>" *SNIP* The important notice renders fine, but the quote tag will not be processed and is simply inserted into the string as plain text/html: <div class="importantNotice"> <p>This is a very important message. Everyone should pay attention to it. <mysite:quote author="Some Guy">Quote this</mysite:quote></p> <div class="importantnoticedates">Posted: 2008-09-08 End: 2008-09-08</div> <div class="noticeAuthor">- The author</div> </div> Rather than: <div class="importantNotice"> <p>This is a very important message. Everyone should pay attention to it. <div class="quote">Quote this <span class="authorofquote">Some Guy</span></div></p> // or whatever I choose as the output <div class="importantnoticedates">Posted: 2008-09-08 End: 2008-09-08</div> <div class="noticeAuthor">- The author</div> </div> I know this has to do with processors and pre-processors, but I am not too sure about how to make this work. A: Just using <bodycontent>JSP</bodycontent> is not enough. You should do something like JspFragment body = getJspBody(); StringWriter stringWriter = new StringWriter(); StringBuffer buff = stringWriter.getBuffer(); buff.append("<h1>"); body.invoke(stringWriter); buff.append("</h1>"); out.println(stringWriter); to get inner tags rendered (the example is for the SimpleTag doTag method). However, in the question's code I see that the inner tag is coming from a string, which is not rendered as part of the JSP but is just some random string. I do not think you can force the JSP translator to parse it.
You can use a regexp in your case, or try to redesign your code in a way to have a JSP like this: <jsp:useBean id="ImportantNoticeBean" scope="page" class="com.mysite.beans.ImportantNoticeProcessBean"/> <c:forEach var="noticeBean" items="${ImportantNoticeBean.importantNotices}"> <mysite:notice importantNotice="${noticeBean}"> <mysite:quote author="Some Guy">Quote this</mysite:quote> <mysite:messagebody author="Some Guy" /> </mysite:notice> </c:forEach> I would go with the regexp. A: I would be inclined to change the "architecture of your tagging": the markup you wish to produce should not come from a tag embedded inside the data class, as it is "markup" designed for a page (though, obscurely, it is possible to get at the evaluating program thread of the JSP Servlet engine). What you would probably find better and more within standard procedure would be using "cooperating tags" with a BodyTagSupport class extension, returning EVAL_BODY_BUFFERED in the doStartTag() method to reprocess the body, and/or object sharing, such as storing retrieved data in the application hierarchy or on the session for the user. See the Oracle J2EE custom tags tutorial for more information.
{ "language": "en", "url": "https://stackoverflow.com/questions/49267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Reading default application settings in C# I have a number of application settings (in user scope) for my custom grid control. Most of them are color settings. I have a form where the user can customize these colors, and I want to add a button for reverting to the default color settings. How can I read the default settings? For example: * I have a user setting named CellBackgroundColor in Properties.Settings. * At design time I set the value of CellBackgroundColor to Color.White using the IDE. * The user sets CellBackgroundColor to Color.Black in my program. * I save the settings with Properties.Settings.Default.Save(). * The user clicks on the Restore Default Colors button. Now, Properties.Settings.Default.CellBackgroundColor returns Color.Black. How do I go back to Color.White? A: I'm not really sure this is necessary - there must be a neater way - but otherwise I hope someone finds this useful: public static class SettingsPropertyCollectionExtensions { public static T GetDefault<T>(this SettingsPropertyCollection me, string property) { string val_string = (string)me[property].DefaultValue; return (T)Convert.ChangeType(val_string, typeof(T)); } } Usage: var setting = Settings.Default.Properties.GetDefault<double>("MySetting"); A: @ozgur, Settings.Default.Properties["property"].DefaultValue // initial value from config file Example: string foo = Settings.Default.Foo; // Foo = "Foo" by default Settings.Default.Foo = "Boo"; Settings.Default.Save(); string modifiedValue = Settings.Default.Foo; // modifiedValue = "Boo" string originalValue = Settings.Default.Properties["Foo"].DefaultValue as string; // originalValue = "Foo" A: Properties.Settings.Default.Reset() will reset all settings to their original values. A: I've got round this problem by having 2 sets of settings. I use the one that Visual Studio adds by default for the current settings, i.e. Properties.Settings.Default.
But I also add another settings file to my project ("Project -> Add New Item -> General -> Settings File") and store the actual default values in there, i.e. Properties.DefaultSettings.Default. I then make sure that I never write to the Properties.DefaultSettings.Default settings, just read from them. Changing everything back to the default values is then just a case of setting the current values back to the default values. A: Reading "Windows 2.0 Forms Programming", I stumbled upon these 2 useful methods that may be of help in this context: ApplicationSettingsBase.Reload ApplicationSettingsBase.Reset From MSDN: Reload contrasts with Reset in that the former will load the last set of saved application settings values, whereas the latter will load the saved default values. So the usage would be: Properties.Settings.Default.Reset() Properties.Settings.Default.Reload() A: How do I go back to Color.White? Two ways you can do this: * Save a copy of the settings before the user changes them. * Cache the user-modified settings and save them to Properties.Settings before the application closes. A: I found that calling ApplicationSettingsBase.Reset would have the effect of resetting the settings to their default values, but also saving them at the same time. The behaviour I wanted was to reset them to default values but not to save them (so that if the user did not like the defaults, until they were saved they could revert them back). I wrote an extension method that was suitable for my purposes: using System; using System.Configuration; namespace YourApplication.Extensions { public static class ExtensionsApplicationSettingsBase { public static void LoadDefaults(this ApplicationSettingsBase that) { foreach (SettingsProperty settingsProperty in that.Properties) { that[settingsProperty.Name] = Convert.ChangeType(settingsProperty.DefaultValue, settingsProperty.PropertyType); } } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/49269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Safe integer parsing in Ruby I have a string, say '123', and I want to convert it to the integer 123. I know you can simply do some_string.to_i, but that converts 'lolipops' to 0, which is not the effect I have in mind. I want it to blow up in my face when I try to convert something invalid, with a nice and painful Exception. Otherwise, I can't distinguish between a valid 0 and something that just isn't a number at all. EDIT: I was looking for the standard way of doing it, without regex trickery. A: I like Myron's answer, but it suffers from the Ruby disease of "I no longer use Java/C# so I'm never going to use inheritance again". Opening any class can be fraught with danger and should be done sparingly, especially when it's part of Ruby's core library. I'm not saying don't ever use it, but it's usually easy to avoid, and there are better options available, e.g. class IntegerInString < String def initialize( s ) fail ArgumentError, "The string '#{s}' is not an integer in a string, it's just a string." unless s =~ /^\-?[0-9]+$/ super end end Then when you wish to use a string that could be a number, it's clear what you're doing, and you don't clobber any core class, e.g. n = IntegerInString.new "2" n.to_i # => 2 IntegerInString.new "blob" ArgumentError: The string 'blob' is not an integer in a string, it's just a string. You can add all sorts of other checks in the initialize, like checking for binary numbers etc. The main thing, though, is that Ruby is for people, and being for people means clarity. Naming an object via its variable name and its class name makes things much clearer. A: I had to deal with this in my last project, and my implementation was similar, but a bit different: class NotAnIntError < StandardError end class String def is_int? self =~ /^-?[0-9]+$/ end def safe_to_i return self.to_i if is_int?
raise NotAnIntError, "The string '#{self}' is not a valid integer.", caller end end class Integer def safe_to_i return self end end class StringExtensions < Test::Unit::TestCase def test_is_int assert "98234".is_int? assert "-2342".is_int? assert "02342".is_int? assert !"+342".is_int? assert !"3-42".is_int? assert !"342.234".is_int? assert !"a342".is_int? assert !"342a".is_int? end def test_safe_to_i assert 234234 == 234234.safe_to_i assert 237 == "237".safe_to_i begin "a word".safe_to_i fail 'safe_to_i did not raise the expected error.' rescue NotAnIntError # this is what we expect.. end end end A: someString = "asdfasd123" number = someString.to_i if someString != number.to_s puts "oops, this isn't a number" end Probably not the cleanest way to do it, but it should work. A: This might work: i.to_i if i.match(/^\d+$/) A: Also be aware of the effects that the currently accepted solution may have on parsing hex, octal, and binary numbers: >> Integer('0x15') # => 21 >> Integer('0b10') # => 2 >> Integer('077') # => 63 In Ruby, numbers that start with 0x or 0X are hex, 0b or 0B are binary, and just 0 are octal. If this is not the desired behavior, you may want to combine that with some of the other solutions that check if the string matches a pattern first, like the /\d+/ regular expressions, etc. A: Ruby has this functionality built in: Integer('1001') # => 1001 Integer('1001 nights') # ArgumentError: invalid value for Integer: "1001 nights" As noted in the answer by Joseph Pecoraro, you might want to watch for strings that are valid non-decimal numbers, such as those starting with 0x for hex and 0b for binary, and potentially more tricky numbers starting with zero that will be parsed as octal.
Ruby 1.9.2 added an optional second argument for the radix, so the above issue can be avoided: Integer('23') # => 23 Integer('0x23') # => 35 Integer('023') # => 19 Integer('0x23', 10) # => #<ArgumentError: invalid value for Integer: "0x23"> Integer('023', 10) # => 23 A: Another unexpected behavior with the accepted solution (with 1.8; 1.9 is OK): >> Integer(:foobar) => 26017 >> Integer(:yikes) => 26025 So if you're not sure what is being passed in, make sure you add a .to_s. A: Re: Chris's answer Your implementation lets things like "1a" or "b2" through. How about this instead: def safeParse2(strToParse) if strToParse =~ /\A\d+\Z/ strToParse.to_i else raise Exception end end ["100", "1a", "b2", "t"].each do |number| begin puts safeParse2(number) rescue Exception puts "#{number} is invalid" end end This outputs: 100 1a is invalid b2 is invalid t is invalid
{ "language": "en", "url": "https://stackoverflow.com/questions/49274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "167" }
Q: File downloads in IE6 I've come across a rather interesting (and frustrating) problem with IE6. We are serving up some server-generated PDFs and then simply setting headers in PHP to force a browser download of the file. It works fine and all, except in IE6, but only if the Windows user account is set to standard user (i.e. not administrator). Since this is for a corporate environment, of course all their accounts are set up this way. The weird thing is that in the download dialog, the Content-Type is not recognized: header( 'Pragma: public' ); header( 'Expires: 0' ); header( 'Cache-Control: must-revalidate, pre-check=0, post-check=0' ); header( 'Cache-Control: public' ); header( 'Content-Description: File Transfer' ); header( 'Content-Type: application/pdf' ); header( 'Content-Disposition: attachment; filename="xxx.pdf"' ); header( 'Content-Transfer-Encoding: binary' ); echo $content; exit; I also tried writing the file content to a temporary file first so I could also set the Content-Length in the header, but that didn't help. A: These headers are bogus! Content-Transfer-Encoding: binary This header is copied from e-mail headers. It doesn't apply to HTTP, simply because HTTP doesn't have any other mode of transfer than binary. Setting it makes as much sense as setting X-Bits-Per-Byte: 8. Cache-control: pre-check=0, post-check=0 These non-standard values define when IE should check whether cached content is still fresh. 0 is the default, so setting it to 0 is a waste of time. These directives apply only to cacheable content, and Expires: 0 and must-revalidate hint that you wanted to make it non-cacheable. Content-Description: File Transfer This is another e-mail copycat. By design this header doesn't affect the download in any way. It's just informative free-form text. It's technically as useful as an X-Hi-Mom: I'm sending you a file! header.
header( 'Cache-Control: must-revalidate, pre-check=0, post-check=0' ); header( 'Cache-Control: public' ); In PHP the second line completely overwrites the first one. You seem to be stabbing in the dark. What really makes a difference: Content-Disposition: attachment You don't have to insert a filename there (you can use mod_rewrite or the index.php/fakefilename.doc trick - it gives much better support for special characters and works in browsers that ignore the optional Content-Disposition header). In IE it makes a difference whether the file is in the cache or not ("Open" doesn't work for non-cacheable files), and whether the user has a plug-in that claims to support the type of file that IE detects. To disable caching you only need Cache-control: no-cache (without 20 extra fake headers), and to make a file cacheable you don't have to send anything. NB: PHP has a horrible misfeature called session.cache_limiter which hopelessly screws up HTTP headers unless you set it to none. ini_set('session.cache_limiter','none'); // tell PHP to stop screwing up HTTP A: Some versions of IE seem to take header( 'Expires: 0' ); header( 'Cache-Control: must-revalidate, pre-check=0, post-check=0' ); way too seriously and remove the downloaded content before it's passed to the plugin to display it. Remove these two and you should be fine. And make sure you are not using any server-side GZIP compression when working with PDFs, because some versions of Acrobat seem to struggle with this. I know I'm vague here, but the above tips are based on real-world experience I got using a web application serving dynamically built PDFs containing barcodes.
I don't know what versions are affected, I only know that using the two "tricks" above made the support calls go away :p A: I had the exact same problem about a year ago, and after much googling and research, my headers (from Java code) for IE6 and PDFs look like this: response.setHeader("Content-Type", "application/pdf; name=" + file.getName()); response.setContentType("application/pdf"); response.setHeader("Last-Modified", getHeaderDate(file.getFile())); response.setHeader("Content-Length", file.getLength()); Drop everything else. There is apparently something a bit whacky with IE6, caching, forced downloading and plug-ins. I hope this works for you... a small difference for me is that the request initially comes from a Flash swf file. But that should not matter. A: I appreciate the time you guys spent on this post. I tried several combinations and finally got my symfony project to work. Here I post the solution in case anyone has the same problem: public function download(sfResponse $response) { $response->clearHttpHeaders(); $response->setHttpHeader('Pragma: public', true); $response->addCacheControlHttpHeader("Cache-control","private"); $response->setContentType('application/octet-stream', true); $response->setHttpHeader('Content-Length', filesize(sfConfig::get('sf_web_dir') . sfConfig::get('app_paths_docPdf') . $this->getFilename()), true); $response->setHttpHeader("Content-Disposition", "attachment; filename=\"". $this->getFilename() ."\""); $response->setHttpHeader('Content-Transfer-Encoding', 'binary', true); $response->setHttpHeader("Content-Description","File Transfer"); $response->sendHttpHeaders(); $response->setContent(readfile(sfConfig::get('sf_web_dir') . sfConfig::get('app_paths_docPdf') . $this->getFilename())); return sfView::NONE; } This works just fine for me in IE6, IE7, Chrome, Firefox. Hope this will help someone. A: As pilif already mentions, make sure to turn off the server-side gzip compression.
For me this has caused problems with PDF files (among other types) and, for maybe-not-so-obscure reasons, also with .zip files, both under Internet Explorer and Firefox. As far as I could tell, the last bit of the zip footer would get stripped (at least by Firefox), causing a corrupted format. In PHP you can use the following code: ini_set("zlib.output_compression",0); A: The following bit of Java code works for me (tested on Firefox 2 and 3, IE 6 and 7): response.setHeader("Content-Disposition", "attachment; filename=\"" + file.getName() + "\""); response.setContentType(getServletContext().getMimeType(file.getName())); response.setContentLength((int) file.length()); No other headers were necessary at all. Also, I tested this code both with gzip compression on and off (using a separate servlet filter that does the compression). It doesn't make any difference (it works without any problem in the four browsers I tested it on). Plus, this works for other file types as well. A: You can add an additional parameter to the URL that the server won't read; it might help too: http://www.mycom.com/services/pdf?action=blahblah&filename=pdf0001.pdf I have run into cases where IE will be more likely to read the filename on the end of the URL than any of the headers. A: I had a similar problem, but it might not be exactly related. My issue was that IE6 seems to have a problem with special characters (specifically slashes) in the file name. Removing these fixed the issue. A: If you are using SSL: make sure you do not include any cache control (or Pragma) headers. There is a bug in IE6 which will prevent users from downloading files if cache control headers are used. They will get an error message. I pulled my hair out over this for 2 days, so hopefully this message helps someone.
A: simply switch to this content type and it will work, also be sure Pragma ist set to something NOT equal "no-cache" header( 'Content-type: application/octet-stream'); # force download, no matter what mimetype header( 'Content-Transfer-Encoding: binary' ); # is always ok, also for plain text
{ "language": "en", "url": "https://stackoverflow.com/questions/49284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Will PRISM help? I am considering building an application using PRISM (Composite WPF Guidance/Library). The application modules will be vertically partitioned (i.e. Customers, Suppliers, Sales Orders, etc). This is still all relatively easy... I also have a Shell with a main region where all the work will happen, but now I need the following behavior: I need a menu on my main Shell, and when each one of the options gets clicked (like customers, suppliers, etc) I need to find the module and load it into the region (only 1 view at a time). Does anybody know of any sample applications with this type of behavior? All the samples are more focused on having all the modules loaded on the main shell. And should my menu bar also be a module? [UPDATE] How do I inject a module into a region based on it being selected from a menu? All the examples show that the module injects the view into the region on initialize. I need to only inject the view if the module is selected from a menu. A: Yes, PRISM will help you out here. A number of things here are worth mentioning. RE: Is Prism right for me? You can load a module on demand. PRISM has the capability of loading a module at runtime, so in your case if you boot up the said solution using the Shell and ModuleA, and your user then triggers an event (i.e. a menu choice), it can then allow you to dynamically load ModuleB and inject that into play. To be clear though, you really need to sit down and do your homework here, as you need to ensure ModuleB doesn't have any of its own dependencies on other modules etc. (typically it's wise to use an Infrastructure Module. I've used techniques where I have a manifest of modules that I look up in XML that lists their absolute dependencies, and then I make sure they are loaded first, then I load ModuleB). See Load Modules on Demand via the PRISM help docs (Development Activities).
Also look up Prepare a Module for Remote Downloading. RE: Injecting a view at runtime To inject a View into a Region via a menu is a simple case of accessing the IRegionManager and then adding it. To do this, make sure in the constructor for the said ViewModel/Presenter/Controller you're using, put: MyConstructor(IRegionManager regionManager, IUnityContainer container) As with PRISM, you can pretty much add any object you want into your construct and PRISM will ensure it arrives there on time and on budget (hehe). From there it's the normal approach you'd take with adding a view... e.g.: IMyViewInstance myViewInstance = this.container.Resolve<IMyViewInstance>(); IRegion myRegion = this.regionManager.Regions["YourRegion"]; myRegion.Add(myViewInstance); myRegion.Activate(myViewInstance); And all should come together! :) Note: * Make sure you set a local reference to the container and regionManager at construction (this.container = container etc). * If you're not sure where the above namespaces exist, right click on IUnityContainer for example and let Visual Studio RESOLVE it (right click menu). Scott Barnes - Rich Platforms Product Manager - Microsoft. A: Just finished watching Brian Noyes on Prism at dnrTV. This answered all my questions... A: It's not clear what you mean by saying "find the module and load it into the region". You can load a module's view and add it to the shell. The Composite UI app block and CompositeWPF are built on top of the IoC pattern. It means that your modules should inject their menu items into the shell's menu strip or subscribe to events generated by the shell. A: You could have your main region be a ContentControl; this way only 1 view will be active at a time. You can also load your modules "On Demand". There is a Quickstart that shows you how to do this.
You should also keep in mind that if the module was already initialized once, initializing it a second time will not execute the Initialize() method on the module. It might be useful that when you click on the menu, this loads the module on demand (which will not load the view yet), and then you can fire an event through the EventAggregator, so the module can now add the view (use the named approach to avoid adding the view twice) and then activate the view (which will make sure the view is shown in the region). Hope this helps, Julian A: To save you time, check John Papa's Presentation Framework article. It will be easier if you have a third object (a Screen Conductor) to handle showing and hiding your screens in regions.
{ "language": "en", "url": "https://stackoverflow.com/questions/49299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to Identify Postback event in Page_Load We have some legacy code that needs to identify in the Page_Load which event caused the postback. At the moment this is implemented by checking the Request data like this... if (Request.Form["__EVENTTARGET"] != null && (Request.Form["__EVENTTARGET"].IndexOf("BaseGrid") > -1 // BaseGrid event ( e.g. sort)        || Request.Form["btnSave"] != null // Save button This is pretty ugly and breaks if someone renames a control. Is there a better way of doing this? Rewriting each page so that it does not need to check this in Page_Load is not an option at the moment. A: This should get you the control that caused the postback: public static Control GetPostBackControl(Page page) { Control control = null; string ctrlname = page.Request.Params.Get("__EVENTTARGET"); if (ctrlname != null && ctrlname != string.Empty) { control = page.FindControl(ctrlname); } else { foreach (string ctl in page.Request.Form) { Control c = page.FindControl(ctl); if (c is System.Web.UI.WebControls.Button) { control = c; break; } } } return control; } Read more about this on this page: http://ryanfarley.com/blog/archive/2005/03/11/1886.aspx A: In addition to the above code, if control is of type ImageButton then add the below code, if (control == null) { for (int i = 0; i < page.Request.Form.Count; i++) { if ((page.Request.Form.Keys[i].EndsWith(".x")) || (page.Request.Form.Keys[i].EndsWith(".y"))) { control = page.FindControl(page.Request.Form.Keys[i].Substring(0, page.Request.Form.Keys[i].Length - 2)); break; } } } A: I am just posting the entire code (which includes the image button / additional control check that causes postback). Thanks Espo. 
public Control GetPostBackControl(Page page) { Control control = null; string ctrlname = page.Request.Params.Get("__EVENTTARGET"); if ((ctrlname != null) && ctrlname != string.Empty) { control = page.FindControl(ctrlname); } else { foreach (string ctl in page.Request.Form) { Control c = page.FindControl(ctl); if (c is System.Web.UI.WebControls.Button) { control = c; break; } } } // handle the ImageButton postbacks if (control == null) { for (int i = 0; i < page.Request.Form.Count; i++) { if ((page.Request.Form.Keys[i].EndsWith(".x")) || (page.Request.Form.Keys[i].EndsWith(".y"))) { control = page.FindControl(page.Request.Form.Keys[i].Substring(0, page.Request.Form.Keys[i].Length - 2)); break; } } } return control; }
Q: Can parallel traversals be done in MATLAB just as in Python? Using the zip function, Python allows for loops to traverse multiple sequences in parallel.

for (x,y) in zip(List1, List2):

Does MATLAB have an equivalent syntax? If not, what is the best way to iterate over two parallel arrays at the same time using MATLAB?

A: Tested only in Octave (no MATLAB license). Variations of arrayfun() exist; check the documentation.

dostuff = @(my_ten, my_one) my_ten + my_one;
tens = [ 10 20 30 ];
ones = [ 1 2 3 ];
x = arrayfun(dostuff, tens, ones);
x

Yields:

x = 11 22 33

A: If I'm not mistaken, the zip function you use in Python creates a pair of the items found in list1 and list2. Basically it is still a for loop, with the addition that it will retrieve the data from the two separate lists for you, instead of you having to do it yourself. So maybe your best option is to use a standard for loop like this:

for i=1:length(a)
    c(i) = a(i) + b(i);
end

or whatever you have to do with the data. If you really are talking about parallel computing then you should take a look at the Parallel Computing Toolbox for MATLAB, and more specifically at parfor.

A: If x and y are column vectors, you can do:

for i=[x';y']
    % do stuff with i(1) and i(2)
end

(with row vectors, just use x and y). Here is an example run:

>> x=[1 ; 2; 3;]
x =
 1
 2
 3
>> y=[10 ; 20; 30;]
y =
 10
 20
 30
>> for i=[x';y']
disp(['size of i = ' num2str(size(i)) ', i(1) = ' num2str(i(1)) ', i(2) = ' num2str(i(2))])
end
size of i = 2 1, i(1) = 1, i(2) = 10
size of i = 2 1, i(1) = 2, i(2) = 20
size of i = 2 1, i(1) = 3, i(2) = 30
>>

A: I would recommend joining the two arrays for the computation:

% assuming you have column vectors a and b
x = [a b];
for i = 1:length(a)
    % do stuff with one row...
    x(i,:);
end

This will work great if your functions can work with vectors. Then again, many functions can even work with matrices, so you wouldn't even need the loop.
A: for (x,y) in zip(List1, List2): could be written, for example, as:

>> for row = {'string' 10
>>            'property' 100 }'
>> fprintf([row{1,:} '%d\n'], row{2,:});
>> end
string10
property100

This is tricky because the cell is more than 2x2, and the cell is even transposed. Please try this. And this is another example:

>> cStr = cell(1,10); cStr(:) = {'string'};
>> cNum = cell(1,10); for cnt=1:10, cNum(cnt) = {cnt}; end
>> for row = {cStr{:}; cNum{:}}
>> fprintf([row{1,:} '%d\n'], row{2,:});
>> end
string1
string2
string3
string4
string5
string6
string7
string8
string9
string10

A: If I have two arrays al and bl with the same size in dimension 2, and I want to iterate through this dimension (say, to multiply al(i)*bl(:,i)), then the following code will do:

al = 1:9;
bl = [11:19; 21:29];
for data = [num2cell(al); num2cell(bl,1)]
    [a, b] = data{:};
    disp(a*b)
end

A: for loops in MATLAB used to be slow, but this is not true anymore. So vectorizing is not always the miracle solution. Just use the profiler, and the tic and toc functions, to help you identify possible bottlenecks.
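A: For comparison, here is what Python's zip actually does - it pairs up corresponding elements and stops at the shortest sequence - which is the behaviour the MATLAB/Octave idioms above emulate (variable names are just illustrative):

```python
# zip pairs corresponding elements of several sequences;
# iteration stops when the shortest input runs out.
tens = [10, 20, 30]
ones = [1, 2, 3]

print(list(zip(tens, ones)))  # [(10, 1), (20, 2), (30, 3)]

# The typical parallel-traversal loop:
sums = [x + y for x, y in zip(tens, ones)]
print(sums)                   # [11, 22, 33]
```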
Q: Sharepoint scheduling with SSRS issue I have some scheduled SSRS reports (integrated mode) that get emailed by subscription. All of a sudden the reports have stopped being emailed. I get the error: Failure sending mail: Report Server has encountered a SharePoint error. I don't even know where to start to look as I can't get into SSRS and my Sharepoint knowledge is lacking. Can you help? A: Did you enable trace logging in SharePoint? You can activate it by going to the Central Administration website > Operations > Diagnostic Logging > Trace Logging. Perhaps we can get a more detailed error from there...
Q: VS 2005 & 2008 library linking Is it correct to link a static library (.lib) compiled with VS 2005 into a program which is compiled with VS 2008? Both the library and my program are written in C++. This program is run on the Windows Mobile 6 Professional emulator. This seems to work: there are no linking errors. However, the program crashes during startup because strange things happen inside the linked library. E.g. the lib can return a vector of characters whose size is a big negative number. There are no such problems when the program is compiled with VS 2005. What is even stranger, the problem occurs only when using the release configuration for the build. When compiling using the debug configuration the problem doesn't occur.

A: It's not incorrect to link to an older library in the way you describe, but it doesn't surprise me you're seeing some odd behavior. A couple of sanity checks:

* Are both files using the same versions of the same run-time libraries?
* Is your .EXE application "seeing" the same header files that the .LIB was built against? Make sure that the _WIN32_WINNT (etc.) macros are declared properly.

And when you say .LIB, do you mean a true static library (mylib.lib) or an import library for a DLL (mylib.lib -> mylib.dll)? And what are the compile/link settings for your VS2008 executable project?

A: VS2005 and VS2008 use different STL implementations. When the VS2005 code returns a vector, the object has a memory layout different from what VS2008 expects. That should be the reason for the broken values you see in the returned data. As a rule of thumb, you should always compile all C++ modules of a project with the same compiler and all settings/#defines equal. One particular #define that causes similar behaviour is the _SECURE_SCL #define of VS2008. Two modules compiled with different settings will create exactly your problems, because #defining _SECURE_SCL introduces more member variables in various C++ library classes.
A: Addition: As Timbo has pointed out, VS 2005 and VS 2008 use different STL implementations. However, you can use VS 2008 to build against the old STL if VS 2005 is installed, too: * *Open your library project in VS 2008. *Go to Tools > Options > Projects and Solutions > VC++ Directories *Select your device platform in the drop-down at the top. *Change the paths from the VS9 to the VS8 folders. This way, you can use VS 2008 to build libraries for use with VS 2005. (Worked for me.)
Q: Querying collections of value type in the Criteria API in Hibernate In my database, I have an entity table (let's call it Entity). Each entity can have a number of entity types, and the set of entity types is static. Therefore, there is a connecting table that contains rows of the entity id and the name of the entity type. In my code, EntityType is an enum, and Entity is a Hibernate-mapped class. In the Entity code, the mapping looks like this:

@CollectionOfElements
@JoinTable(
    name = "ENTITY-ENTITY-TYPE",
    joinColumns = @JoinColumn(name = "ENTITY-ID")
)
@Column(name = "ENTITY-TYPE")
public Set<EntityType> getEntityTypes() {
    return entityTypes;
}

Oh, did I mention I'm using annotations? Now, what I'd like to do is create an HQL query or search using a Criteria for all Entity objects of a specific entity type. This page in the Hibernate forum says this is impossible, but then this page is 18 months old. Can anyone tell me if this feature has been implemented in one of the latest releases of Hibernate, or planned for the coming release?

A: HQL:

select entity from Entity entity where :type = some elements(entity.types)

I think that you can also write it like:

select entity from Entity entity where :type in elements(entity.types)

A: Is your relationship bidirectional, i.e., does EntityType have an Entity property? If so, you can probably do something like

select entity.Name from EntityType where name = ?
Q: How to prevent a hyperlink from linking Is it possible to prevent an asp.net Hyperlink control from linking, i.e. so that it appears as a label, without actually having to replace the control with a label? Maybe using CSS or setting an attribute? I know that marking it as disabled works but then it gets displayed differently (greyed out). To clarify my point, I have a list of user names at the top of my page which are built dynamically using a user control. Most of the time these names are linkable to an email page. However, if the user has been disabled the name is displayed in grey but currently still links to the email page. I want these disabled users to not link. I know that really I should be replacing them with a label but this does not seem quite as elegant as just removing the linking ability using CSS, say (if that's possible). They are already displayed in a different colour so it's obvious that they are disabled users. I just need to switch off the link.

A: This sounds like a job for jQuery. Just give a specific class name to all of the HyperLink controls that you want the URLs removed from, and then apply the following jQuery snippet at the bottom of your page:

$(document).ready(function() {
    $('a.NoLink').removeAttr('href');
});

All of the HyperLink controls with the class name "NoLink" will automatically have their URLs removed, and the link will appear to be nothing more than text. A single line of jQuery can solve your problem.

A: I'm curious about what it is you wish to accomplish with that. Why use a link at all? Is it just for the formatting? In that case, just use a <span> in HTML and use stylesheets to make the format match the links. Or you use the link and attach an onClick event where you "return false;", which will make the browser not do the navigation - if JS is enabled. But: Isn't that terribly confusing for your users? Why create something that looks like a link but does nothing? Can you provide more details?
I have this feeling that you are trying to solve a bigger problem which has a way better solution than to cripple a link :-)

A: A Hyperlink control will render as an <a></a> tag no matter what settings you use. You can create a custom CSS class to make the link look like a normal label. Alternatively, you can build a custom control that inherits from System.Web.UI.WebControls.HyperLink and override the Render method:

protected override void Render(HtmlTextWriter writer)
{
    if (Enabled)
    {
        base.Render(writer);
    }
    else
    {
        writer.RenderBeginTag(HtmlTextWriterTag.Span);
        writer.Write(Text);
        writer.RenderEndTag();
    }
}

Could be a bit overkill, but it will work for your requirements. Plus I find it useful to have base asp:CustomHyperlink / asp:CustomButton classes in my project files. It makes it easier to define custom behaviour throughout the project.

A: If you merely want to modify the appearance of the link so as not to look like a link, you can set the CSS for your "a" tags to not have underlines:

a:link, a:visited, a:hover, a:active { text-decoration: none; }

Though I would advise against including "hover" here, because then there will be no other way to know that it's a link. Anyway, I agree with @pilif here; this looks like a usability disaster waiting to happen.

A: This should work:

onclick="return false;"

If not, you could also change the href to "#". Making it appear like the rest of the text is CSS; e.g. displaying an arrow instead of a hand is:

a.dummy { cursor:default; }

A: If you mean to stop the link from activating, the usual way is to link to "javascript:void(0);", i.e.:

<a href="javascript:void(0);">foo</a>

A: Thanks for all the input. It looks like the short answer is 'No you can't (well, not nicely anyway)', so I'll have to do it the hard way and add the conditional code.

A: If you are using databinding in asp.net, handle the DataBinding event and just don't set the NavigateUrl if that user is disabled.

A: Have you tried just not setting the NavigateUrl property?
If this isn't set, it may just render as a span.

A: .fusion-link-wrapper { pointer-events: none; }

A: Another solution is to apply this class to your hyperlink:

.avoid-clicks { pointer-events: none; }

A: A CSS solution to make <a> tags with no href (which is what asp:HyperLink will produce if NavigateURL is bound to null/empty string) visually indistinguishable from the surrounding text:

a:not([href]),
a:not([href]):hover,
a:not([href]):active,
a:not([href]):visited {
    text-decoration: inherit !important;
    color: inherit !important;
    cursor: inherit !important;
}

Unfortunately, this won't tell screen readers not to read it out as a link - though without an href it's not clickable, so I'm hoping it already won't be identified as such. I haven't had the chance to test it, though. (If you also want to do the same to links with href="", as well as those missing an href, you would need to add pointer-events: none as well, since otherwise an empty href will reload the page. This definitely leaves screen readers still treating it as a link, though.) In the OP's use case, if you still have the href being populated from the database but have a boolean value that indicates whether the link should be a 'real' link or not, you should use that to disable the link, and add a:disabled to the selector list above. Then disabled links will also look like plain text rather than a greyed-out link. (Disabling the link will also provide that information to screen readers, so that's better than just using pointer-events: none and a class.)

A note of caution - if you add these sorts of rules globally rather than for a specific page, remember to watch out for cases where an <a> tag has no (valid) href but you are providing a click handler - you still need those to look/act like links.
Q: Practical solution to center vertically and horizontally in HTML that works in FF, IE6 and IE7 What can be a practical solution to center content vertically and horizontally in HTML that works in Firefox, IE6 and IE7? Some details:

* I am looking for a solution for the entire page.
* You need to specify only the width of the element to be centered. The height of the element is not known at design time.
* When minimizing the window, scrolling should appear only when all white space is gone. In other words, the width of the screen should be divided as:

leftSpace width = (screenWidth - widthOfCenteredElement) / 2
centeredElement width = widthOfCenteredElement
rightSpace width = (screenWidth - widthOfCenteredElement) / 2

And the same for the height:

topSpace height = (screenHeight - heightOfCenteredElement) / 2
centeredElement height = heightOfCenteredElement
bottomSpace height = (screenHeight - heightOfCenteredElement) / 2

* By practical I mean that use of tables is OK. I intend to use this layout mostly for special pages like login. So CSS purity is not so important here, while following standards is desirable for future compatibility.
A: From http://www.webmonkey.com/codelibrary/Center_a_DIV

#horizon {
    text-align: center;
    position: absolute;
    top: 50%;
    left: 0px;
    width: 100%;
    height: 1px;
    overflow: visible;
    display: block;
}
#content {
    width: 250px;
    height: 70px;
    margin-left: -125px;
    position: absolute;
    top: -35px;
    left: 50%;
    visibility: visible;
}

<div id="horizon">
    <div id="content">
        <p>This text is<br><emphasis>DEAD CENTRE</emphasis><br>and stays there!</p>
    </div><!-- closes content-->
</div><!-- closes horizon-->

A:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>Centering</title>
    <style type="text/css" media="screen">
        body, html { height: 100%; padding: 0px; margin: 0px; }
        #outer { width: 100%; height: 100%; overflow: visible; padding: 0px; margin: 0px; }
        #middle { vertical-align: middle; }
        #centered { width: 280px; margin-left: auto; margin-right: auto; text-align: center; }
    </style>
</head>
<body>
    <table id="outer" cellpadding="0" cellspacing="0">
        <tr><td id="middle">
            <div id="centered" style="border: 1px solid green;">
                Centered content
            </div>
        </td></tr>
    </table>
</body>
</html>

The solution from community.contractwebdevelopment.com is also a good one. And if you know the height of the content that needs to be centered, it seems to be better.

A: For horizontal centering:

<style>
    body { text-align: center; }
    .MainBlockElement { text-align: left; margin: 0 auto; }
</style>

You need the text-align:center on the body to work around a bug in IE's rendering; the block itself resets text-align and is centered by margin: 0 auto in standards-compliant browsers.

A: For this issue you can use this style:

#yourElement { margin: 0 auto; min-width: 500px; }

A: Is this what you are trying to accomplish? If not, please explain what is different from the image below.
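A: For what it's worth, on modern browsers (not IE6/IE7, which the question targets) this whole problem reduces to a few flexbox rules - a minimal sketch, assuming a single fixed-width child of body:

```css
/* Center a fixed-width, unknown-height element both ways.
   Flexbox is not available in IE6/IE7. */
html, body { height: 100%; margin: 0; }
body {
    display: flex;
    align-items: center;     /* vertical centering   */
    justify-content: center; /* horizontal centering */
}
#centered { width: 280px; }
```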
Q: How to make cruisecontrol only build one project at a time I have just set up cruise control.net on our build server, and I am unable to find a setting to tell it to only build one project at a time. Any ideas? A: If you are using CruiseControl 1.3 or later you can use an Integration Queue These allow you to control which projects can be built concurrently and which must be serialized.
Q: Oracle Application Server SSL Certificates preventing connection to Apache service We've got an Apache instance deployed through Oracle Application Server. It's currently installed with the default wallet, and, the self-signed certificate. We've got a GEOTRUST certificiate, imported the Trusted Roots and imported the new Cert to the Wallet Manager. We've then updated the SSL properties of the VHOST and the HTTP_SERVER through Enterprise Manager. Things have restarted fine, however, we now can't connect to the Apache service, we're getting the error: call to NZ function nzos_Handshake failed This seems to point to a problem with the root certs, but in my opinion, these are registered with the Wallet correctly. Anyone seen this before and have some pointers? A: Had the same problem with an Apache/JBoss configuration look at your httpd.conf, you should have three lines: SSLCertificateFile /usr/local/ssl/crt/public.crt SSLCertificateKeyFile /usr/local/ssl/private/private.key SSLCACertificateFile /usr/local/ssl/crt/EV_intermediate.crt The last line is needed because the Geotrust root CA is not known by most older and some newer servers (you would not have to do this with a verisign or instantssl cert, for instance).
Q: What is a good repository layout for releases and projects in Subversion? We have the standard Subversion trunk/branches/tags layout. We have several branches for medium- and long-term projects, but none so far for a release. This is approaching fast. Should we:

* Mix release branches and project branches together?
* Create a releases folder? If so, is there a better name than releases?
* Create a projects folder and move the current branches there? If so, is there a better name than projects? I've seen "sandbox" and "spike" in other repositories.
* Something else altogether?

A: I recommend the following layout, for two reasons:

* all stuff related to a given project is within the same part of the tree, which makes it easier for people to grasp
* permissions handling may be easier this way

And by the way: it's a good idea to have few repositories instead of many, because change history is normally better preserved that way (change history is lost if you move files between repositories, unless you take special and somewhat complicated action). In most setups, there should only be two repositories: the main repository, and a sandbox repository for people experimenting with Subversion.

project1
    trunk
    branches
        1.0
        1.1
        joes-experimental-feature-branch
    tags
        1.0.0
        1.0.1
        1.0.2
project2
    trunk
    branches
        1.0
        1.1
    tags
        1.0.0
        1.0.1
        1.0.2

A: Taking off from what others have said, we have a rather rigid structure of progression from alpha, to beta, to production. The alpha code is whatever the head of the trunk is, and is kept stable for the most part, but not always. When we are ready to release, we create a "release branch" that effectively freezes that code, and only bug fixes are applied to it. (These are ported back into the trunk.) Also, tags are periodically made as release candidates, and these are the beta versions. Once the code moves to production, the release branch is kept open for support, security fixes, and bug-fixing, and minor versions are tagged and released off of this.
Once a particular version is no longer supported, we close the branch. This allows us to have a clear distinction of which bugs were fixed for which releases, and then they get merged into the trunk. Major, long-term, or massive changes that will break the system for long periods of time are also given their own branch, but these are much shorter-lived, and don't have the word "release" in them.

A: When we want to prepare for the release of, say, version 3.1, we create a branches/3.1-Release branch, and merge individual commits from trunk as we see fit (our release branches usually only receive the most critical fixes from the main development branch). Typically, this release branch lives through the alpha- and beta-testing phases, and is closed when the next release is on the threshold. What you can also do, once your DVDs are pressed or your download package uploaded, is to tag the release branch as released, so you can easily rebuild from exactly the same revision if you need to later. Carl

A: We already use tags, although we have the one-big-project structure rather than the many-small-projects you have outlined. In this case, we need to tag, e.g. 1.0.0, but also branch, e.g. 1.0. My concern is mixing project branches and release branches together, e.g.

branches
    this-project
    that-project
    the-other-project
    1.0
    1.1
    1.2
tags
    1.0.0
    1.0.1
    1.1.0
    1.2.0
    1.2.1

A: Where I work, we have "temp-branches" and "release-branches" directories instead of just "branches". So in your case project branches would go under temp-branches, and release branches (created at time of release, of course) would go under release-branches.

A: Another important consideration is when to branch and when to close a branch -- which depends on your release schedule, but also on how long it takes you to test and release.
In my experience this takes a lot of management in terms of ensuring everyone in the team knows what the plan is and when to use what, all of which was helped by documenting it all in a release wiki. Not quite the answer you are looking for, but I think once you have the structure sorted - for which you've had plenty of good suggestions already - the next challenge is keeping the team informed and on track.

A: Releases are the same as tags. Have you got multiple projects inside your trunk? In that case, I would copy the same folder structure inside tags:

trunk
    fooapp
        stuff...
    barapp
        stuff...
tags
    fooapp
        1.0.0
        1.0.1
    barapp
        1.0.0
Q: CSS2 Attribute Selectors with Regex CSS Attribute selectors allow the selection of elements based on attribute values. Unfortunately, I've not used them in years (mainly because they're not supported by all modern browsers). However, I remember distinctly that I was able to use them to adorn all external links with an icon, by using a code similar to the following: a[href=http] { background: url(external-uri); padding-left: 12px; } The above code doesn't work. My question is: How does it work? How do I select all <a> tags whose href attribute starts with "http"? The official CSS spec (linked above) doesn't even mention that this is possible. But I do remember doing this. (Note: The obvious solution would be to use class attributes for distinction. I want to avoid this because I have little influence of the way the HTML code is built. All I can edit is the CSS code.) A: Note that, in Antti's example you'd probably want to add a catch for any absolute links you may have to your own domain, which you probably don't want to flag as 'external', e.g.: a[href^="http://your.domain.com"] { background: none; padding: 0; } And you'd want this after the previous declaration. You might also want to include the full protocol prefix, just in case you have a local document named "http-info.html" that you wish to link to, e.g.: a[href^="http://"] { background: url(external-uri); padding-left: 12px; } Note that, in both these slightly-more complex cases, you should quote the value. These work, for me, in IE7. A: As for CSS 2.1, see http://www.w3.org/TR/CSS21/selector.html#attribute-selectors Executive summary: Attribute selectors may match in four ways: [att] Match when the element sets the "att" attribute, whatever the value of the attribute. [att=val] Match when the element's "att" attribute value is exactly "val". [att~=val] Match when the element's "att" attribute value is a space-separated list of "words", one of which is exactly "val". 
If this selector is used, the words in the value must not contain spaces (since they are separated by spaces). [att|=val] Match when the element's "att" attribute value is a hyphen-separated list of "words", beginning with "val". The match always starts at the beginning of the attribute value. This is primarily intended to allow language subcode matches (e.g., the "lang" attribute in HTML) as described in RFC 3066 ([RFC3066]). CSS3 also defines a list of selectors, but the compatibility varies hugely. There's also a nifty test suite that shows which selectors work in your browser. As for your example,

a[href^=http] { background: url(external-uri); padding-left: 12px; }

should do the trick. Unfortunately, it is not supported by IE.

A: Antti's answer is sufficient for selecting anchors whose hrefs begin with http, and gives a perfect rundown of the available CSS2 regex-esque attribute selectors, like so:

Attribute selectors may match in four ways:
[att] Match when the element sets the "att" attribute, whatever the value of the attribute.
[att=val] Match when the element's "att" attribute value is exactly "val".
[att~=val] Match when the element's "att" attribute value is a space-separated list of "words", one of which is exactly "val". If this selector is used, the words in the value must not contain spaces (since they are separated by spaces).
[att|=val] Match when the element's "att" attribute value is a hyphen-separated list of "words", beginning with "val". The match always starts at the beginning of the attribute value. This is primarily intended to allow language subcode matches (e.g., the "lang" attribute in HTML) as described in RFC 3066 ([RFC3066]).
However, here is the appropriate, UPDATED way to select all outgoing links using the new CSS3 :not pseudo class selector as well as the new *= substring syntax to make sure it disregards any internal links that may still begin with http: a[href^=http]:not([href*="yourdomain.com"]) { background: url(external-uri); padding-left: 12px; } *Note that this is unsupported by IE, up to at least IE8. Thanks, IE, you're the best :P
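A: For anyone who wants to see the four CSS2 matching rules above spelled out precisely, here is a small illustrative sketch of just the matching semantics (not a real selector engine - the function and the attribute dict are made up for this demo):

```python
def matches(attrs, att, op=None, val=None):
    """Check one CSS2 attribute selector against a dict of attributes.

    op is None for [att], '=' for [att=val],
    '~=' for [att~=val], and '|=' for [att|=val].
    """
    if att not in attrs:
        return False  # every form requires the attribute to be present
    if op is None:
        return True   # [att]: any value matches
    actual = attrs[att]
    if op == '=':
        return actual == val              # exact value match
    if op == '~=':
        return val in actual.split()      # word in a space-separated list
    if op == '|=':
        # exactly "val", or "val" immediately followed by a hyphen (language subcodes)
        return actual == val or actual.startswith(val + '-')
    raise ValueError('unknown operator: ' + op)

a = {'href': 'http://example.com', 'class': 'external nav', 'lang': 'en-US'}
print(matches(a, 'href'))                # True  -> [href]
print(matches(a, 'class', '~=', 'nav'))  # True  -> [class~=nav]
print(matches(a, 'lang', '|=', 'en'))    # True  -> [lang|=en]
print(matches(a, 'href', '=', 'http'))   # False -> [href=http] needs the exact value
```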
Q: Deploy MySQL Server + DB with .Net application Hi all, we have a .Net 2.0 application which has a MySQL backend. We want to be able to deploy MySQL and the DB when we install the application, and I'm trying to find the best solution. The current setup is to copy the required files to a folder on the local machine and then perform "NET START" commands to install and start the MySQL service. Then we restore a backup of the DB to this newly created MySQL instance using bat files. It's not an ideal solution at all, and I'm trying to come up with something more robust. The issues are user rights on Vista, and all sorts of small things around installing and starting the service. It's far too fragile to be reliable, or at least it appears that way when I am testing it. This is a client/server type setup so we only need to install one server per office, but I want to make sure it's as hassle-free as possible and with as few screens as possible. How would you do it?

A: Not sure where you're at in the project, but if it's a simple and small database you might consider converting it to SQLite. It's not ideal for client/server operations, but if it's low volume/transactions it might work.

A: Use an installer with a worked-out script. Any installer like Wise, InstallShield, InnoSetup, etc. will probably do.

A: We took a different approach to this. We made MySQL xcopy-able by writing a wrapper to generate the configuration file (my.ini) before calling MySQL (to correctly set up the base path and so on). Then we wrote another service, installed using the standard setup. This service takes care of starting MySQL and other required background programs (in our case Apache) for us. Since MySQL is deployed by us, we wanted to have full control over it.

A: With a client/server setup, you're allowed to require that whoever installs the server installs it as an admin. That should solve most of your problems. Again - that's the server. The clients might be another story.
Q: How to lock compiled Java classes to prevent decompilation? How do I lock compiled Java classes to prevent decompilation? I know this must be a very well discussed topic on the Internet, but I could not come to any conclusion after referring to those discussions. Many people do suggest obfuscators, but they just rename classes, methods, and fields with tough-to-remember character sequences - but what about sensitive constant values? For example, suppose you have developed an encryption and decryption component based on a password-based encryption technique. Now in this case, any average Java person can use JAD to decompile the class file and easily retrieve the password value (defined as a constant), as well as the salt, and in turn can decrypt the data by writing a small independent program! Or should such sensitive components be built in native code (for example, VC++) and called via JNI?

A: @jatanp: or better yet, they can decompile, remove the licensing code, and recompile. With Java, I don't really think there is a proper, hack-proof solution to this problem. Not even an evil little dongle could prevent this with Java. My own biz managers worry about this, and I think too much. But then again, we sell our application into large corporates who tend to abide by licensing conditions - generally a safe environment thanks to the bean counters and lawyers. The act of decompiling itself can be illegal if your license is written correctly. So, I have to ask, do you really need hardened protection like you are seeking for your application? What does your customer base look like? (Corporates? Or the teenage gamer masses, where this would be more of an issue?)

A: If you're looking for a licensing solution, you can check out the TrueLicense API. It's based on the use of asymmetric keys. However, it doesn't mean your application cannot be cracked. Every application can be cracked with enough effort. What is really important is, as Stu answered, figuring out how strong a protection you need.
A: You can use byte-code encryption with no fear. The fact is that the cited above paper “Cracking Java byte-code encryption” contains a logic fallacy. The main claim of the paper is before running all classes must be decrypted and passed to the ClassLoader.defineClass(...) method. But this is not true. The assumption missed here is provided that they are running in authentic, or standard, java run-time environment. Nothing can oblige the protected java app not only to launch these classes but even decrypt and pass them to ClassLoader. In other words, if you are in standard JRE you can't intercept defineClass(...) method because the standard java has no API for this purpose, and if you use modified JRE with patched ClassLoader or any other “hacker trick” you can't do it because protected java app will not work at all, and therefore you will have nothing to intercept. And absolutely doesn't matter which “patch finder” is used or which trick is used by hackers. These technical details are a quite different story. A: I don't think there exists any effective offline antipiracy method. The videogame industry has tried to find that many times and their programs has always been cracked. The only solution is that the program must be run online connected with your servers, so that you can verify the lincense key, and that there is only one active connecion by the licensee at a time. This is how World of Warcraft or Diablo works. Even tough there are private servers developed for them to bypass the security. Having said that, I don't believe that mid/large corporations use illegal copied software, because the cost of the license for them is minimal (perhaps, I don't know how much you are goig to charge for your program) compared to the cost of a trial version. A: As long as they have access to both the encrypted data and the software that decrypts it, there is basically no way you can make this completely secure. 
Ways this has been solved before is to use some form of external black box to handle encryption/decryption, like dongles, remote authentication servers, etc. But even then, given that the user has full access to their own system, this only makes things difficult, not impossible -unless you can tie your product directly to the functionality stored in the "black box", as, say, online gaming servers. A: Disclaimer: I am not a security expert. This sounds like a bad idea: You are letting someone encrypt stuff with a 'hidden' key that you give him. I don't think this can be made secure. Maybe asymmetrical keys could work: * *deploy an encrypted license with a public key to decrypt *let the customer create a new license and send it to you for encryption *send a new license back to the client. I'm not sure, but I believe the client can actually encrypt the license key with the public key you gave him. You can then decrypt it with your private key and re-encrypt as well. You could keep a separate public/private key pair per customer to make sure you actually are getting stuff from the right customer - now you are responsible for the keys... A: No matter what you do, it can be 'decompiled'. Heck, you can just disassemble it. Or look at a memory dump to find your constants. You see, the computer needs to know them, so your code will need to too. What to do about this? Try not to ship the key as a hardcoded constant in your code: Keep it as a per-user setting. Make the user responsible for looking after that key. A: Some of the more advanced Java bytecode obfuscators do much more than just class name mangling. Zelix KlassMaster, for example, can also scramble your code flow in a way that makes it really hard to follow and works as an excellent code optimizer... Also many of the obfuscators are also able to scramble your string constants and remove unused code. 
Another possible solution (not necessarily excluding the obfuscation) is to use encrypted JAR files and a custom classloader that does the decryption (preferably using native runtime library). Third (and possibly offering the strongest protection) is to use native ahead of time compilers like GCC or Excelsior JET, for example, that compile your Java code directly to a platform specific native binary. In any case You've got to remember that as the saying goes in Estonian "Locks are for animals". Meaning that every bit of code is available (loaded into memory) during the runtime and given enough skill, determination and motivation, people can and will decompile, unscramble and hack your code... Your job is simply to make the process as uncomfortable as you can and still keep the thing working... A: Q: If I encrypt my .class files and use a custom classloader to load and decrypt them on the fly, will this prevent decompilation? A: The problem of preventing Java byte-code decompilation is almost as old the language itself. Despite a range of obfuscation tools available on the market, novice Java programmers continue to think of new and clever ways to protect their intellectual property. In this Java Q&A installment, I dispel some myths around an idea frequently rehashed in discussion forums. The extreme ease with which Java .class files can be reconstructed into Java sources that closely resemble the originals has a lot to do with Java byte-code design goals and trade-offs. Among other things, Java byte code was designed for compactness, platform independence, network mobility, and ease of analysis by byte-code interpreters and JIT (just-in-time)/HotSpot dynamic compilers. Arguably, the compiled .class files express the programmer's intent so clearly they could be easier to analyze than the original source code. Several things can be done, if not to prevent decompilation completely, at least to make it more difficult. 
For example, as a post-compilation step you could massage the .class data to make the byte code either harder to read when decompiled or harder to decompile into valid Java code (or both). Techniques like performing extreme method name overloading work well for the former, and manipulating control flow to create control structures not possible to represent through Java syntax work well for the latter. The more successful commercial obfuscators use a mix of these and other techniques. Unfortunately, both approaches must actually change the code the JVM will run, and many users are afraid (rightfully so) that this transformation may add new bugs to their applications. Furthermore, method and field renaming can cause reflection calls to stop working. Changing actual class and package names can break several other Java APIs (JNDI (Java Naming and Directory Interface), URL providers, etc.). In addition to altered names, if the association between class byte-code offsets and source line numbers is altered, recovering the original exception stack traces could become difficult. Then there is the option of obfuscating the original Java source code. But fundamentally this causes a similar set of problems. Encrypt, not obfuscate? Perhaps the above has made you think, "Well, what if instead of manipulating byte code I encrypt all my classes after compilation and decrypt them on the fly inside the JVM (which can be done with a custom classloader)? Then the JVM executes my original byte code and yet there is nothing to decompile or reverse engineer, right?" Unfortunately, you would be wrong, both in thinking that you were the first to come up with this idea and in thinking that it actually works. And the reason has nothing to do with the strength of your encryption scheme.
{ "language": "en", "url": "https://stackoverflow.com/questions/49379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "104" }
Q: What are the preferred conventions in naming attributes, methods and classes in different languages? Are the naming conventions similar in different languages? If not, what are the differences? A: Each language has a specific style. At least one. Each project adopts a specific style. At least, they should. This can sometimes be a different style to the canonical style your language uses - probably based on the dev leaders preferences. Which style to use? If your language ships with a good standard library, try to adopt the conventions in that library. If your language has a canonical book (The C Programming language, The Camel Book, Programming Ruby etc.) use that. Sometimes the language designers (C#, Java spring to mind) actually write a bunch of guidelines. Use those, especially if the community adopts them too. If you use multiple languages remember to stay flexible and adjust your preferred coding style to the language you are using - when coding in Python use a different style to coding in C# etc. A: As others have said, things vary a lot, but here's a rough overview of the most commonly used naming conventions in various languages: lowercase, lowercase_with_underscores: Commonly used for local variables and function names (typical C syntax). UPPERCASE, UPPERCASE_WITH_UNDERSCORES: Commonly used for constants and variables that never change. Some (older) languages like BASIC also have a convention for using all upper case for all variable names. CamelCase, javaCamelCase: Typically used for function names and variable names. Some use it only for functions and combine it with lowercase or lowercase_with_underscores for variables. When javaCamelCase is used, it's typically used both for functions and variables. This syntax is also quite common for external APIs, since this is how the Win32 and Java APIs do it. (Even if a library uses a different convention internally they typically export with the (java)CamelCase syntax for function names.) 
prefix_CamelCase, prefix_lowercase, prefix_lowercase_with_underscores: Commonly used in languages that don't support namespaces (i.e. C). The prefix will usually denote the library or module to which the function or variable belongs. Usually reserved to global variables and global functions. Prefix can also be in UPPERCASE. Some conventions use lowercase prefix for internal functions and variables and UPPERCASE prefix for exported ones. There are of course many other ways to name things, but most conventions are based on one of the ones mentioned above or a variety on those. BTW: I forgot to mention Hungarian notation on purpose. A: Of course there are some common guidelines but there are also differences due to difference in language syntax\design. For .NET (C#, VB, etc) I would recommend following resource: * *Framework Design Guidelines - definitive book on .NET coding guidelines including naming conventions *Naming Guidelines - guidelines from Microsoft *General Naming Conventions - another set of MS guidelines (C#, C++, VB) A: G'day, One of the best recommendations I can make is to read the relevant section(s) of Steve McConnell's Code Complete (Amazon Link). He has an excellent discussion on naming techniques. HTH cheers, Rob A: I think that most naming conventions will vary but the developer, for example I name variables like: mulitwordVarName, however some of the dev I have worked with used something like mulitword_var_name or multiwordvarname or aj5g54ag or... I think it really depends on your preference. A: Years ago an wise old programmer taught me the evils of Hungarian notation, this was a real legacy system, Microsoft adopted it some what in the Windows SDK, and later in MFC. It was designed around loose typed languages like C, and not for strong typed languages like C++. At the time I was programming Windows 3.0 using Borland's Turbo Pascal 1.0 for Windows, which later became Delphi. 
Anyway long story short at this time the team I was working on developed our own standards very simple and applicable to almost all languages, based on simple prefixes - * *a - argument *l - local *m - member *g - global The emphasis here is on scope, rely on the compiler to check type, all you need care about is scope, where the data lives. This has many advantages over nasty old Hungarian notation in that if you change the type of something via refactoring you don't have to search and replace all instances of it. Nearly 16 years later I still promote the use of this practice, and have found it applicable to almost every language I have developed in.
{ "language": "en", "url": "https://stackoverflow.com/questions/49382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Creating batch jobs in PowerShell Imagine a DOS style .cmd file which is used to launch interdependent windowed applications in the right order. Example: 1) Launch a server application by calling an exe with parameters. 2) Wait for the server to become initialized (or a fixed amount of time). 3) Launch client application by calling an exe with parameters. What is the simplest way of accomplishing this kind of batch job in PowerShell? A: Remember that PowerShell can access .Net objects. The Start-Sleep as suggested by Blair Conrad can be replaced by a call to WaitForInputIdle of the server process so you know when the server is ready before starting the client. $sp = get-process server-application $sp.WaitForInputIdle() You could also use Process.Start to start the process and have it return the exact Process. Then you don't need the get-process. $sp = [diagnostics.process]::start("server-application", "params") $sp.WaitForInputIdle() $cp = [diagnostics.process]::start("client-application", "params") A: @Lars Truijens suggested Remember that PowerShell can access .Net objects. The Start-Sleep as suggested by Blair Conrad can be replaced by a call to WaitForInputIdle of the server process so you know when the server is ready before starting the client. This is more elegant than sleeping for a fixed (or supplied via parameter) amount of time. However, WaitForInputIdle applies only to processes with a user interface and, therefore, a message loop. so this may not work, depending on the characteristics of launch-server-application. However, as Lars pointed out to me, the question referred to a windowed application (which I missed when I read the question), so his solution is probably best. A: To wait 10 seconds between launching the applications, try launch-server-application serverparam1 serverparam2 ... Start-Sleep -s 10 launch-client-application clientparam1 clientparam2 clientparam3 ... 
If you want to create a script and have the arguments passed in, create a file called runlinkedapps.ps1 (or whatever) with these contents: launch-server-application $args[0] $args[1] Start-Sleep -s 10 launch-client-application $args[2] $args[3] $args[4] Or however you choose to distribute the server and client parameters on the line you use to run runlinkedapps.ps1. If you want, you could even pass in the delay here, instead of hardcoding 10. Remember, your .ps1 file need to be on your Path, or you'll have to specify its location when you run it. (Oh, and I've assumed that launch-server-application and launch-client-application are on your Path - if not, you'll need to specify the full path to them as well.)
{ "language": "en", "url": "https://stackoverflow.com/questions/49402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do you parse a filename in bash? I have a filename in a format like: system-source-yyyymmdd.dat I'd like to be able to parse out the different bits of the filename using the "-" as a delimiter. A: Depending on your needs, awk is more flexible than cut. A first teaser: # echo "system-source-yyyymmdd.dat" \ |awk -F- '{printf "System: %s\nSource: %s\nYear: %s\nMonth: %s\nDay: %s\n", $1,$2,substr($3,1,4),substr($3,5,2),substr($3,7,2)}' System: system Source: source Year: yyyy Month: mm Day: dd Problem is that describing awk as 'more flexible' is certainly like calling the iPhone an enhanced cell phone ;-) A: Use the cut command. e.g. echo "system-source-yyyymmdd.dat" | cut -f1 -d'-' will extract the first bit. Change the value of the -f parameter to get the appropriate parts. Here's a guide on the Cut command. A: You can use the cut command to get at each of the 3 'fields', e.g.: $ echo "system-source-yyyymmdd.dat" | cut -d'-' -f2 source "-d" specifies the delimiter, "-f" specifies the number of the field you require A: A nice and elegant (in my mind :-) using only built-ins is to put it into an array var='system-source-yyyymmdd.dat' parts=(${var//-/ }) Then, you can find the parts in the array... echo ${parts[0]} ==> system echo ${parts[1]} ==> source echo ${parts[2]} ==> yyyymmdd.dat Caveat: this will not work if the filename contains "strange" characters such as space, or, heaven forbids, quotes, backquotes... A: Another method is to use the shell's internal parsing tools, which avoids the cost of creating child processes: oIFS=$IFS IFS=- file="system-source-yyyymmdd.dat" set $file IFS=$oIFS echo "Source is $2" A: The simplest (and IMO best way) to do this is simply to use read: $ IFS=-. read system source date ext << EOF > foo-bar-yyyymmdd.dat > EOF $ echo $system foo $ echo $source $date $ext bar yyyymmdd dat There are many variations on that theme, many of which are shell dependent: bash$ IFS=-. 
read system source date ext <<< foo-bar-yyyymmdd.dat echo "$name" | { IFS=-. read system source date ext echo In all shells, the variables are set here...; } echo but only in some shells do they retain their value here
{ "language": "en", "url": "https://stackoverflow.com/questions/49403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: SQL Query to get latest price I have a table containing prices for a lot of different "things" in a MS SQL 2005 table. There are hundreds of records per thing per day and the different things gets price updates at different times. ID uniqueidentifier not null, ThingID int NOT NULL, PriceDateTime datetime NOT NULL, Price decimal(18,4) NOT NULL I need to get today's latest prices for a group of things. The below query works but I'm getting hundreds of rows back and I have to loop trough them and only extract the latest one per ThingID. How can I (e.g. via a GROUP BY) say that I want the latest one per ThingID? Or will I have to use subqueries? SELECT * FROM Thing WHERE ThingID IN (1,2,3,4,5,6) AND PriceDate > cast( convert(varchar(20), getdate(), 106) as DateTime) UPDATE: In an attempt to hide complexity I put the ID column in a an int. In real life it is GUID (and not the sequential kind). I have updated the table def above to use uniqueidentifier. A: I would try something like the following subquery and forget about changing your data structures. SELECT * FROM Thing WHERE (ThingID, PriceDateTime) IN (SELECT ThingID, max(PriceDateTime ) FROM Thing WHERE ThingID IN (1,2,3,4) GROUP BY ThingID ) Edit the above is ANSI SQL and i'm now guessing having more than one column in a subquery doesnt work for T SQL. Marius, I can't test the following but try; SELECT p.* FROM Thing p, (SELECT ThingID, max(PriceDateTime ) FROM Thing WHERE ThingID IN (1,2,3,4) GROUP BY ThingID) m WHERE p.ThingId = m.ThingId and p.PriceDateTime = m.PriceDateTime another option might be to change the date to a string and concatenate with the id so you have only one column. This would be slightly nasty though. 
A: I think the only solution with your table structure is to work with a subquery: SELECT * FROM Thing WHERE ID IN (SELECT max(ID) FROM Thing WHERE ThingID IN (1,2,3,4) GROUP BY ThingID) (Given the highest ID also means the newest price) However I suggest you add a "IsCurrent" column that is 0 if it's not the latest price or 1 if it is the latest. This will add the possible risk of inconsistent data, but it will speed up the whole process a lot when the table gets bigger (if it is in an index). Then all you need to do is to... SELECT * FROM Thing WHERE ThingID IN (1,2,3,4) AND IsCurrent = 1 UPDATE Okay, Markus updated the question to show that ID is a uniqueid, not an int. That makes writing the query even more complex. SELECT T.* FROM Thing T JOIN (SELECT ThingID, max(PriceDateTime) WHERE ThingID IN (1,2,3,4) GROUP BY ThingID) X ON X.ThingID = T.ThingID AND X.PriceDateTime = T.PriceDateTime WHERE ThingID IN (1,2,3,4) I'd really suggest using either a "IsCurrent" column or go with the other suggestion found in the answers and use "current price" table and a separate "price history" table (which would ultimately be the fastest, because it keeps the price table itself small). (I know that the ThingID at the bottom is redundant. Just try if it is faster with or without that "WHERE". Not sure which version will be faster after the optimizer did its work.) A: If the subquery route was too slow I would look at treating your price updates as an audit log and maintaining a ThingPrice table - perhaps as a trigger on the price updates table: ThingID int not null, UpdateID int not null, PriceDateTime datetime not null, Price decimal(18,4) not null The primary key would just be ThingID and "UpdateID" is the "ID" in your original table. A: Since you are using SQL Server 2005, you can use the new (CROSS|OUTTER) APPLY clause. The APPLY clause let's you join a table with a table valued function. 
To solve the problem, first define a table valued function to retrieve the top n rows from Thing for a specific id, date ordered: CREATE FUNCTION dbo.fn_GetTopThings(@ThingID AS GUID, @n AS INT) RETURNS TABLE AS RETURN SELECT TOP(@n) * FROM Things WHERE ThingID= @ThingID ORDER BY PriceDateTime DESC GO and then use the function to retrieve the top 1 records in a query: SELECT * FROM Thing t CROSS APPLY dbo.fn_GetTopThings(t.ThingID, 1) WHERE t.ThingID IN (1,2,3,4,5,6) The magic here is done by the APPLY clause which applies the function to every row in the left result set then joins with the result set returned by the function then retuns the final result set. (Note: to do a left join like apply, use OUTTER APPLY which returns all rows from the left side, while CROSS APPLY returns only the rows that have a match in the right side) BlaM: Because I can't post comments yet( due to low rept points) not even to my own answers ^^, I'll answer in the body of the message: -the APPLY clause even, if it uses table valued functions it is optimized internally by SQL Server in such a way that it doesn't call the function for every row in the left result set, but instead takes the inner sql from the function and converts it into a join clause with the rest of the query, so the performance is equivalent or even better (if the plan is chosen right by sql server and further optimizations can be done) than the performance of a query using subqueries), and in my personal experience APPLY has no performance issues when the database is properly indexed and statistics are up to date (just like a normal query with subqueries behaves in such conditions) A: It depends on the nature of how your data will be used, but if the old price data will not be used nearly as regularly as the current price data, there may be an argument here for a price history table. This way, non-current data may be archived off to the price history table (probably by triggers) as the new prices come in. 
As I say, depending on your access model, this could be an option. A: I'm converting the uniqueidentifier to a binary so that I can get a MAX of it. This should make sure that you won't get duplicates from multiple records with identical ThingIDs and PriceDateTimes: SELECT * FROM Thing WHERE CONVERT(BINARY(16),Thing.ID) IN ( SELECT MAX(CONVERT(BINARY(16),Thing.ID)) FROM Thing INNER JOIN (SELECT ThingID, MAX(PriceDateTime) LatestPriceDateTime FROM Thing WHERE PriceDateTime >= CAST(FLOOR(CAST(GETDATE() AS FLOAT)) AS DATETIME) GROUP BY ThingID) LatestPrices ON Thing.ThingID = LatestPrices.ThingID AND Thing.PriceDateTime = LatestPrices.LatestPriceDateTime GROUP BY Thing.ThingID, Thing.PriceDateTime ) AND Thing.ThingID IN (1,2,3,4,5,6) A: Since ID is not sequential, I assume you have a unique index on ThingID and PriceDateTime so only one price can be the most recent for a given item. This query will get all of the items in the list IF they were priced today. If you remove the where clause for PriceDate you will get the latest price regardless of date. 
SELECT * FROM Thing thi WHERE thi.ThingID IN (1,2,3,4,5,6) AND thi.PriceDateTime = (SELECT MAX(maxThi.PriceDateTime) FROM Thing maxThi WHERE maxThi.PriceDateTime >= CAST( CONVERT(varchar(20), GETDATE(), 106) AS DateTime) AND maxThi.ThingID = thi.ThingID) Note that I changed ">" to ">=" since you could have a price right at the start of a day A: It must work without using a global PK column (for complex primary keys for example): SELECT t1.*, t2.PriceDateTime AS bigger FROM Prices t1 LEFT JOIN Prices t2 ON t1.ThingID = t2.ThingID AND t1.PriceDateTime < t2.PriceDateTime HAVING t2.PriceDateTime IS NULL A: Try this (provided you only need the latest price, not the identifier or datetime of that price) SELECT ThingID, (SELECT TOP 1 Price FROM Thing WHERE ThingID = T.ThingID ORDER BY PriceDateTime DESC) Price FROM Thing T WHERE ThingID IN (1,2,3,4) AND DATEDIFF(D, PriceDateTime, GETDATE()) = 0 GROUP BY ThingID A: maybe i missunderstood the taks but what about a: SELECT ID, ThingID, max(PriceDateTime), Price FROM Thing GROUP BY ThingID
{ "language": "en", "url": "https://stackoverflow.com/questions/49404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: GSM Modems, PCs, SMS and Telephone Calls What all would be the requirements for the following scenario: A GSM modem connected to a PC running a web based (ASP.NET) application. In the application the user selects a phone number from a list of phone nos. When he clicks on a button named the PC should call the selected phone number. When the person on the phone responds he should be able to have a conversation with the PC user. Similarly there should be a facility to send SMS. Now I don't want any code listings. I just need to know what would be the requirements besides asp.net, database for storing phone numbers, and GSM modem. Any help in terms of reference websites would be highly appreciated. A: I'll pick some points of your very broad question and answer them. Note that there are other points where others may be of more help... First, a GSM modem is probably not the way you'd want to go as they usually don't allow for concurrency. So unless you just want one user at the time to use your service, you'd probably need another solution. Also, think about cost issues - at least where I live, providing such a service would be prohibitively expensive using a normal GSM modem and a normal contract - but this is drifting into off-topicness. The next issue will be to get voice data from the client to the server (which will relay it to the phone system - using whatever practical means). Pure browser based functionality won't be of much help, so you would absolutely need something plugin based. Flash may work, seeing they provide access to the microphone, but please don't ask me about the details. I've never done anything like this. Also, privacy would be a concern. While GSM data is encrypted, the path between client and server is not per default. And even if you use SSL, you'd have to convince your users trusting you that you don't record all the conversations going on, but this too is more of a political than a coding issue. Finally, you'd have to think of bandwidth. 
Voice uses a lot of it and also it requires low latency. If you use a SIP trunk, you'll need the bandwidth twice per user: Once from and to your client and once from and to the SIP trunk. Calculate with 10-64 KBit/s per user and channel. A feasible architecture would probably be to use a SIP trunk (they optimize on using VoIP as much as possible and thus can provide much lower rates than a GSM provider generally does. Also, they allow for concurrency), an Asterisk box (http://www.asterisk.org - a free PBX), some custom made flash client and a custom made SIP client on the server. All in all, this is quite the undertaking :-) A: You'll need a GSM library. There appear to be a few of these. e.g. http://www.wirelessdevstudio.com/eng/ A: Have a look at the Ekiga project at http://www.Ekiga.org. This provides audio and or video chat between users using the standard SIP (Session Initiation Protocol) over the Internet. Like most SIP clients, it can also be used to make calls to and receive calls from the telephone network, but this requires an account with a commercial service provider (there are many, and fees are quite reasonable compared to normal phone line accounts). Ekiga uses the open source OPAL library to implement SIP communications (OPAL has support for several VoIP and video over IP standards - see www.opalvoip.org for more info).
{ "language": "en", "url": "https://stackoverflow.com/questions/49416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to shortcut time before data after first hit in browser We have a couple of large solutions, each with about 40 individual projects (class libraries and nested websites). It takes about 2 minutes to do a full rebuild all. A couple of specs on the system: * *Visual Studio 2005, C# *Primary project is a Web Application Project *40 projects in total (4 Web projects) *We use the internal VS webserver *We extensively use user controls, right down to a user control which contains a textbox *We have a couple of inline web projects that allows us to do partial deployment *About 120 user controls *About 200.000 lines of code (incl. HTML) *We use Source Safe What I would like to know is how to bring down the time it takes when hitting the site with a browser for the first time. And, I'm not talking about post full deployment - I'm talking about doing a small change in the code, build, refresh browser. This first hit, takes about 1 minute 15 seconds before data gets back. To speed things up, I have experimented a little with Ram disks, specifically changing the <compilation> attribute in web.config, setting the tempDirectory to my Ram disk. This does speed things up a bit. Interestingly though, this totally removed ALL IO access during first hit from the browser. Remarks We never do a full compile during development, only partial. For example, the class library being worked on is compiled and then the main site is compiled which then copies the binaries from the class library to the bin directory. I understand that the asp.net engine needs to parse all the ascx/aspx files after critical files have been changed (bin dir for example) but, what I don't understand is why it needs to do that when only one library dll has been modified. So, anybody know of a way to either: Sub segment the solutions to provide faster first hit or fine tune settings in config files or something. 
And, again: I'm only talking about development, NOT production deployment, so doing the pre-built compile option is not applicable. Thanks, Ruvan A: Wow, 120 user controls, some of which only contain a single TextBox? This sounds like a lot of code. When you change a library project, all projects that depend on that library project then need to be recompiled, and also every project that depends on them, etc, all the way up the stack. You know you've only made a 1 line change to a function which doesn't affect all of your user controls, but the compiler doesn't know that. And as you're probably aware ASPX and ASCX files are only compiled when the web application is first hit. A possible speed omprovement might be gained by changing your ASCX files into Composite Controls instead, inside another Library Project. These would then be compiled at compile time (if you will) rather than at web application load time.
{ "language": "en", "url": "https://stackoverflow.com/questions/49421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you manage your app when the database goes offline? Take a .Net Winforms App.. mix in a flakey wireless network connection, stir with a few users who like to simply pull the blue plug out occasionally and for good measure, add a Systems Admin that decides to reboot the SQL server box without warning now and again just to keep everyone on their toes. What are the suggestions and strategies for handling this sort of scenario in respect to : * *Error Handling - for example, do you wrap every call to the server with a Try/Catch or do you rely on some form of Generic Error Handling to manage this? If so what does it look like? *Application Management - for example, do you disable the app and not allow users to interact with it until a connection is detected again? What would you do? A: Answer depends on type of your application. There are applications that can work offline - Microsoft Outlook for example. Such applications doesn't treat connectivity exceptions as critical, they can save your work locally and synchronize it later. Another applications such as online games will treat communication problem as critical exception and will quit if connection gets lost. As of error handling, I think that you should control exceptions on all layers rather than relying on some general exception handling piece of code. Your business layer should understand what happened on lower layer (data access layer in our case) and respond correspondingly. Connection lost should not be treated as unexpected exception in my opinion. For good practices of exceptions management I recommend to take a look at Exception Handling Application Block. Concerning application behavior, you should answer yourself on the following question "Does my application have business value for customer in disconnected state?" In many cases it would be beneficial to end user to be able to continues their work in disconnected state. However such behavior tremendously hard to implement. 
Especially for your scenario, Microsoft developed the Disconnected Service Agent Application Block. A: I have not touched WinForms and .NET for years now, so I cannot give you any technical details, but here is the larger-picture answer: First and foremost - do not bind your form data directly to a database. Create a separate data/model layer that you bind your form widgets to. From there on, you have several options available to you depending on the level of stability and availability you need to provide. Probably one of the simplest solutions here would be to just enable/disable the parts of the application that need to interact with the database, based on the connection state. The next level of protection would be to cache part of the data model locally and, while the database connection is down, use the local cache for viewing and disable any functions that require an explicit database connection. Probably the trickiest thing (that may also provide the most stable experience to the end user) is to replicate the database locally and use some sort of synchronization scheme to keep your copy of the database in sync with the remote db. A: We have this in our Main() method which traps all unhandled exceptions... Application.ThreadException += new System.Threading.ThreadExceptionEventHandler(UnhandledExceptionCatcher); Thread.GetDomain().UnhandledException += new UnhandledExceptionEventHandler(Application_UnhandledException); and then Application_UnhandledException and UnhandledExceptionCatcher display user-friendly messages. In addition, the application emails data such as the stack trace to the developers, which can be very useful. It depends on the app of course, but for the kind of failures that you describe I would close the app down. A: This may be a little too much support for the offline scenario, but have you considered the "Microsoft Sync Framework"? 
Included in the framework is the "Sync Services for ADO.NET 2.0", which allows your application to hit a local SQL Server CE instance. This can easily be synchronized with a central SQL Server via a variety of methods. This framework handles the permanent offline scenario, and as I said, it may not be appropriate for your specific requirements, however it will give your application solid offline support. A: In our application, we give the user the option to connect to another server, e.g., if the database connection fails, a dialog box displays saying that the server is unavailable, and they could input another IP address to try. A: Use something like SQLite to store data offline until a connection is available. Update: I believe SQLite is the back end for Google Gears, which from my understanding does what you're looking for in web apps... though I don't know if it can be used in a non web context.
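The advice above, to treat a lost connection as an expected condition and decide at the business layer how to respond, can be sketched as a small retry wrapper around data-access calls. This is only an illustration: the class name, retry count, and backoff policy are invented here, not taken from any of the answers.

```csharp
using System;
using System.Threading;

// Hypothetical helper: retries a data-access call a few times before
// surfacing the failure, so a brief network drop never reaches the UI.
public static class ResilientDb
{
    public static T Execute<T>(Func<T> dbCall, int maxAttempts)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return dbCall();
            }
            catch (Exception ex)
            {
                // Only swallow connectivity-style failures, and only
                // while we still have attempts left.
                bool transient = ex is TimeoutException
                              || ex is System.Data.Common.DbException;
                if (!transient || attempt >= maxAttempts)
                    throw;
                Thread.Sleep(TimeSpan.FromSeconds(attempt)); // simple backoff
            }
        }
    }
}
```

A call site might look like `var orders = ResilientDb.Execute(() => LoadOrders(), 3);` (where `LoadOrders` is a placeholder for your own data-access method). If every attempt fails, the exception propagates and the application can switch into whatever disconnected mode it supports.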
{ "language": "en", "url": "https://stackoverflow.com/questions/49426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Animation Extender Problems I have just started working with the AnimationExtender. I am using it to show a new div with a list gathered from a database when a button is pressed. The problem is the button needs to do a postback to get this list, as I don't want to make the call to the database unless it's needed. The postback however stops the animation mid-flow and resets it. The button is within an update panel. Ideally I would want the animation to start once the postback is complete and the list has been gathered. I have looked into using the ScriptManager to detect when the postback is complete and have made some progress. I have added two javascript methods to the page. function linkPostback() { var prm = Sys.WebForms.PageRequestManager.getInstance(); prm.add_endRequest(playAnimation) } function playAnimation() { var onclkBehavior = $find("ctl00_btnOpenList").get_OnClickBehavior().get_animation(); onclkBehavior.play(); } And I've changed the btnOpenList.OnClientClick="linkPostback();" This almost solves the problem. I still get some animation stutter. The animation starts to play before the postback and then plays properly after the postback. Using onclkBehavior.pause() has no effect. I can get around this by setting AnimationExtender.Enabled = false and setting it to true in the button's postback event. This however works only once, as the AnimationExtender is now enabled again. I have also tried disabling the AnimationExtender via javascript but this has no effect. Is there a way of playing the animations only via javascript calls? I need to decouple the automatic link to the button's click event so I can control when the animation is fired. Hope that makes sense. 
Thanks DG A: The flow you are seeing is something like this: * *Click on button *AnimationExtender catches action and call clickOn callback *linkPostback starts asynchronous request for page and then returns flow to AnimationExtender *Animation begins *pageRequest returns and calls playAnimation, which starts the animation again I think there are at least two ways around this issue. It seems you have almost all the javascript you need, you just need to work around AnimationExtender starting the animation on a click. Option 1: Hide the AnimationExtender button and add a new button of your own that plays the animation. This should be as simple as setting the AE button's style to "display: none;" and having your own button call linkPostback(). Option 2: Re-disable the Animation Extender once the animation has finished with. This should work, as long as the playAnimation call is blocking, which it probably is: function linkPostback() { var prm = Sys.WebForms.PageRequestManager.getInstance(); prm.add_endRequest(playAnimation) } function playAnimation() { AnimationExtender.Enabled = true; var onclkBehavior = $find("ctl00_btnOpenList").get_OnClickBehavior().get_animation(); onclkBehavior.play(); AnimationExtender.Enabled = false; } As an aside, it seems your general approach may face issues if there is a delay in receiving the pageRequest. It may be a bit weird to click a button and several seconds later have the animation happen. It may be better to either pre-load the data, or to pre-fill the div with some "Loading..." thing, make it about the right size, and then populate the actual contents when it arrives. A: With help from the answer given the final solution was as follows: Add another button and hide it. <input id="btnHdn" runat="server" type="button" value="button" style="display:none;" /> Point the AnimationExtender to the hidden button so the firing of the unwanted click event never happens. 
<cc1:AnimationExtender ID="aniExt" runat="server" TargetControlID="btnHdn"> Wire the javascript to the button you want to trigger the animation after the postback is complete. <asp:ImageButton ID="btnShowList" runat="server" OnClick="btnShowList_Click" OnClientClick="linkPostback();" /> Add the required Javascript to the page. function linkPostback() { var prm = Sys.WebForms.PageRequestManager.getInstance(); prm.add_endRequest(playOpenAnimation) } function playOpenAnimation() { var onclkBehavior = $find("ctl00_aniExt").get_OnClickBehavior().get_animation(); onclkBehavior.play(); var prm = Sys.WebForms.PageRequestManager.getInstance(); prm.remove_endRequest(playOpenAnimation) }
{ "language": "en", "url": "https://stackoverflow.com/questions/49430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Trigger UpdatePanel on mouse over (as tooltip) I need to display additional information, like a tooltip, but it's a lot of info (about 500-600 characters) on the items in a RadioButtonList. I now trigger the update of an UpdatePanel when the user selects an item in the RadioButtonList, using OnSelectedIndexChanged and AutoPostBack. What I would like to do is trigger this on mouse hover (i.e. the user holds the mouse a second or two over the item) rather than on mouse click, but I cannot find a way to do this. A: You could try setting an AsyncPostBackTrigger on the UpdatePanel to watch the value of a hidden field. Then in the javascript mouse-over event, increment the hidden value. This would fire the AsyncPostBackTrigger, updating the UpdatePanel.
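The hidden-field idea can be sketched in client script like this: start a timer on mouseover, and only if the pointer is still there after the delay, bump the hidden field and force the async postback. The element ID "hdnHoverCount", the one-second delay, and the use of __doPostBack are all assumptions for illustration, not part of the answer above.

```javascript
var hoverTimer = null;
var HOVER_DELAY_MS = 1000; // "a second or two" of hovering

function incrementHiddenField(field) {
  // Changing this hidden field's value is what the UpdatePanel's
  // AsyncPostBackTrigger watches for.
  var current = parseInt(field.value, 10) || 0;
  field.value = String(current + 1);
  return field.value;
}

function onItemMouseOver() {
  // Wait before posting back, so a mouse just passing over does nothing.
  hoverTimer = setTimeout(function () {
    var field = document.getElementById("hdnHoverCount");
    incrementHiddenField(field);
    __doPostBack("hdnHoverCount", ""); // fires the async update
  }, HOVER_DELAY_MS);
}

function onItemMouseOut() {
  clearTimeout(hoverTimer); // cancel if the user moves away early
}
```

Each radio-button item would get onmouseover="onItemMouseOver()" and onmouseout="onItemMouseOut()" attributes, which in WebForms you would typically add from code-behind when the list items are data-bound.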
{ "language": "en", "url": "https://stackoverflow.com/questions/49431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: When to create Interface Builder plug-in for custom view? When do you recommend integrating a custom view into Interface Builder with a plug-in? When skimming through Apple's Interface Builder Plug-In Programming Guide I found: * *Are your custom objects going to be used by only one application? *Do your custom objects rely on state information found only in your application? *Would it be problematic to encapsulate your custom views in a standalone library or framework? If you answered yes to any of the preceding questions, your objects may not be good candidates for a plug-in. That answers some of my questions, but I would still like your thoughts on when it's a good idea. What are the benefits and how big of a time investment is it? A: It's perfectly reasonable to push the view and controller classes that your application uses out into a separate framework — embedded in your application wrapper — for which you also produce an Interface Builder plug-in. Among other reasons, classes that are commonly used in your application can then be configured at their point of use in Interface Builder, rather than in scattered -awakeFromNib implementations. It's also the only way you can have your objects expose bindings that can be set up in Interface Builder. It's a bit of coding, but for view and controller classes that are used in more than one place, and which require additional set-up before they're actually used, you'll probably save a bunch of time overall. And your experience developing with your own controller and view classes will be like developing with Cocoa's. A: I think the Apple guidelines sum it up nicely. If you're writing a control that will be used in multiple applications and is completely generic, then creating a custom object is a good idea. You'll be able to visualize the look and set properties directly from Interface Builder. 
If your control is limited to one application, or is tightly coupled with your data, then moving it into a custom object really won't buy you much. It's not difficult to create a custom view, there are a lot of easy to follow guides out there.
{ "language": "en", "url": "https://stackoverflow.com/questions/49442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How to make a WCF service STA (single-threaded) I have a WCF service which includes UI components, which forces me to be in STA mode. How do I set the service behaviour to STA mode? The service uses a reference to a WPF DLL file which opens a UI window (used as a view port) for picture analysis. When the service tries to create an instance of that item (which inherits from Window) it throws an exception: The calling thread must be an STA A: I'm doing something similar to you. My solution was to route all calls through an STA thread queue. I used a threadsafe collection from the new parallel framework to queue up Actions I wanted to run on an STA thread. I then had X number of STA threads that continually checked the queue for new actions to execute. A: I would investigate using the [STAThread] attribute to switch the threading model, e.g. [STAThread] static void Main() { ServiceBase[] ServicesToRun; ServicesToRun = new ServiceBase[] { new Host() }; ServiceBase.Run(ServicesToRun); } Description of the STAThread attribute But I'm confused why you're using UI components in a web service at all. Can you explain a bit more about why you're trying to do this? A: The ServiceBehavior attribute allows you to specify behavior. In your case, for a single thread, you would use the following: [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single, InstanceContextMode = InstanceContextMode.PerCall)] public class Service : IService { } You might want to read about the different InstanceContextModes to help you choose how you want the service to behave. 
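The first answer's "STA thread queue" can be sketched roughly as below. The class and member names are invented, and this shows a single STA worker rather than a pool; real code would also need error handling.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Sketch of an STA work queue: a BlockingCollection of Actions drained
// by one dedicated thread that has been put into the STA apartment.
public class StaWorkQueue : IDisposable
{
    private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();
    private readonly Thread _worker;

    public StaWorkQueue()
    {
        _worker = new Thread(() =>
        {
            // GetConsumingEnumerable blocks until items arrive and
            // completes when CompleteAdding is called.
            foreach (var action in _queue.GetConsumingEnumerable())
                action();
        });
        _worker.SetApartmentState(ApartmentState.STA); // UI components require STA
        _worker.IsBackground = true;
        _worker.Start();
    }

    // Queue a call and block until the STA thread has run it.
    public void Invoke(Action action)
    {
        using (var done = new ManualResetEventSlim())
        {
            _queue.Add(() => { action(); done.Set(); });
            done.Wait();
        }
    }

    public void Dispose()
    {
        _queue.CompleteAdding();
        _worker.Join();
    }
}
```

A service operation could then do something like staQueue.Invoke(() => new AnalysisWindow().Show()), where AnalysisWindow stands for whatever your WPF assembly exposes. Note that a real WPF window also needs a message loop (Dispatcher) running on that thread; this sketch only shows the queuing part.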
You also need to add a new service behavior to your app.config (or edit an existing one): <behavior name="wsSingleThreadServiceBehavior"> <serviceThrottling maxConcurrentCalls="1"/> </behavior> and in the service configuration in the same app.config, set behaviorConfiguration as follows: <service behaviorConfiguration="wsSingleThreadServiceBehavior" name="IService"> <endpoint address="" binding="wsHttpBinding" bindingConfiguration="wsEndpointBinding" name="ConveyancingEndpoint" contract="IService" /> </service> Hope this saves you some time
{ "language": "en", "url": "https://stackoverflow.com/questions/49445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }