Q: What languages support covariance on inherited methods' return types? I originally asked this question, but in finding an answer, discovered that my original problem was a lack of support in C# for covariance on inherited methods' return types. After discovering that, I became curious as to what languages do support this feature. I will accept the answer of whoever can name the most. EDIT: John Millikin correctly pointed out that lots of dynamic languages support this. To clarify: I am only looking for static/strongly typed languages. A: * *C++ *Java *REALbasic *Eiffel *Sather *Modula-3 A: Any dynamic languages, of course -- Python, Ruby, Smalltalk, Javascript, etc. A: Basically what I'm asking is what languages support what I'm trying to do here. Does C# let you specify different data types for the get() and set() methods? If not, I would split them into actual Leg get_leg() and set_leg(DogLeg) functions. Otherwise one of two things will happen: 1) overspecification of get_leg() 2) underspecification of set_leg(). A: C++ supports covariant return types. A: Java added support for this in 1.5. It will not compile in earlier versions. A: As pointed out by Ivan Hamilton and Mat Noguchi, C++ supports the feature. But note that covariant return types are broken for template classes which inherit from some base in MSVC 7.X through 9.X (and probably 6 also). You get error C2555. A: but I think thats what I'm asking for..or is it? I frankly don't know what you're asking. Java apparently has the same support for return-type covariance as C#, so if whatever you're looking for is lacking in C#, it's lacking in Java also.
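To make the asker's Dog/Leg example concrete, here is a minimal C# sketch of the split-accessor workaround described above. The class bodies are invented for illustration; only the Dog, Leg, DogLeg and get_leg/set_leg names are adapted from the question. C# at the time of the question could not narrow an override's return type, so the derived class exposes an extra, more specific accessor instead:

public class Leg { }
public class DogLeg : Leg { }

public class Animal
{
    protected Leg leg;

    // The inherited contract stays at the broad type.
    public virtual Leg GetLeg() { return leg; }
    public virtual void SetLeg(Leg value) { leg = value; }
}

public class Dog : Animal
{
    // Extra accessor with the narrow type, since the override itself
    // cannot change its return type to DogLeg.
    public DogLeg GetDogLeg() { return (DogLeg)leg; }

    // Overload (not override) that only accepts a DogLeg.
    public void SetLeg(DogLeg value) { leg = value; }
}

Callers that hold a Dog get the precise type back, while code written against Animal keeps working; the cast inside GetDogLeg is the price of the missing covariance the question is about.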
{ "language": "en", "url": "https://stackoverflow.com/questions/47009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: "Phantom" directories in an SVN repository I've somehow managed to get an SVN repository into a bad state. I've moved a directory and now I can't commit it in its new location. As far as svn status is concerned, the directory is unknown (the name of the directory is type). $ svn status ? type When I try to add the directory, the server says it already exists. $ svn add type svn: warning: 'type' is already under version control If I try to update the directory, it's gone again. $ svn update type svn: '.' is not under version control If I try to commit it, the server complains that it's old parent directory no longer exists. $ svn commit type -m "Moving type" svn: Commit failed (details follow): svn: '/prior/trunk/src/nyu/prior/cvc3/theorem_prover/expression' path not found To add to the mystery, the contents of the directory are marked as modified. $ svn status type A + type M + type/IntegerType.java M + type/BooleanType.java M + type/Type.java M + type/RationalRangeType.java M + type/RationalType.java M + type/IntegerRangeType.java If I try to update from within the directory, I get this. $ cd type $ svn update svn: Two top-level reports with no target Committing from within the directory gives the same path not found error as above. What's going on and how do I fix it? EDIT: @Rob Oxspring caught me out: I got too aggressive moving things around in Eclipse. UPDATE: I'm accepting @Rob Oxspring's answer of "don't do that/just start over" and taking his advice. I'd still be interested if anybody could tell me: (a) what the above error messages mean precisely and (b) how to actually fix the problem. A: The easy way to fix many SVN errors is to move the whole directory away via the OS, update to get another clean copy of it and then merge in anything you changed with some other tool, WinMerge or the like. After that, you can do whatever you were trying to do, but do it correctly :). A: Did you start by just copying/moving the directory with OS commands, or did you start with SVN stuff? If you just copied the files via the OS, you would still have hidden folders containing SVN information pointing to the old location. A: It looks to me like type was created by some Subversion-aware copy command, then moved into the current directory using a Subversion-unaware copy. In my experience, this sort of thing typically occurs when package refactoring operations have been chained together in Eclipse without commits in between. Typically, Subversion doesn't handle it well when you copy/move a locally copied/moved file or folder, although I think version 1.5 may handle it better. To avoid this in the future, commit between such steps. If you'd like to hide the intervening commits then I'd recommend doing the multi-step refactoring on a branch and then merging the changes back into the mainline in that single commit you were after. If it's not too much work, then I'd recommend getting back to a clean working copy and redoing your changes, committing after each step. If you're happy to lose the history, i.e. allowing the new IntegerType.java to not be linked at all to the old IntegerType.java, then you could take the approach suggested by BCS: * *Move your changed files into some temporary location, stripping out any .svn directories *Update your working copy into a clean working state *Copy your changes back to where you want them to be *Commit the resulting working copy A: I'd suggest, to delete (outside subversionso with rm or similar), the directory above test, and then running svn update there. 
That is, if you don't want to get a whole new working copy as others have suggested, which might be the safest approach. A: I just had this same problem. I fixed it by deleting the .svn folder from the affected folder. A: My experience is that sometimes the local copy gets out of sync with the repository. I usually solve this by going up the local directory tree, starting from the directory with the problem and try to do do cleanup and update with each step. A: What happened is that you made a checkout of a folder, then locally 'svn add'ed and/or modified something to/in this folder, but before you have committed your changes, the original folder was moved (or deleted) from the SVN repository. All you need to do is to switch your current checkout to the new location in the SVN repository. So, supposing you have a checkout of the foo folder from path/to/folder1/foo, and that foo was moved to path/to/foo, you just need to run: $ svn switch path/to/foo That's it... ;-)
{ "language": "en", "url": "https://stackoverflow.com/questions/47022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Recursive function for an xml file (hierarchial data) I have an XML file in the following format: <categories> <category id="1"></category> <category id="2"> <category id="3"></category> <category id="4"> <category id="5"></category> </category> </category> </categories> Can anyone please give me some direction on how I might traverse the file using C#? A: First off, System.XML provides some excellent ways to work with XML. I'm assuming you loaded your XML into an XMLDocument, doing so allows you to use XPath Selectors, or just walk through the DOM. Something like this would walk from whatever element back up to the top using recursion: public XmlNode WalkToTopNode (XmlNode CurrentNode) { if (CurrentNode.ParentNode == null) return CurrentNode; else return WalkToTopNode(CurrentNode.ParentNode); } Using recursion to find a node by ID could be done somewhat like this (Note, I typed this in the textbox, it may be wrong): public XmlNode GetElementById (string id, XmlNode node) { if (node.Attributes["id"] != null && node.Attributes["id"].InnerText == id) { return node; } else { foreach (XmlNode childNode in node.Children) { return GetElementById(id, childNode); } } return null; } However, if you are using recursion when there are so many better node traversal ways built in to System.XML, then perhaps its time to rethink your strategy.
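As that answerer admits, the snippet was typed untested: it returns the result of the first recursive call whether or not anything matched, and XmlNode exposes ChildNodes rather than Children. A corrected sketch of the same recursive lookup (the method and class names here are made up):

using System.Xml;

public static class CategoryTree
{
    // Depth-first search for a <category> node carrying the requested id attribute.
    // Returns null when no node in the subtree matches.
    public static XmlNode FindById(XmlNode node, string id)
    {
        if (node.Attributes != null &&
            node.Attributes["id"] != null &&
            node.Attributes["id"].Value == id)
        {
            return node;
        }

        foreach (XmlNode child in node.ChildNodes)
        {
            XmlNode found = FindById(child, id);
            if (found != null)   // only stop once a real match comes back
            {
                return found;
            }
        }

        return null;
    }
}

Loading the sample file into an XmlDocument and calling CategoryTree.FindById(doc.DocumentElement, "5") walks categories 1, 2, 3 and 4 before returning the innermost node.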
{ "language": "en", "url": "https://stackoverflow.com/questions/47026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Sprintf equivalent in Java Printf got added to Java with the 1.5 release but I can't seem to find how to send the output to a string rather than a file (which is what sprintf does in C). Does anyone know how to do this? A: Since Java 13 you have formatted 1 method on String, which was added along with text blocks as a preview feature 2. You can use it instead of String.format() Assertions.assertEquals( "%s %d %.3f".formatted("foo", 123, 7.89), "foo 123 7.890" ); A: // Store the formatted string in 'result' String result = String.format("%4d", i * j); // Write the result to standard output System.out.println( result ); See format and its syntax A: Strings are immutable types. You cannot modify them, only return new string instances. Because of that, formatting with an instance method makes little sense, as it would have to be called like: String formatted = "%s: %s".format(key, value); The original Java authors (and .NET authors) decided that a static method made more sense in this situation, as you are not modifying the target, but instead calling a format method and passing in an input string. Here is an example of why format() would be dumb as an instance method. In .NET (and probably in Java), Replace() is an instance method. You can do this: "I Like Wine".Replace("Wine","Beer"); However, nothing happens, because strings are immutable. Replace() tries to return a new string, but it is assigned to nothing. This causes lots of common rookie mistakes like: inputText.Replace(" ", "%20"); Again, nothing happens, instead you have to do: inputText = inputText.Replace(" ","%20"); Now, if you understand that strings are immutable, that makes perfect sense. If you don't, then you are just confused. The proper place for Replace() would be where format() is, as a static method of String: inputText = String.Replace(inputText, " ", "%20"); Now there is no question as to what's going on. The real question is, why did the authors of these frameworks decide that one should be an instance method, and the other static? In my opinion, both are more elegantly expressed as static methods. Regardless of your opinion, the truth is that you are less prone to make a mistake using the static version, and the code is easier to understand (No Hidden Gotchas). Of course there are some methods that are perfect as instance methods, take String.Length() int length = "123".Length(); In this situation, it's obvious we are not trying to modify "123", we are just inspecting it, and returning its length. This is a perfect candidate for an instance method. My simple rules for Instance Methods on Immutable Objects: * *If you need to return a new instance of the same type, use a static method. *Otherwise, use an instance method. A: Both solutions workto simulate printf, but in a different way. For instance, to convert a value to a hex string, you have the 2 following solutions: * *with format(), closest to sprintf(): final static String HexChars = "0123456789abcdef"; public static String getHexQuad(long v) { String ret; if(v > 0xffff) ret = getHexQuad(v >> 16); else ret = ""; ret += String.format("%c%c%c%c", HexChars.charAt((int) ((v >> 12) & 0x0f)), HexChars.charAt((int) ((v >> 8) & 0x0f)), HexChars.charAt((int) ((v >> 4) & 0x0f)), HexChars.charAt((int) ( v & 0x0f))); return ret; } *with replace(char oldchar , char newchar), somewhat faster but pretty limited: ... ret += "ABCD". replace('A', HexChars.charAt((int) ((v >> 12) & 0x0f))). replace('B', HexChars.charAt((int) ((v >> 8) & 0x0f))). 
replace('C', HexChars.charAt((int) ((v >> 4) & 0x0f))). replace('D', HexChars.charAt((int) ( v & 0x0f))); ... *There is a third solution consisting of just adding the char to ret one by one (char are numbers that add to each other!) such as in: ... ret += HexChars.charAt((int) ((v >> 12) & 0x0f))); ret += HexChars.charAt((int) ((v >> 8) & 0x0f))); ... ...but that'd be really ugly. A: You can do a printf with a PrintStream to anything that is an OutputStream. Somehow like this, printing into a string stream: PrintStream ps = new PrintStream(baos); ps.printf("there is a %s from %d %s", "hello", 3, "friends"); System.out.println(baos.toString()); This outputs following text there is a hello from 3 friends The string stream can be created like this ByteArrayOutputStream: ByteArrayOutputStream baos = new ByteArrayOutputStream(); You can accumulate many formats: PrintStream ps = new PrintStream(baos); ps.printf("there is a %s from %d %s ", "hello", 3, "friends"); ps.printf("there are %d % from a %", 2, "kisses", "girl"); System.out.println(baos.toString()); This outputs there is a hello from 3 friends there are 2 kisses from a girl Call reset on ByteArrayOutputStream to generate a clean new string ps.printf("there is a %s from %d %s", "flip", 5, "haters"); baos.reset(); //need reset to write new string ps.printf("there are %d % from a %", 2, "kisses", "girl"); System.out.println(baos.toString()); The output will be there are 2 kisses from a girl
{ "language": "en", "url": "https://stackoverflow.com/questions/47045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "306" }
Q: What code would I use to convert a SQL like expression to a regex on the fly? I'm looking to convert a SQL like statement on the fly to the equivalent regex i.e. LIKE '%this%' LIKE 'Sm_th' LIKE '[C-P]arsen' What's the best approach to doing this? P.S. I'm looking to do this on the .Net Framework (C#). A: The following Regex converts an SQL like pattern into a Regex pattern with the help of a MatchEvaluator delegate. It correctly handles square bracket blocks and escapes special Regex characters. string regexPattern = "^" + Regex.Replace( likePattern, @"[%_]|\[[^]]*\]|[^%_[]+", match => { if (match.Value == "%") { return ".*"; } if (match.Value == "_") { return "."; } if (match.Value.StartsWith("[") && match.Value.EndsWith("]")) { return match.Value; } return Regex.Escape(match.Value); }) + "$"; A: In addition to @Nathan-Baulch's solution you can use the code below to also handle the case where a custom escape character has been defined using the LIKE '!%' ESCAPE '!' syntax. public Regex ConvertSqlLikeToDotNetRegex(string regex, char? likeEscape = null) { var pattern = string.Format(@" {0}[%_]| [%_]| \[[^]]*\]| [^%_[{0}]+ ", likeEscape); var regexPattern = Regex.Replace( regex, pattern, ConvertWildcardsAndEscapedCharacters, RegexOptions.IgnorePatternWhitespace); regexPattern = "^" + regexPattern + "$"; return new Regex(regexPattern, !m_CaseSensitive ? RegexOptions.IgnoreCase : RegexOptions.None); } private string ConvertWildcardsAndEscapedCharacters(Match match) { // Wildcards switch (match.Value) { case "%": return ".*"; case "_": return "."; } // Remove SQL defined escape characters from C# regex if (StartsWithEscapeCharacter(match.Value, likeEscape)) { return match.Value.Remove(0, 1); } // Pass anything contained in []s straight through // (These have the same behaviour in SQL LIKE Regex and C# Regex) if (StartsAndEndsWithSquareBrackets(match.Value)) { return match.Value; } return Regex.Escape(match.Value); } private static bool StartsAndEndsWithSquareBrackets(string text) { return text.StartsWith("[", StringComparison.Ordinal) && text.EndsWith("]", StringComparison.Ordinal); } private bool StartsWithEscapeCharacter(string text, char? likeEscape) { return (likeEscape != null) && text.StartsWith(likeEscape.ToString(), StringComparison.Ordinal); } A: From your example above, I would attack it like this (I speak in general terms because I do not know C#): Break it apart by LIKE '...', put the ... pieces into an array. Replace unescaped % signs by .*, underscores by ., and in this case the [C-P]arsen translates directly into regex. Join the array pieces back together with a pipe, and wrap the result in parentheses, and standard regex bits. The result would be: /^(.*this.*|Sm.th|[C-P]arsen)$/ The most important thing here is to be wary of all the ways you can escape data, and which wildcards translate to which regular expressions. % becomes .* _ becomes . A: I found a Perl module called Regexp::Wildcards. You can try to port it or try Perl.NET. I have a feeling you can write something up yourself too.
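A small usage sketch wrapping Nathan Baulch's conversion above in a helper; the LikeMatcher name is invented here, and case-insensitive matching is assumed to mimic a typical SQL Server collation:

using System.Text.RegularExpressions;

public static class LikeMatcher
{
    public static bool IsMatch(string input, string likePattern)
    {
        // Same translation as above: % -> .*, _ -> ., [...] passed through,
        // everything else escaped for the regex engine.
        string regexPattern = "^" + Regex.Replace(
            likePattern,
            @"[%_]|\[[^]]*\]|[^%_[]+",
            match =>
            {
                if (match.Value == "%") return ".*";
                if (match.Value == "_") return ".";
                if (match.Value.StartsWith("[") && match.Value.EndsWith("]")) return match.Value;
                return Regex.Escape(match.Value);
            }) + "$";

        return Regex.IsMatch(input, regexPattern, RegexOptions.IgnoreCase);
    }
}

// The patterns from the question:
// LikeMatcher.IsMatch("is this ok", "%this%");   // true
// LikeMatcher.IsMatch("Smith", "Sm_th");         // true
// LikeMatcher.IsMatch("Larsen", "[C-P]arsen");   // true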
{ "language": "en", "url": "https://stackoverflow.com/questions/47052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: "Getting" the path in Linux I am writing a C program in Linux. Commands like execv() require a path in the form of a C string. Is there a command that will return the current path in the form of a C-style string? A: The path argument to execv() is the path to the application you wish to execute, not the current working directory (which will be returned by getcwd()) or the shell search path (which will be returned by getenv("PATH")). Depending on what you're doing, you may get more mileage out of the system() function in the C library rather than the lower-level exec() family. A: This is not ANSI C: #include <unistd.h> char path[MAXPATHLEN]; getcwd(path, MAXPATHLEN); printf("pwd -> %s\n", path); A: getcwd(): SYNOPSIS #include <unistd.h> char *getcwd(char *buf, size_t size); DESCRIPTION The getcwd() function shall place an absolute pathname of the current working directory in the array pointed to by buf, and return buf. The pathname copied to the array shall contain no components that are symbolic links. The size argument is the size in bytes of the character array pointed to by the buf argument. If buf is a null pointer, the behavior of getcwd() is unspecified. RETURN VALUE Upon successful completion, getcwd() shall return the buf argument. Otherwise, getcwd() shall return a null pointer and set errno to indicate the error. The contents of the array pointed to by buf are then undefined.... A: If the path can be a relative path, you should be able to use '.' or './' as the path. I'm not sure if it will work, but you could try it. A: You need to grab the environment variable PWD (present working directory). I'm not sure what the library it is in, but it is a standard Linux header. I was thinking of getenv() which would help if you also need to run system commands and need the various bin paths located in PATH.
{ "language": "en", "url": "https://stackoverflow.com/questions/47066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Javascript tree views that support multiple item drag/drop We are currently using the ExtJS tree view in an application - a requirement has arisen requiring a user to select multiple nodes (which the tree view currently supports through a pluggable selection model) - but you cannot then drag the multiple selections to another part of the tree. Does anyone know of an ajax control (commercial or non-commercial) that supports multiple-selection drag / drop - or an example of enabling this functionality in ExtJS? A: Check out this post in the ExtJS forum that details how you can enable multi-select in a Javascript tree. http://extjs.com/forum/showthread.php?t=28115 A: Got the same issue. I just found the solution: ..new Ext.tree.TreePanel({ ... selModel : new Ext.tree.MultiSelectionModel() ... }) A: Ha, I just asked the exact same question in their forum... I want to achieve the goal without using the Custom User-Extension, though. http://www.extjs.com/forum/showthread.php?97463-3.2.0-Treepanel-MultiSelectionModel-and-Dragdrop&p=459962 Just for reference. Regards, Fabian Loibl
{ "language": "en", "url": "https://stackoverflow.com/questions/47078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do you prevent SQL injection in LAMP applications? Here are a few possibilities to get the conversation started: * *Escape all input upon initialization. *Escape each value, preferably when generating the SQL. The first solution is suboptimal, because you then need to unescape each value if you want to use it in anything other than SQL, like outputting it on a web page. The second solution makes much more sense, but manually escaping each value is a pain. I'm aware of prepared statements, however I find MySQLi cumbersome. Also, separating the query from the inputs concerns me, because although it's crucial to get the order correct it's easy to make a mistake, and thus write the wrong data to the wrong fields. A: as @Rob Walker states, parameterized queries are your best bet. If you're using the latest and greatest PHP, I'd highly recommend taking a look at PDO (PHP Data Objects). This is a native database abstraction library that has support for a wide range of databases (including MySQL of course) as well as prepared statements with named parameters. A: Prepared statements are the best answer. You have testing because you can make mistakes! See this question. A: I would go with using prepared statements. If you want to use prepared statements, you probably want to check out the PDO functions for PHP. Not only does this let you easily run prepared statements, it also lets you be a little more database agnostic by not calling functions that begin with mysql_, mysqli_, or pgsql_. A: PDO may be worth it some day, but it's not just there yet. It's a DBAL and it's strengh is (supposedly) to make switching between vendors more easier. It's not really build to catch SQL injections. Anyhow, you want to escape and sanatize your inputs, using prepared statements could be a good measure (I second that). Although I believe it's much easier, e.g. by utilizing filter. A: I've always used the first solution because 99% of the time, variables in $_GET, $_POST, and $_COOKIE are never outputted to the browser. You also won't ever mistakenly write code with an SQL injection (unless you don't use quotes in the query), whereas with the second solution you could easily forget to escape one of your strings eventually. Actually, the reason I've always done it that way was because all my sites had the magic_quotes setting on by default, and once you've written a lot of code using one of those two solutions, it takes a lot of work to change to the other one.
{ "language": "en", "url": "https://stackoverflow.com/questions/47087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Best way in asp.net to force https for an entire site? About 6 months ago I rolled out a site where every request needed to be over https. The only way at the time I could find to ensure that every request to a page was over https was to check it in the page load event. If the request was not over http I would response.redirect("https://example.com") Is there a better way -- ideally some setting in the web.config? A: The IIS7 module will let you redirect. <rewrite> <rules> <rule name="Redirect HTTP to HTTPS" stopProcessing="true"> <match url="(.*)"/> <conditions> <add input="{HTTPS}" pattern="^OFF$"/> </conditions> <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="SeeOther"/> </rule> </rules> </rewrite> A: What you need to do is : 1) Add a key inside of web.config, depending upon the production or stage server like below <add key="HttpsServer" value="stage"/> or <add key="HttpsServer" value="prod"/> 2) Inside your Global.asax file add below method. void Application_BeginRequest(Object sender, EventArgs e) { //if (ConfigurationManager.AppSettings["HttpsServer"].ToString() == "prod") if (ConfigurationManager.AppSettings["HttpsServer"].ToString() == "stage") { if (!HttpContext.Current.Request.IsSecureConnection) { if (!Request.Url.GetLeftPart(UriPartial.Authority).Contains("www")) { HttpContext.Current.Response.Redirect( Request.Url.GetLeftPart(UriPartial.Authority).Replace("http://", "https://www."), true); } else { HttpContext.Current.Response.Redirect( Request.Url.GetLeftPart(UriPartial.Authority).Replace("http://", "https://"), true); } } } } A: In IIS10 (Windows 10 and Server 2016), from version 1709 onwards, there is a new, simpler option for enabling HSTS for a website. Microsoft describe the advantages of the new approach here, and provide many different examples of how to implement the change programmatically or by directly editing the ApplicationHost.config file (which is like web.config but operates at the IIS level, rather than individual site level). ApplicationHost.config can be found in C:\Windows\System32\inetsrv\config. I've outlined two of the example methods here to avoid link rot. Method 1 - Edit the ApplicationHost.config file directly Between the <site> tags, add this line: <hsts enabled="true" max-age="31536000" includeSubDomains="true" redirectHttpToHttps="true" /> Method 2 - Command Line: Execute the following from an elevated command prompt (i.e. right mouse on CMD and run as administrator). Remember to swap Contoso with the name of your site as it appears in IIS Manager. c: cd C:\WINDOWS\system32\inetsrv\ appcmd.exe set config -section:system.applicationHost/sites "/[name='Contoso'].hsts.enabled:True" /commit:apphost appcmd.exe set config -section:system.applicationHost/sites "/[name='Contoso'].hsts.max-age:31536000" /commit:apphost appcmd.exe set config -section:system.applicationHost/sites "/[name='Contoso'].hsts.includeSubDomains:True" /commit:apphost appcmd.exe set config -section:system.applicationHost/sites "/[name='Contoso'].hsts.redirectHttpToHttps:True" /commit:apphost The other methods Microsoft offer in that articles might be better options if you are on a hosted environment where you have limited access. Keep in mind that IIS10 version 1709 is available on Windows 10 now, but for Windows Server 2016 it is on a different release track, and won't be released as a patch or service pack. See here for details about 1709. A: If SSL support is not configurable in your site (ie. 
should be able to turn https on/off) - you can use the [RequireHttps] attribute on any controller / controller action you wish to secure. A: This is a fuller answer based on @Troy Hunt's. Add this function to your WebApplication class in Global.asax.cs: protected void Application_BeginRequest(Object sender, EventArgs e) { // Allow https pages in debugging if (Request.IsLocal) { if (Request.Url.Scheme == "http") { int localSslPort = 44362; // Your local IIS port for HTTPS var path = "https://" + Request.Url.Host + ":" + localSslPort + Request.Url.PathAndQuery; Response.Status = "301 Moved Permanently"; Response.AddHeader("Location", path); } } else { switch (Request.Url.Scheme) { case "https": Response.AddHeader("Strict-Transport-Security", "max-age=31536000"); break; case "http": var path = "https://" + Request.Url.Host + Request.Url.PathAndQuery; Response.Status = "301 Moved Permanently"; Response.AddHeader("Location", path); break; } } } (To enable SSL on your local build enable it in the Properties dock for the project) A: Please use HSTS (HTTP Strict Transport Security) from http://www.hanselman.com/blog/HowToEnableHTTPStrictTransportSecurityHSTSInIIS7.aspx <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.webServer> <rewrite> <rules> <rule name="HTTP to HTTPS redirect" stopProcessing="true"> <match url="(.*)" /> <conditions> <add input="{HTTPS}" pattern="off" ignoreCase="true" /> </conditions> <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" /> </rule> </rules> <outboundRules> <rule name="Add Strict-Transport-Security when HTTPS" enabled="true"> <match serverVariable="RESPONSE_Strict_Transport_Security" pattern=".*" /> <conditions> <add input="{HTTPS}" pattern="on" ignoreCase="true" /> </conditions> <action type="Rewrite" value="max-age=31536000" /> </rule> </outboundRules> </rewrite> </system.webServer> </configuration> Original Answer (replaced with the above on 4 December 2015) basically protected void Application_BeginRequest(Object sender, EventArgs e) { if (HttpContext.Current.Request.IsSecureConnection.Equals(false) && HttpContext.Current.Request.IsLocal.Equals(false)) { Response.Redirect("https://" + Request.ServerVariables["HTTP_HOST"] + HttpContext.Current.Request.RawUrl); } } that would go in the global.asax.cs (or global.asax.vb) i dont know of a way to specify it in the web.config A: For those using ASP.NET MVC. You can use the following to force SSL/TLS over HTTPS over the whole site in two ways: The Hard Way 1 - Add the RequireHttpsAttribute to the global filters: GlobalFilters.Filters.Add(new RequireHttpsAttribute()); 2 - Force Anti-Forgery tokens to use SSL/TLS: AntiForgeryConfig.RequireSsl = true; 3 - Require Cookies to require HTTPS by default by changing the Web.config file: <system.web> <httpCookies httpOnlyCookies="true" requireSSL="true" /> </system.web> 4 - Use the NWebSec.Owin NuGet package and add the following line of code to enable Strict Transport Security accross the site. Don't forget to add the Preload directive below and submit your site to the HSTS Preload site. More information here and here. Note that if you are not using OWIN, there is a Web.config method you can read up on on the NWebSec site. // app is your OWIN IAppBuilder app in Startup.cs app.UseHsts(options => options.MaxAge(days: 30).Preload()); 5 - Use the NWebSec.Owin NuGet package and add the following line of code to enable Public Key Pinning (HPKP) across the site. More information here and here. 
// app is your OWIN IAppBuilder app in Startup.cs app.UseHpkp(options => options .Sha256Pins( "Base64 encoded SHA-256 hash of your first certificate e.g. cUPcTAZWKaASuYWhhneDttWpY3oBAkE3h2+soZS7sWs=", "Base64 encoded SHA-256 hash of your second backup certificate e.g. M8HztCzM3elUxkcjR2S5P4hhyBNf6lHkmjAHKhpGPWE=") .MaxAge(days: 30)); 6 - Include the https scheme in any URL's used. Content Security Policy (CSP) HTTP header and Subresource Integrity (SRI) do not play nice when you imit the scheme in some browsers. It is better to be explicit about HTTPS. e.g. <script src="https://ajax.aspnetcdn.com/ajax/bootstrap/3.3.4/bootstrap.min.js"></script> The Easy Way Use the ASP.NET MVC Boilerplate Visual Studio project template to generate a project with all of this and much more built in. You can also view the code on GitHub. A: It also depends on the brand of your balancer, for the web mux, you would need to look for http header X-WebMux-SSL-termination: true to figure that incoming traffic was ssl. details here: http://www.cainetworks.com/support/redirect2ssl.html A: For @Joe above, "This is giving me a redirect loop. Before I added the code it worked fine. Any suggestions? – Joe Nov 8 '11 at 4:13" This was happening to me as well and what I believe was happening is that there was a load balancer terminating the SSL request in front of the Web server. So, my Web site was always thinking the request was "http", even if the original browser requested it to be "https". I admit this is a bit hacky, but what worked for me was to implement a "JustRedirected" property that I could leverage to figure out the person was already redirected once. So, I test for specific conditions that warrant the redirect and, if they are met, I set this property (value stored in session) prior to the redirection. Even if the http/https conditions for redirection are met the second time, I bypass the redirection logic and reset the "JustRedirected" session value to false. You'll need your own conditional test logic, but here's a simple implementation of the property: public bool JustRedirected { get { if (Session[RosadaConst.JUSTREDIRECTED] == null) return false; return (bool)Session[RosadaConst.JUSTREDIRECTED]; } set { Session[RosadaConst.JUSTREDIRECTED] = value; } } A: I'm going to throw my two cents in. IF you have access to IIS server side, then you can force HTTPS by use of the protocol bindings. For example, you have a website called Blah. In IIS you'd setup two sites: Blah, and Blah (Redirect). For Blah only configure the HTTPS binding (and FTP if you need to, make sure to force it over a secure connection as well). For Blah (Redirect) only configure the HTTP binding. Lastly, in the HTTP Redirect section for Blah (Redirect) make sure to set a 301 redirect to https://blah.com, with exact destination enabled. Make sure that each site in IIS is pointing to it's own root folder otherwise the Web.config will get all screwed up. Also make sure to have HSTS configured on your HTTPSed site so that subsequent requests by the browser are always forced to HTTPS and no redirects occur. A: I spent sometime looking for best practice that make sense and found the following which worked perfected for me. I hope this will save you sometime. 
Using Config file (for example an asp.net website) https://blogs.msdn.microsoft.com/kaushal/2013/05/22/http-to-https-redirects-on-iis-7-x-and-higher/ or on your own server https://www.sslshopper.com/iis7-redirect-http-to-https.html [SHORT ANSWER] Simply The code below goes inside <system.webServer> <rewrite> <rules> <rule name="HTTP/S to HTTPS Redirect" enabled="true" stopProcessing="true"> <match url="(.*)" /> <conditions logicalGrouping="MatchAny"> <add input="{SERVER_PORT_SECURE}" pattern="^0$" /> </conditions> <action type="Redirect" url="https://{HTTP_HOST}{REQUEST_URI}" redirectType="Permanent" /> </rule> </rules> </rewrite> A: If you are unable to set this up in IIS for whatever reason, I'd make an HTTP module that does the redirect for you: using System; using System.Web; namespace HttpsOnly { /// <summary> /// Redirects the Request to HTTPS if it comes in on an insecure channel. /// </summary> public class HttpsOnlyModule : IHttpModule { public void Init(HttpApplication app) { // Note we cannot trust IsSecureConnection when // in a webfarm, because usually only the load balancer // will come in on a secure port the request will be then // internally redirected to local machine on a specified port. // Move this to a config file, if your behind a farm, // set this to the local port used internally. int specialPort = 443; if (!app.Context.Request.IsSecureConnection || app.Context.Request.Url.Port != specialPort) { app.Context.Response.Redirect("https://" + app.Context.Request.ServerVariables["HTTP_HOST"] + app.Context.Request.RawUrl); } } public void Dispose() { // Needed for IHttpModule } } } Then just compile it to a DLL, add it as a reference to your project and place this in web.config: <httpModules> <add name="HttpsOnlyModule" type="HttpsOnly.HttpsOnlyModule, HttpsOnly" /> </httpModules> A: The other thing you can do is use HSTS by returning the "Strict-Transport-Security" header to the browser. The browser has to support this (and at present, it's primarily Chrome and Firefox that do), but it means that once set, the browser won't make requests to the site over HTTP and will instead translate them to HTTPS requests before issuing them. Try this in combination with a redirect from HTTP: protected void Application_BeginRequest(Object sender, EventArgs e) { switch (Request.Url.Scheme) { case "https": Response.AddHeader("Strict-Transport-Security", "max-age=300"); break; case "http": var path = "https://" + Request.Url.Host + Request.Url.PathAndQuery; Response.Status = "301 Moved Permanently"; Response.AddHeader("Location", path); break; } } Browsers that aren't HSTS aware will just ignore the header but will still get caught by the switch statement and sent over to HTTPS. A: -> Simply ADD [RequireHttps] on top of the public class HomeController : Controller. -> And add GlobalFilters.Filters.Add(new RequireHttpsAttribute()); in 'protected void Application_Start()' method in Global.asax.cs file. Which forces your entire application to HTTPS. A: If you are using ASP.NET Core you could try out the nuget package SaidOut.AspNetCore.HttpsWithStrictTransportSecurity. Then you only need to add app.UseHttpsWithHsts(HttpsMode.AllowedRedirectForGet, configureRoutes: routeAction); This will also add HTTP StrictTransportSecurity header to all request made using https scheme. Example code and documentation https://github.com/saidout/saidout-aspnetcore-httpswithstricttransportsecurity#example-code
{ "language": "en", "url": "https://stackoverflow.com/questions/47089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "211" }
Q: Slow SQL Query due to inner and left join? Can anyone explain this behavior or how to get around it? If you execute this query: select * from TblA left join freetexttable ( TblB, *, 'query' ) on TblA.ID = [Key] inner join DifferentDbCatalog.dbo.TblC on TblA.ID = TblC.TblAID It will be very very very slow. If you change that query to use two inner joins instead of a left join, it will be very fast. If you change it to use two left joins instead of an inner join, it will be very fast. You can observe this same behavior if you use a sql table variable instead of the freetexttable as well. The performance problem arises any time you have a table variable (or freetexttable) and a table in a different database catalog where one is in an inner join and the other is in a left join. Does anyone know why this is slow, or how to speed it up? A: A general rule of thumb is that OUTER JOINs cause the number of rows in a result set to increase, while INNER JOINs cause the number of rows in a result set to decrease. Of course, there are plenty of scenarios where the opposite is true as well, but it's more likely to work this way than not. What you want to do for performance is keep the size of the result set (working set) as small as possible for as long as possible. Since both joins match on the first table, changing up the order won't effect the accuracy of the results. Therefore, you probably want to do the INNER JOIN before the LEFT JOIN: SELECT * FROM TblA INNER JOIN DifferentDbCatalog.dbo.TblC on TblA.ID = TblC.TblAID LEFT JOIN freetexttable ( TblB, *, 'query' ) on TblA.ID = [Key] As a practical matter, the query optimizer should be smart enough to compile to use the faster option, regardless of which order you specified for the joins. However, it's good practice to pretend that you have a dumb query optimizer, and that query operations happen in order. This helps future maintainers spot potential errors or assumptions about the nature of the tables. Because the optimizer should re-write things, this probably isn't good enough to fully explain the behavior you're seeing, so you'll still want to examine the execution plan used for each query, and probably add an index as suggested earlier. This is still a good principle to learn, though. A: What you should usually do is turn on the "Show Actual Execution Plan" option and then take a close look at what is causing the slowdown. (hover your mouse over each join to see the details) You'll want to make sure that you are getting an index seek and not a table scan. I would assume what is happening is that SQL is being forced to pull everything from one table into memory in order to do one of the joins. Sometimes reversing the order that you join the tables will also help things. A: Putting freetexttable(TblB, *, 'query') into a temp table may help if it's getting called repeatedly in the execution plan. A: Index the field you use to perform the join. A good rule of thumb is to assign an index to any commonly referenced foreign or candidate keys.
{ "language": "en", "url": "https://stackoverflow.com/questions/47104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Disallow publishing of debug builds for ClickOnce deployment Is there a way to disallow publishing of debug builds with ClickOnce? I only want to allow release builds through, but right now human error causes a debug build to slip through once in a while. We're publishing the build from within Visual Studio. A: One thing you can do is add a condition to the .csproj or .vbproj file that MSBuild will check when doing a build. The condition would check if a publish is occurring and check if the build is a debug build, then do something like run an external tool or otherwise interrupt the build process or cause it to fail. An example might be something like this: <Choose> <When Condition=" '$(Configuration)'=='Debug' "> <Exec Command="C:\foo.bat" ContinueOnError="false" /> </When> </Choose> Where foo.bat is a batch file that return non-zero, thus stopping the publish from occurring. A: I have started to modify the .csproj files to include the following code to throw an error for debug deploys, effectively preventing the deploy from happening: <!-- The following makes sure we don’t try to publish a configuration that defines the DEBUG constant --> <Target Name="BeforePublish"> <Error Condition="'$(DebugSymbols)' == 'true'" Text="You attempted to publish a configuration that defines the DEBUG constant!" /> </Target> Just place it at the end of the file, right before the </Project> tag. (original source: http://www.nathanpjones.com/wp/2010/05/preventing-clickonce-publishing-a-debug-configuration/comment-page-1/#comment-625) A: I have chosen another solution that worked for me: I couldn't change my build process. So I did Tools → Customize... and change the text of the action, adding an alert like "Publish [CONFIGURE TO RELEASE!]", and placing the Publish button next to the Debug/Release configuration option. It's easy! With this I considerably reduced the risk of human error. Those buttons should always be together.
{ "language": "en", "url": "https://stackoverflow.com/questions/47107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Migrating to a GUI without losing business logic written in COBOL We maintain a system that has over a million lines of COBOL code. Does someone have suggestions about how to migrate to a GUI (probably Windows based) without losing all the business logic we have written in COBOL? And yes, some of the business logic is buried inside the current user interface. A: If it was me I would look into something like this: NetCobol for Windows It should be fairly easy to wrap your COBOL with an interface that exposes the functionality (if it isn't already written that way) and then call it from a .NET application. It took us about 15 years to get off of our mainframe, because we didn't do something like this. A: Writing a screen scraper is probably your best bet. Some of the major ERP systems have done this for years during a transition from server based apps to 3-tier applications. One i have worked with had loads of interesting features such as drop down lists for regularly used fields, date pop ups and even client based macro languages based on the scraping input. These weren't great but worked well for the clients and made sure the applications still worked in a reliable fashion. There is a lot of different ways to put this together, but if you put some thought into it you could probably use java or .net to create a desktop based application and with a little extra effort make a web based implementation. A: Microfocus provide a tool called Enterprise Server which allows COBOL to interact with web services. If you have a COBOL program A and another COBOL program B and A calls B via the interface section, the tool allows you to expose B's interface section as a web service. For program A, you then generate a client proxy and A can now call B via a web service. Of course, because B now has a web service any other type of program (command line, Windows application, Java, ASP etc.) can now also call it. Using this approach, you can "nibble away at the edges" to move the GUI to a modern, browser based approach using something like ASP while still utilising the COBOL business engine. And once you have a decent set of web services, these can be used for any new development which provides a way of moving away from COBOL in the longer term. A: You could use an ESB to expose the back-end legacy services, and then code your GUI to invoke the services via the ESB. Then you can begin replacing the legacy services with implementations on your new platform of choice. The GUI need not be aware of the cut-over of back-end service implementation, as long as the interface to the service does not change - minor changes may hidden from the GUI by the ESB. Business logic that resides in the legacy user interface layer will need to be refactored by extracting the business logic and exposing it as new services on the new platform to be consumed by the new GUI via the ESB. As for the choice of platform for the new GUI, why not consider a web-based UI rather than a native windows platform, then at least updates to the UI will only need to be applied to the web-server rather than having to roll-out changes to each individual work-station.
{ "language": "en", "url": "https://stackoverflow.com/questions/47133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is there a way to ensure entire code block execution in a .NET thread? In my C# program, I have a thread that represents a running test, which can be aborted by a click on a button labeled "Interrupt execution". In order for the thread (and therefore the test) to terminate in an elegant manner (and do some vital work beforehand), this button is enabled only in some well-defined moments, in which I catch ThreadAbortedException, do Thread.ResetAbort() and die beautifully (that is, the thread). The problem is that, in the time window in which aborting is possible, there are some tasks that need to be done from start to finish once initiated, and, so, I fear TAE. Locks don't provide a solution for this, and, although finally blocks do, I don't find it elegant to wrap important code in the following manner: try { } finally { // vital code } However, I didn't find any other solution. Is this another way to delay the interference of ThreadAbortException until the end of the block? A: The situations under which a ThreadAbortException will be generated can be complex. A better option might be to ask why you are using ThreadAbortExceptions at all. A better pattern would be to simply use a flag to indicate to the running thread that it should abort, and then test the flag regularly while it is working. If it is set you can throw an exception at that point if you want. With this pattern you are totally in control of when the thread will act on the abort, and don't have to worry about the critical operations being interrupted. A: Use Thread.BeginCriticalRegion() Notifies a host that execution is about to enter a region of code in which the effects of a thread abort or unhandled exception might jeopardize other tasks in the application domain. Thread.BeginCriticalRegion() //do important stuff here Thread.EndCriticalRegion()
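A minimal sketch of the flag-based pattern the first answer recommends; all of the names (TestRunner, RequestStop, the step methods) are illustrative rather than taken from the question:

public class TestRunner
{
    // Cooperative cancellation: the UI button only sets a flag, and the
    // worker checks it at the points where stopping is safe.
    private volatile bool stopRequested;

    // Called from the "Interrupt execution" button.
    public void RequestStop()
    {
        stopRequested = true;
    }

    // Body of the test thread.
    public void Run()
    {
        while (!stopRequested)
        {
            DoOneIndivisibleStep();   // a step that starts always runs to completion
        }
        CleanUp();                    // the "die beautifully" part
    }

    private void DoOneIndivisibleStep() { /* vital work */ }
    private void CleanUp() { /* release resources, report results */ }
}

Because nothing ever calls Thread.Abort, there is no ThreadAbortException to catch, no ResetAbort, and no need to wrap the vital code in try/finally.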
{ "language": "en", "url": "https://stackoverflow.com/questions/47144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Customising log4j adapter I want to build my own custom log4j (network) adapter to solve the problem that I posted here. I have looked at the log4j documentation and I can't see where/if the developers explain how to do this. Can anyone point me in the right direction to get started? A: Write custom appenders for log4j A: May I suggest that you leverage the existing SyslogAppender?
{ "language": "en", "url": "https://stackoverflow.com/questions/47145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to set the order in subnodes of a tree structure I have a tree representation of pages in a CMS application. I understand how to persist the tree in the database. However, I don't have a good way to: A) Reorder subpages under a particular parent page. B) Provide a UI implementation that allows the user to change the order. Any suggestions? A: A) I have a similar CMS app and I store an ordinal value with the page for a particular tree and sort on this value -- because lots of my pages appear in completely different sites I have to maintain the ordinal number against a page / tree combination. B) I too would like a better way to do this. Currently they click on the node in the treeview and in the main page screen they can move the page around. I've tried drag and drop with java script and other solutions but my users could never work with it without lots of hand holding. I'll be interested in the responses to this one. A: Changing the order itself will require you store some sort of ordering along with each page in the database. Just the current highest / lowest value +/- 1 would probably be a fine starting point. Once you've got that ordering in there, reordering becomes a case of swapping two values or changing the value for one page to be between two others (you could use floats I guess, but you may need to renumber if you split it too many times). Anyway, once you've got that, you need a UI. I've seen a very simple 'swap this with the one above/below' approach which can be a simple web link or an AJAX call. You could also present all the page values to the user and ask them to renumber them as they see fit. If you want to get fancy, JavaScript drag and drop might be a good approach. I've used ExtJS and Mootools as frameworks in this kind of area. If you don't need all the Extjs widgets, I'd say well away from it in future, and look at something like the Mootools Dynamic Sortables demo.
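A short sketch of the ordinal-swap idea described in both answers, assuming each page row keeps an integer sort value within its parent; the Page type and the persistence call are hypothetical:

public class Page
{
    public int Id { get; set; }
    public int ParentId { get; set; }
    public int SortOrder { get; set; }
}

public static class PageOrdering
{
    // "Swap this with the one above/below": exchange the two ordinals
    // and save both rows, ideally inside one transaction.
    public static void Swap(Page a, Page b)
    {
        int temp = a.SortOrder;
        a.SortOrder = b.SortOrder;
        b.SortOrder = temp;
        // repository.Save(a); repository.Save(b);   // hypothetical persistence call
    }
}

Reading the siblings back ordered by SortOrder then reproduces whatever order the user chose in the UI.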
{ "language": "en", "url": "https://stackoverflow.com/questions/47163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Do you know of any "best practice" or "what works" vi tutorial for programmers? There are thousands of vi tutorials on the web, most of them generically listing all the commands. There are even videos on youtube which show basic functionality. But does anyone know of a vi tutorial which focuses on the needs of programmers? For example when I program in Perl with vi, moving to the "next paragraph" is meaningless. I want to know which commands seasoned vi users combine to e.g: * *copy everything inside of parentheses *copy a function *copy and paste a variable (e.g. 2yw) *etc. I am sure there are lots of functions using multiple-file capability, and the maps, macros, reading in of files for template code, regular expression search, jumping to functions, perhaps minimal code completion, or other features that emulate what programmers have gotten used to in Visual Studio and Eclipse, etc. A: A nice collection of vimtips. And the best Vim cheatsheet around. A: I just ended up reading the vim manual a few times, over the years, picking up useful features on each iteration. One thing that really made vim work for me as a perl IDE was starting to use tags, as explained here: http://www.vim.org/tips/tip.php?tip_id=94. Using the pltags script that ships with vim, you can jump around between modules to find your functions, methods, etc. A: If you are a beginner, vimtutor would be a good way to start with. (Type vimtutor on your shell and get going). And once you get hold of the basics of vim, you can look around web and figure out things for yourself. This and this may be an interesting read.
{ "language": "en", "url": "https://stackoverflow.com/questions/47167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is there an event that triggers if the number of ListViewItems in a ListView changes? (Windows Forms) I'd like to enable/disable some other controls based on how many items are in my ListView control. I can't find any event that would do this, either on the ListView itself or on the ListViewItemCollection. Maybe there's a way to generically watch any collection in C# for changes? I'd be happy with other events too, even ones that sometimes fire when the items don't change, but for example the ControlAdded and Layout events didn't work :(. A: @Domenic Not too sure, Never quite got that far in the thought process. Another solution might be to extend ListView, and when adding and removing stuff, instead of calling .items.add, and items.remove, you call your other functions. It would still be possible to add and remove without events being raised, but with a little code review to make sure .items.add and .items.remove weren't called directly, it could work out quite well. Here's a little example. I only showed 1 Add function, but there are 6 you would have to implement, if you wanted to have use of all the available add functions. There's also .AddRange, and .Clear that you might want to take a look at. Public Class MonitoredListView Inherits ListView Public Event ItemAdded() Public Event ItemRemoved() Public Sub New() MyBase.New() End Sub Public Function AddItem(ByVal Text As String) As ListViewItem RaiseEvent ItemAdded() MyBase.Items.Add(Text) End Function Public Sub RemoveItem(ByVal Item As ListViewItem) RaiseEvent ItemRemoved() MyBase.Items.Remove(Item) End Sub End Class A: I can't find any events that you could use. Perhaps you could subclass ListViewItemCollection, and raise your own event when something is added, with code similar to this. Public Class MyListViewItemCollection Inherits ListView.ListViewItemCollection Public Event ItemAdded(ByVal Item As ListViewItem) Sub New(ByVal owner As ListView) MyBase.New(owner) End Sub Public Overrides Function Add(ByVal value As System.Windows.Forms.ListViewItem) As System.Windows.Forms.ListViewItem Dim Item As ListViewItem Item = MyBase.Add(value) RaiseEvent ItemAdded(Item) Return Item End Function End Class A: I think the best thing that you can do here is to subclass ListView and provide the events that you want.
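For C# callers, a sketch equivalent to the VB wrapper above; only AddItem and RemoveItem are covered, so AddRange, Clear and the remaining Add overloads would still bypass the events, and the MonitoredListView name simply mirrors the answer:

using System;
using System.Windows.Forms;

public class MonitoredListView : ListView
{
    public event EventHandler ItemAdded;
    public event EventHandler ItemRemoved;

    // Callers must go through these wrappers for the events to fire;
    // direct Items.Add/Items.Remove calls are not intercepted.
    public ListViewItem AddItem(string text)
    {
        ListViewItem item = Items.Add(text);
        ItemAdded?.Invoke(this, EventArgs.Empty);
        return item;
    }

    public void RemoveItem(ListViewItem item)
    {
        Items.Remove(item);
        ItemRemoved?.Invoke(this, EventArgs.Empty);
    }
}

The enabling/disabling of the other controls can then live in handlers attached to ItemAdded and ItemRemoved, checking Items.Count each time.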
{ "language": "en", "url": "https://stackoverflow.com/questions/47169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I monitor the computer's CPU, memory, and disk usage in Java? I would like to monitor the following system information in Java: * *Current CPU usage** (percent) *Available memory* (free/total) *Available disk space (free/total) *Note that I mean overall memory available to the whole system, not just the JVM. I'm looking for a cross-platform solution (Linux, Mac, and Windows) that doesn't rely on my own code calling external programs or using JNI. Although these are viable options, I would prefer not to maintain OS-specific code myself if someone already has a better solution. If there's a free library out there that does this in a reliable, cross-platform manner, that would be great (even if it makes external calls or uses native code itself). Any suggestions are much appreciated. To clarify, I would like to get the current CPU usage for the whole system, not just the Java process(es). The SIGAR API provides all the functionality I'm looking for in one package, so it's the best answer to my question so far. However, due it being licensed under the GPL, I cannot use it for my original purpose (a closed source, commercial product). It's possible that Hyperic may license SIGAR for commercial use, but I haven't looked into it. For my GPL projects, I will definitely consider SIGAR in the future. For my current needs, I'm leaning towards the following: * *For CPU usage, OperatingSystemMXBean.getSystemLoadAverage() / OperatingSystemMXBean.getAvailableProcessors() (load average per cpu) *For memory, OperatingSystemMXBean.getTotalPhysicalMemorySize() and OperatingSystemMXBean.getFreePhysicalMemorySize() *For disk space, File.getTotalSpace() and File.getUsableSpace() Limitations: The getSystemLoadAverage() and disk space querying methods are only available under Java 6. Also, some JMX functionality may not be available to all platforms (i.e. it's been reported that getSystemLoadAverage() returns -1 on Windows). Although originally licensed under GPL, it has been changed to Apache 2.0, which can generally be used for closed source, commercial products. A: For disk space, if you have Java 6, you can use the getTotalSpace and getFreeSpace methods on File. If you're not on Java 6, I believe you can use Apache Commons IO to get some of the way there. I don't know of any cross platform way to get CPU usage or Memory usage I'm afraid. A: Along the lines of what I mentioned in this post. I recommend you use the SIGAR API. I use the SIGAR API in one of my own applications and it is great. You'll find it is stable, well supported, and full of useful examples. It is open-source with a GPL 2 Apache 2.0 license. Check it out. I have a feeling it will meet your needs. Using Java and the Sigar API you can get Memory, CPU, Disk, Load-Average, Network Interface info and metrics, Process Table information, Route info, etc. A: A lot of this is already available via JMX. With Java 5, JMX is built-in and they include a JMX console viewer with the JDK. You can use JMX to monitor manually, or invoke JMX commands from Java if you need this information in your own run-time. A: The following supposedly gets you CPU and RAM. See ManagementFactory for more details. 
import java.lang.management.ManagementFactory; import java.lang.management.OperatingSystemMXBean; import java.lang.reflect.Method; import java.lang.reflect.Modifier; private static void printUsage() { OperatingSystemMXBean operatingSystemMXBean = ManagementFactory.getOperatingSystemMXBean(); for (Method method : operatingSystemMXBean.getClass().getDeclaredMethods()) { method.setAccessible(true); if (method.getName().startsWith("get") && Modifier.isPublic(method.getModifiers())) { Object value; try { value = method.invoke(operatingSystemMXBean); } catch (Exception e) { value = e; } // try System.out.println(method.getName() + " = " + value); } // if } // for } A: /* YOU CAN TRY THIS TOO */ import java.io.File; import java.lang.management.ManagementFactory; // import java.lang.management.OperatingSystemMXBean; import java.lang.reflect.Method; import java.lang.reflect.Modifier; import java.lang.management.RuntimeMXBean; import java.io.*; import java.net.*; import java.util.*; import java.io.LineNumberReader; import java.lang.management.ManagementFactory; import com.sun.management.OperatingSystemMXBean; import java.lang.management.ManagementFactory; import java.util.Random; public class Pragati { public static void printUsage(Runtime runtime) { long total, free, used; int mb = 1024*1024; total = runtime.totalMemory(); free = runtime.freeMemory(); used = total - free; System.out.println("\nTotal Memory: " + total / mb + "MB"); System.out.println(" Memory Used: " + used / mb + "MB"); System.out.println(" Memory Free: " + free / mb + "MB"); System.out.println("Percent Used: " + ((double)used/(double)total)*100 + "%"); System.out.println("Percent Free: " + ((double)free/(double)total)*100 + "%"); } public static void log(Object message) { System.out.println(message); } public static int calcCPU(long cpuStartTime, long elapsedStartTime, int cpuCount) { long end = System.nanoTime(); long totalAvailCPUTime = cpuCount * (end-elapsedStartTime); long totalUsedCPUTime = ManagementFactory.getThreadMXBean().getCurrentThreadCpuTime()-cpuStartTime; //log("Total CPU Time:" + totalUsedCPUTime + " ns."); //log("Total Avail CPU Time:" + totalAvailCPUTime + " ns."); float per = ((float)totalUsedCPUTime*100)/(float)totalAvailCPUTime; log( per); return (int)per; } static boolean isPrime(int n) { // 2 is the smallest prime if (n <= 2) { return n == 2; } // even numbers other than 2 are not prime if (n % 2 == 0) { return false; } // check odd divisors from 3 // to the square root of n for (int i = 3, end = (int)Math.sqrt(n); i <= end; i += 2) { if (n % i == 0) { return false; } } return true; } public static void main(String [] args) { int mb = 1024*1024; int gb = 1024*1024*1024; /* PHYSICAL MEMORY USAGE */ System.out.println("\n**** Sizes in Mega Bytes ****\n"); com.sun.management.OperatingSystemMXBean operatingSystemMXBean = (com.sun.management.OperatingSystemMXBean)ManagementFactory.getOperatingSystemMXBean(); //RuntimeMXBean runtimeMXBean = ManagementFactory.getRuntimeMXBean(); //operatingSystemMXBean = (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean(); com.sun.management.OperatingSystemMXBean os = (com.sun.management.OperatingSystemMXBean) java.lang.management.ManagementFactory.getOperatingSystemMXBean(); long physicalMemorySize = os.getTotalPhysicalMemorySize(); System.out.println("PHYSICAL MEMORY DETAILS \n"); System.out.println("total physical memory : " + physicalMemorySize / mb + "MB "); long physicalfreeMemorySize = os.getFreePhysicalMemorySize(); 
System.out.println("total free physical memory : " + physicalfreeMemorySize / mb + "MB"); /* DISC SPACE DETAILS */ File diskPartition = new File("C:"); File diskPartition1 = new File("D:"); File diskPartition2 = new File("E:"); long totalCapacity = diskPartition.getTotalSpace() / gb; long totalCapacity1 = diskPartition1.getTotalSpace() / gb; double freePartitionSpace = diskPartition.getFreeSpace() / gb; double freePartitionSpace1 = diskPartition1.getFreeSpace() / gb; double freePartitionSpace2 = diskPartition2.getFreeSpace() / gb; double usablePatitionSpace = diskPartition.getUsableSpace() / gb; System.out.println("\n**** Sizes in Giga Bytes ****\n"); System.out.println("DISC SPACE DETAILS \n"); //System.out.println("Total C partition size : " + totalCapacity + "GB"); //System.out.println("Usable Space : " + usablePatitionSpace + "GB"); System.out.println("Free Space in drive C: : " + freePartitionSpace + "GB"); System.out.println("Free Space in drive D: : " + freePartitionSpace1 + "GB"); System.out.println("Free Space in drive E: " + freePartitionSpace2 + "GB"); if(freePartitionSpace <= totalCapacity%10 || freePartitionSpace1 <= totalCapacity1%10) { System.out.println(" !!!alert!!!!"); } else System.out.println("no alert"); Runtime runtime; byte[] bytes; System.out.println("\n \n**MEMORY DETAILS ** \n"); // Print initial memory usage. runtime = Runtime.getRuntime(); printUsage(runtime); // Allocate a 1 Megabyte and print memory usage bytes = new byte[1024*1024]; printUsage(runtime); bytes = null; // Invoke garbage collector to reclaim the allocated memory. runtime.gc(); // Wait 5 seconds to give garbage collector a chance to run try { Thread.sleep(5000); } catch(InterruptedException e) { e.printStackTrace(); return; } // Total memory will probably be the same as the second printUsage call, // but the free memory should be about 1 Megabyte larger if garbage // collection kicked in. printUsage(runtime); for(int i = 0; i < 30; i++) { long start = System.nanoTime(); // log(start); //number of available processors; int cpuCount = ManagementFactory.getOperatingSystemMXBean().getAvailableProcessors(); Random random = new Random(start); int seed = Math.abs(random.nextInt()); log("\n \n CPU USAGE DETAILS \n\n"); log("Starting Test with " + cpuCount + " CPUs and random number:" + seed); int primes = 10000; // long startCPUTime = ManagementFactory.getThreadMXBean().getCurrentThreadCpuTime(); start = System.nanoTime(); while(primes != 0) { if(isPrime(seed)) { primes--; } seed++; } float cpuPercent = calcCPU(startCPUTime, start, cpuCount); log("CPU USAGE : " + cpuPercent + " % "); try { Thread.sleep(1000); } catch (InterruptedException e) {} } try { Thread.sleep(500); }`enter code here` catch (Exception ignored) { } } } A: In JDK 1.7, you can get system CPU and memory usage via com.sun.management.OperatingSystemMXBean. This is different than java.lang.management.OperatingSystemMXBean. long getCommittedVirtualMemorySize() // Returns the amount of virtual memory that is guaranteed to be available to the running process in bytes, or -1 if this operation is not supported. long getFreePhysicalMemorySize() // Returns the amount of free physical memory in bytes. long getFreeSwapSpaceSize() // Returns the amount of free swap space in bytes. double getProcessCpuLoad() // Returns the "recent cpu usage" for the Java Virtual Machine process. long getProcessCpuTime() // Returns the CPU time used by the process on which the Java virtual machine is running in nanoseconds. 
double getSystemCpuLoad() // Returns the "recent cpu usage" for the whole system. long getTotalPhysicalMemorySize() // Returns the total amount of physical memory in bytes. long getTotalSwapSpaceSize() // Returns the total amount of swap space in bytes. A: This works for me perfectly without any external API, just native Java hidden feature :) import com.sun.management.OperatingSystemMXBean; ... OperatingSystemMXBean osBean = ManagementFactory.getPlatformMXBean( OperatingSystemMXBean.class); // What % CPU load this current JVM is taking, from 0.0-1.0 System.out.println(osBean.getProcessCpuLoad()); // What % load the overall system is at, from 0.0-1.0 System.out.println(osBean.getSystemCpuLoad()); A: The following code is Linux (maybe Unix) only, but it works in a real project. private double getAverageValueByLinux() throws InterruptedException { try { long delay = 50; List<Double> listValues = new ArrayList<Double>(); for (int i = 0; i < 100; i++) { long cput1 = getCpuT(); Thread.sleep(delay); long cput2 = getCpuT(); double cpuproc = (1000d * (cput2 - cput1)) / (double) delay; listValues.add(cpuproc); } listValues.remove(0); listValues.remove(listValues.size() - 1); double sum = 0.0; for (Double double1 : listValues) { sum += double1; } return sum / listValues.size(); } catch (Exception e) { e.printStackTrace(); return 0; } } private long getCpuT() throws FileNotFoundException, IOException { BufferedReader reader = new BufferedReader(new FileReader("/proc/stat")); String line = reader.readLine(); Pattern pattern = Pattern.compile("\\D+(\\d+)\\D+(\\d+)\\D+(\\d+)\\D+(\\d+)"); Matcher m = pattern.matcher(line); long cpuUser = 0; long cpuSystem = 0; if (m.find()) { cpuUser = Long.parseLong(m.group(1)); cpuSystem = Long.parseLong(m.group(3)); } return cpuUser + cpuSystem; } A: Make a batch file "Pc.bat" as, typeperf -sc 1 "\mukit\processor(_Total)\%% Processor Time" You can use the class MProcess, /* *Md. Mukit Hasan *CSE-JU,35 **/ import java.io.*; public class MProcessor { public MProcessor() { String s; try { Process ps = Runtime.getRuntime().exec("Pc.bat"); BufferedReader br = new BufferedReader(new InputStreamReader(ps.getInputStream())); while((s = br.readLine()) != null) { System.out.println(s); } } catch( Exception ex ) { System.out.println(ex.toString()); } } } Then after some string manipulation, you get the CPU use. You can use the same process for other tasks. --Mukit Hasan A: Have a look at this very detailed article: http://nadeausoftware.com/articles/2008/03/java_tip_how_get_cpu_and_user_time_benchmarking#UsingaSuninternalclasstogetJVMCPUtime To get the percentage of CPU used, all you need is some simple maths: MBeanServerConnection mbsc = ManagementFactory.getPlatformMBeanServer(); OperatingSystemMXBean osMBean = ManagementFactory.newPlatformMXBeanProxy( mbsc, ManagementFactory.OPERATING_SYSTEM_MXBEAN_NAME, OperatingSystemMXBean.class); long nanoBefore = System.nanoTime(); long cpuBefore = osMBean.getProcessCpuTime(); // Call an expensive task, or sleep if you are monitoring a remote process long cpuAfter = osMBean.getProcessCpuTime(); long nanoAfter = System.nanoTime(); long percent; if (nanoAfter > nanoBefore) percent = ((cpuAfter-cpuBefore)*100L)/ (nanoAfter-nanoBefore); else percent = 0; System.out.println("Cpu usage: "+percent+"%"); Note: You must import com.sun.management.OperatingSystemMXBean and not java.lang.management.OperatingSystemMXBean. A: The accepted answer in 2008 recommended SIGAR. 
However, as a comment from 2014 (@Alvaro) says: Be careful when using Sigar, there are problems on x64 machines... Sigar 1.6.4 is crashing: EXCEPTION_ACCESS_VIOLATION, and it seems the library hasn't been updated since 2010. My recommendation is to use https://github.com/oshi/oshi or the answer mentioned above. A: OperatingSystemMXBean osBean = ManagementFactory.getPlatformMXBean(OperatingSystemMXBean.class); System.out.println((osBean.getCpuLoad() * 100) + "%"); import com.sun.management.OperatingSystemMXBean; It only starts working after the second call, so save the osBean and put it in a loop.
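To make that last point concrete, here is a minimal sketch (not from the original answers) of polling the bean in a loop. It assumes a JVM whose com.sun.management.OperatingSystemMXBean exposes getSystemCpuLoad() (JDK 7+; on newer JDKs the equivalent is getCpuLoad()), and that the first reading may be negative until the bean has two samples to compare.

import java.lang.management.ManagementFactory;
import com.sun.management.OperatingSystemMXBean;

public class CpuPoller {
    public static void main(String[] args) throws InterruptedException {
        // Cast to the com.sun.management variant to reach the system-wide counters.
        OperatingSystemMXBean osBean =
                (OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        for (int i = 0; i < 10; i++) {
            double load = osBean.getSystemCpuLoad(); // 0.0-1.0, or negative if not yet available
            if (load >= 0) {
                System.out.printf("System CPU load: %.1f%%%n", load * 100);
            }
            Thread.sleep(1000); // give the bean time to gather a fresh sample
        }
    }
}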
{ "language": "en", "url": "https://stackoverflow.com/questions/47177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "206" }
Q: Which Version of Python to Use for Maximum Compatibility If I was going to start an open source project using Python what version should I use to ensure that the vast majority of users can use it on their system? I'm the kind of person who quickly jumps to the next version (which I'll do when Python 3 comes out) but many people may be more conservative if their current version seems to be working fine. What version would hit the sweet spot but still allow me to enjoy the newest and coolest language enhancements? A: As python is in kind of an transition phase towards python 3 with breaking backward compatibility I don't think it is a good idea to go python 3 only. Based on the time line there will be at least one or two following releases of the 2.x series after 2.6/3.0 in october. Beside not having python 3 available on your target platforms, it will take some time until important external python libraries will be ported and usable on python 3. So as Matthew suggests staying at 2.4/2.5 and keeping the transition plan to python 3 in mind is a solid choice. A: I've not seen a system with less than 2.3 installed for some time. Mostly 2.4+ is installed by default for most OS I use now. 2.3 is just on an older Solaris machine. Linux distros tend to have 2.4+, as does OS X. IIRC, 2.4 has a lot of the features 2.5 does, but usable only with from __future__ import * A: You can use different versions of python on each machine. Coding something new, I would not use anything less than python2.5. You can do apt-get install python2.5 on stock debian stable. For windows, don't really worry about it. It's very easy to install the python2.5 msi. If the users can't be bothered to do that, you can deploy an executable with py2exe (so simple) and build an installer with inno setup (again simple) then it will behave like a standard windows application and will use its own python dlls, so no need to have python installed. Like Peter said: keep in mind the transition to 3.0 but don't build on it yet. A: Python 2.3, or 2.2 if you can live without the many modules that were added (e.g. datetime, csv, logging, optparse, zipimport), aren't using SSL, and are willing to add boilerplate for True/False. 2.4 added decorators. generator expressions, reversed(), sorted(), and the subprocess and decimal modules. Although these are all nice, it's easy to write Pythonic code without them (assuming that your project wouldn't make heavy use of them). 2.5 added with, relative imports, better 64 bit support, and quite a bit of speed. You could live without all of those easily enough. 2.6 isn't released (although it's very close), so while it might appeal to developers, it doesn't have the compatibility you're after. Take a look at the release notes for 2.3, 2.4, 2.5, and the upcoming 2.6 (use http://www.python.org/download/releases/2.Y/highlights/ where 'Y' is the minor version). FWIW, for SpamBayes we support 2.2 and above (2.2 requires installing the email package separately). This isn't overly taxing, but the 2.3 additions are useful enough and 2.3 old enough that we'll probably drop 2.2 before long. A: If the project is going to be mainstream and will be run on Linux the only sensible choise is 2.4 - just because it is a pain to get anything else installed as default on Enterprise Linuxes. In any case, any modern OS will/can have 2.4 or newer. A: You should use Python 2.7, the final major version of Python 2. Python 3.x currently has limited 3rd-party library support, and is often not installed by default. 
So you are looking at the 2.x series. Python 2.7 is essentially fully backwards-compatible with earlier 2.xs. In addition, it can give deprecation warnings about things that won't work in Python 3. (Particularly, it will pay to maintain unit tests, and to be pedantic about Unicode- vs. byte-strings.) The warnings will force you to write good code that the automated 2to3 tool will be able to translate to Python 3. Guido van Rossum officially recommends maintaining a single Python 2 code-base, and using 2to3 together with unit testing to produce compatible releases for Python 2 and 3. (Since PEP 3000 was written, Python 2.6 has been superseded by 2.7.)
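As a small illustration of that single-codebase approach (this snippet is mine, not from the original answers), the __future__ imports below make Python 2.6/2.7 code behave more like Python 3, which keeps the later 2to3 translation step mechanical:

from __future__ import print_function, division

import sys

if sys.version_info < (2, 6):
    raise RuntimeError("this example assumes Python 2.6 or newer")

# print() and true division now follow Python 3 semantics on Python 2.
print("Running on Python", sys.version.split()[0])
print(7 / 2)   # 3.5, not 3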
{ "language": "en", "url": "https://stackoverflow.com/questions/47198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: SQL Server, nvarchar(MAX) or ntext, image or varbinary? When should I choose one or the other? What are the implications regarding space and (full-text) indexing? BTW: I'm currently using SQL Server 2005, planning to upgrade to 2008 in the following months. Thanks A: Once you put it in the blob, it's going to be difficult to use in normal SQL comparisons. See Using Large-Value Data Types. A: The new (max) fields make it a lot easier to deal with the data from .NET code. With varbinary(max), you simply set the value of a SqlParameter to a byte array and you are done. With the image field, you need to write a few hundred lines of code to stream the data into and out of the field. Also, the image/text fields are deprecated in favor of varbinary(max) and varchar(max), and future versions of SQL Server will discontinue support for them.
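As a short, hedged illustration of the varbinary(max) point above (the table, column, and variable names are invented for the example), writing a byte array with ADO.NET looks roughly like this:

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "INSERT INTO Documents (Name, Content) VALUES (@name, @content)", conn))
{
    cmd.Parameters.AddWithValue("@name", "report.pdf");
    // A SqlParameter of type VarBinary with size -1 maps to varbinary(max).
    cmd.Parameters.Add("@content", SqlDbType.VarBinary, -1).Value = fileBytes;
    conn.Open();
    cmd.ExecuteNonQuery();
}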
{ "language": "en", "url": "https://stackoverflow.com/questions/47203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How many rows should be in the (main) buffer of a virtual Listview control? How many rows should be in the (main) buffer of a virtual Listview control? I am writing an application in pure 'C' against the Win32 API. There is an ODBC connection to a database which will retrieve the items (actually rows). The MSDN sample code implies a fixed size buffer of 30 for the end cache (which would almost certainly not be optimal). I think the end cache and the main cache should be the same size. My thinking is that the buffer should be more than the maximum number of items that could be displayed by the list view at one time. I guess this could be re-calculated each time the Listview was resized? Or is it just better to go with a large fixed value? If so, what is that value? A: Use the ListView_ApproximateViewRect (or the LVM_APPROXIMATEVIEWRECT message) to get the view rect height. Use the ListView_GetItemRect (or the LVM_GETITEMRECT message) to get the height of an item. Divide the view rect height by the height of an item to get the number of items that can fit in your view. Do this calculation on each size event. Then create your buffer accordingly. A: The LVN_ODCACHEHINT notification message will let you know how many items it is going to ask for. This could help you in deciding how big your cache should be. A: @Brian R. Bondy Thanks for the explicit help on how to get the number of items. In fact I was already working my way to understanding that it could be done (for list or report view) with ListView_GetCountPerPage, and I would use your way to get it for the others, though I don't need ListView_ApproximateViewRect since I will already know the new size of the ListView. @Lars Truijens I am already using LVN_ODCACHEHINT and had thought about using that to set the buffer size; however, I need to read to the end of the SQL data to find the last item to get the number of rows returned from ODBC. Since that would be the best time to fill the 'end cache', I think I have to set up the number of items (and therefore fill the buffer) before we get a call to LVN_ODCACHEHINT. I guess my real question is one of optimization, for which I think Brian hinted at the answer. The amount of overhead in trashing your buffer and reallocating memory is smaller than the overhead of going out to the network and doing an ODBC read, so make the buffer fairly small and change it often instead. Is this right? I have done a little more playing around, and it seems that LVN_ODCACHEHINT generally fills the main buffer correctly, and only misses if a row (in report mode) is partially visible. So I think the answer for the size of the cache is: the total number of displayed items, plus one row of displayed items (since in the icon views you have multiple items per row). You would then re-read the cache with each WM_SIZE and LVN_ODCACHEHINT if either had a different begin and end item number. A: The answer would seem to be: (Or a random collection of notes as I fiddle around with ideas) As a general answer for buffers: Start with some amount, in this case a screen full (I add an extra row in case the next is partially uncovered), and then every time the screen is scrolled, double the buffer size (up to the point before you run out of memory). Which would seem to be wrong. As it turns out, most ways of loading data are already buffered - ODBC calls or file I/O. Pretty much anything that isn't, that I can think of, is either in memory or is recalculated on the fly. 
This means the answer really is: take the values provided in LVN_ODCACHEHINT (and add 1 either side - this just seems to work faster if you don't have an integral height).
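For reference, a minimal sketch (mine, not from the original thread) of the items-per-page calculation described in the first answer; it assumes hwndList is a valid list-view handle, and that in report/list view ListView_GetCountPerPage already does the view-rect/item-height division for you:

#include <windows.h>
#include <commctrl.h>

/* Visible rows plus one for a partially uncovered row, as suggested above. */
static int VisibleRowCount(HWND hwndList)
{
    return ListView_GetCountPerPage(hwndList) + 1;   /* list/report view only */
}

/* Alternative for icon views: divide the client rectangle by the item height. */
static int VisibleRowCountByRect(HWND hwndList)
{
    RECT rcItem, rcView;
    ListView_GetItemRect(hwndList, 0, &rcItem, LVIR_BOUNDS);
    GetClientRect(hwndList, &rcView);
    int itemHeight = rcItem.bottom - rcItem.top;
    return itemHeight > 0 ? (rcView.bottom - rcView.top) / itemHeight + 1 : 1;
}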
{ "language": "en", "url": "https://stackoverflow.com/questions/47206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Django: Print url of view without hardcoding the url Can I print out a url /admin/manage/products/add of a certain view in a template? Here is the rule I want to create a link for (r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}), I would like to have /manage/products/add in a template without hardcoding it. How can I do this? Edit: I am not using the default admin (well, I am, but it is at another URL); this is my own. A: If you use named url patterns you can do the following in your template {% url create_object %} A: You can use get_absolute_url, but that will only work for a particular object. Since your object hasn't been created yet, it won't work in this case. You want to use named URL patterns. Here's a quick intro: Change the line in your urls.py to: (r'^manage/products/add/$', create_object, {'model': Product, 'post_save_redirect': ''}, "create-product"), Then, in your template you use this to display the URL: {% url create-product %} If you're using Django 1.5 or higher you need this: {% url 'create-product' %} You can do some more powerful things with named URL patterns, they're very handy. Note that they are only in the development version (and also 1.0). A: The preferred way of creating the URL is by adding a get_absolute_url method to your model classes. You can hardcode the path there so you at least get closer to following the KISS philosophy. You can go further by utilizing the permalink decorator that figures the path based on the urls configuration. You can read more in the django documentation here.
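A small, hedged sketch tying the pieces together (the pattern name and extra-arguments dict follow the answers above; the url() helper and import paths reflect old-style Django 1.x URLconfs and are only illustrative):

# urls.py -- give the generic view a name so templates can reverse it
from django.conf.urls.defaults import patterns, url
from django.views.generic.create_update import create_object
from myapp.models import Product   # illustrative import path

urlpatterns = patterns('',
    url(r'^manage/products/add/$', create_object,
        {'model': Product, 'post_save_redirect': ''},
        name='create-product'),
)

# template.html -- no hardcoded path:
# <a href="{% url create-product %}">Add a product</a>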
{ "language": "en", "url": "https://stackoverflow.com/questions/47207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Is there a good Fogbugz client for Mac OS X? And/or: do I need one? I've recently started using FogBugz for my hobby projects, and I'm very happy with things so far. Having read more about it, especially the evidence-based scheduling, I'd like to start using it for my PhD as well. (Heh; something tells me my supervisors won't be opening tickets for me, though.) Last night I stumbled onto TimePost, which looks like a tidy app that doesn't do much but could be a real bonus to logging my time in FogBugz effectively. I tried looking around for similar apps but came up a little empty-handed. Are there any other FogBugz clients that you've used and recommend for Mac OS X? Or are you happy with the web interface? A: The official answer is no, there is not a dedicated Mac client, other than Safari :) There's a command line version that runs on Linux, Windows, and Mac. There are also plans for an iPhone version although I'm not technically supposed to announce features before they are done or even spec'd so pretend I didn't say that. A: I recently spotted this one which looks quite nice for additions: http://manicwave.com/products/tickets A: I'm happy with using the web interface. I've used Fluid to create a custom browser for it, and even gotten some help making a pretty icon. A: We recently released a new Fogbugz client software for Mac, maybe you are interested to give it a try, http://lithoglyph.com/ladybugz/ A: I remember reading that there was a client in development, and I believe they're still looking for beta testers. See this URL http://support.fogcreek.com/default.asp?fogbugz.4.24403.0 A: Shameless plug here, but you might wanna check out QuickBugz --- it is a lightweight program that integrates into your status menu. http://www.quickbugzapp.com A: I have been very happily using the Tickets program from Manic Wave for a few weeks now. it provides a very fluid experience. I am using it in a pressure cooker of doing a competition entry in my odd hours around my day job. Tickets makes it incredibly easy to create lots of small cases and juggle them between different milestones. I particularly like its outline view which helps when doing task breakdowns into sub-tasks. Being a long way from the Fogbugz servers, in Western Australia, the speed of a searchable local interface is very much appreciated. The UI has a lot of nice little Macisms such as mouse over a milestone and see the hours summarized. Support has also been very prompt and comprehensive. A: I don't think there is any other such Mac tool. I've never found the web interface too bad personally. A: I don't know of any native tool, but like Matt I am pretty happy with the web interface. A: The beta of Safari 4 and SSB feature is a pretty good option... A: I found using a Mac browser w/ the screen snapshot and search engine add-on to be very useful. I think what you are saying is that it can be hard to edit your timesheets, but that is part of the web design. A: I've just released Bee, which is a Mac client for FogBugz. (It also pulls in your tasks from GitHub and JIRA.) It offers several benefits over the web interface and is designed to be simple, fast and elegant. You can check it out at: http://www.neat.io/bee/fogbugz.html
{ "language": "en", "url": "https://stackoverflow.com/questions/47210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: "Cannot change DataType of a column once it has data" error in Visual Studio 2005 DataSet Designer I've got a DataSet in Visual Studio 2005. I need to change the datatype of a column in one of the datatables from System.Int32 to System.Decimal. When I try to change the datatype in the DataSet Designer I receive the following error: Property value is not valid. Cannot change DataType of a column once it has data. From my understanding, this should be changing the datatype in the schema for the DataSet. I don't see how there can be any data to cause this error. Does anyone have any ideas? A: Since filled DataTables do not entertain a change in the schema, a workaround can be applied as follows: * *Make a new DataTable *Use the DataTable's Clone method to create a DataTable with the same structure and make changes to that column *In the end use the DataTable's ImportRow method to populate it with data. HTH A: I get the same error, but only for columns with their DefaultValue set to any value (except the default <DBNull>). So the way I got around this issue was: * *Column DefaultValue : Type in <DBNull> *Save and reopen the dataset A: I have found a workaround. If I delete the data column and add it back with the different data type, then it will work. A: For those finding this via Google who have a slightly different case where the table already has data and you add a new column (like me): if you create the column and set the datatype in separate statements, you also get this same exception. However, if you do it in the same statement, it works fine. So, instead of this: var column = myTable.Columns.Add("Column1"); column.DataType = typeof(int); //nope, exception! Do this: var column = myTable.Columns.Add("Column1", typeof(int)); A: * *Close the DataSet in the visual designer *Right click the dataset, choose Open With... *Choose XML (Text) Editor *Find the column in the XML, in your dataset it will look something like: <xs:element name="DataColumn1" msprop:Generator_ColumnVarNameInTable="columnDataColumn1" msprop:Generator_ColumnPropNameInRow="DataColumn1" msprop:Generator_ColumnPropNameInTable="DataColumn1Column" msprop:Generator_UserColumnName="DataColumn1" type="xs:int" minOccurs="0" /> * *Change the type="xs:int" to type="xs:decimal" *Save and close the XML editor *You may need to right click the DataSet again and choose Run Custom Tool A: It's an old question, but it can still happen in VS 2019. Solution: * *Change the DefaultValue to <DBNull> *Save the Dataset *Close the DataSet Designer *Re-Open the Designer Now it should be possible to change the type without any problem.
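As a hedged sketch of the Clone/ImportRow workaround from the first answer (the table and column names are invented, and this targets an in-memory DataTable rather than the typed, designer-generated one):

// Original table whose "Amount" column is Int32 but needs to become Decimal.
DataTable fixedTable = originalTable.Clone();             // copies the schema only, no rows
fixedTable.Columns["Amount"].DataType = typeof(decimal);  // legal because the clone is empty

foreach (DataRow row in originalTable.Rows)
{
    fixedTable.ImportRow(row);  // values are converted as they are copied across
}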
{ "language": "en", "url": "https://stackoverflow.com/questions/47217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: How would you migrate hundreds of MS Access databases to a central service? We have literally 100's of Access databases floating around the network. Some with light usage and some with quite heavy usage, and some no usage whatsoever. What we would like to do is centralise these databases onto a managed database and retain as much as possible of the reports and forms within them. The benefits of doing this would be to have some sort of usage tracking, and also the ability to pay more attention to some of the important decentralised data that is stored in these apps. There is no real constraints on RDBMS (Oracle, MS SQL server) or the stack it would run on (LAMP, ASP.net, Java) and there obviously won't be a silver bullet for this. We would like something that can remove the initial grunt work in an automated fashion. A: We upsize (either using the upsize wizard or by hand) users to SQL server. It's usually pretty straight forward. Replace all the access tables with linked tables to the sql server and keep all the forms/reports/macros in access. The investment in access isn't lost and the users can keep going business as usual. You get reliability of sql server and centralized backups. Keep in mind - we’ve done this for a few large access databases, not hundreds. I'd do a pilot of a few dozen and see how it works out. UPDATE: I just found this, the sql server migration assitant, it might be worth a look: http://www.microsoft.com/sql/solutions/migration/default.mspx Update: Yes, some refactoring will be necessary for poorly designed databases. As for how to handle access sprawl? I've run into this at companies with lots of technical users (engineers esp., are the worst for this... and excel sprawl). We did an audit - (after backing up) deleted any databases that hadn't been touched in over a year. "Owners" were assigned based the location &/or data in the database. If the database was in "S:\quality\test_dept" then the quality manager and head test engineer had to take ownership of it or we delete it (again after backing it up). A: Upsizing an Access application is no magic bullet. It may be that some things will be faster, but some types of operations will be real dogs. That means that an upsized app has to be tested thoroughly and performance bottlenecks addressed, usually by moving the data retrieval logic server-side (views, stored procedures, passthrough queries). It's not really an answer to the question, though. I don't think there is any automated answer to the problem. Indeed, I'd say this is a people problem and not a programming problem at all. Somebody has to survey the network and determine ownership of all the Access databases and then interview the users to find out what's in use and what's not. Then each app should be evaluated as to whether or not it should be folded into an Enterprise-wide data store/app, or whether its original implementation as a small app for a few users was the better approach. That's not the answer you want to hear, but it's the right answer precisely because it's a people/management problem, not a programming task. A: Oracle has a migration workbench to port MS Access systems to Oracle Application Express, which would be worth investigating. http://apex.oracle.com A: So? Dedicate a server to your Access databases. Now you have the benefit of some sort of usage tracking, and also the ability to pay more attention to some of the important decentralised data that is stored in these apps. 
This is what you were going to do anyway, only you wanted to use a different database engine instead of NTFS. And now you have to force the users onto your server. Well, you can encourage them by telling them that you aren't going to overwrite their data with old backups anymore, because now you will own the data, and you won't do that anymore. Also, you can tell them that their applications will run faster now, because you are going to exclude the folder from on-access virus scanning (you don't do that to your other databases, which is why they are full of sql-injection malware, but these databases won't be exposed to the internet), and planning to turn packet signing off (you won't need that on a dedicated server: it's only for people who put their file-share on their domain-server). Easy upgrade path, improved service to users, greater centralization and control for IT. Everyone's a winner. A: Further to David Fenton's comments Your administrative rule will be something like this: If the data that is in the database is just being used by one user, for their own work (alone), then they can keep it in their own network share. If the data that is in the database is for being used by more than one person (even if it is only two), then that database must go on a central server and go under IT's management (backups, schema changes, interfaces, etc.). This is because, someone experienced needs to coordinate the whole show or we will risk the time/resources of the next guy down the line.
{ "language": "en", "url": "https://stackoverflow.com/questions/47225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What is the .MSPX file extension? I've noticed a lot of Microsoft sites have the *.MSPX extension. While I'm very familiar with ASP.NET, I've not seen this extension before. Does anyone know what this identifies? A: A few internet searches led me to http://www.microsoft.com/backstage/bkst_column_46.mspx, but it was a dead link. Fortunately, it was archived on the Wayback Machine and you can read it here: http://web.archive.org/web/20040803120105/http://www.microsoft.com/backstage/bkst_column_46.mspx The .MSPX extension is part of the "Microsoft Network Project," which, according to the article above, is designed to give Microsoft's sites a consistent look-and-feel worldwide, as well as keep the design of the site separate from the content. Here's the gist of the article: The presentation framework includes a custom Web handler built in ASP.NET. Pages that use the presentation framework have the .mspx filename extension, which is registered in Microsoft Internet Information Services (IIS) on the Web servers. When one of the Microsoft.com Web servers receives a request for an .mspx page, this custom Web handler intercepts that call and passes it to the framework for processing. The framework first checks to see whether the result is cached. If it is, the page is rendered immediately. If the page is not cached, the handler looks up the URL for that page in the table of contents provided by the site owner (see below) to determine where the XML content for the page is stored. The framework then checks to see if the XML is cached, and either returns the cached content or retrieves the XML from the data store identified in the table of contents file. Within the file that holds the content for the page, XML tags identify the content template to be used. The framework retrieves the appropriate template and uses a series of XSLTs to assemble the page, including the masthead, the footer, and the primary navigational column, finally rendering the content within the content pane. A: I think it's an XML based template system that outputs HTML. I think it's internal to MS only. A: Well, a little googling found this: The presentation framework includes a custom Web handler built in ASP.NET. Pages that use the presentation framework have the .mspx filename extension, which is registered in Microsoft Internet Information Services (IIS) on the Web servers. When one of the Microsoft.com Web servers receives a request for an .mspx page, this custom Web handler intercepts that call and passes it to the framework for processing." I'd like to find out more info though. A: I love you guys - I was asking myself many times why MS uses .mspx and what it is at all. :) At the time I couldn't find any information quickly and assumed it would just be something on top of ASP.NET - or maybe not even that, because you should be able to assign the same ASP.NET CGI DLL to .mspx easily too. ;) But, surely, it could be anything - even a "special" CGI of its own (completely separate from ASP.NET) that processes the request with much better caching, easier editing, and so on. The end of the story was that I came around to the view that maybe it's not important to know what .mspx exactly is. :)
{ "language": "en", "url": "https://stackoverflow.com/questions/47235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How can I generate database tables from C# classes? Does anyone know a way to auto-generate database tables for a given class? I'm not looking for an entire persistence layer - I already have a data access solution I'm using, but I suddenly have to store a lot of information from a large number of classes and I really don't want to have to create all these tables by hand. For example, given the following class: class Foo { private string property1; public string Property1 { get { return property1; } set { property1 = value; } } private int property2; public int Property2 { get { return property2; } set { property2 = value; } } } I'd expect the following SQL: CREATE TABLE Foo ( Property1 VARCHAR(500), Property2 INT ) I'm also wondering how you could handle complex types. For example, in the previously cited class, if we changed that to be : class Foo { private string property1; public string Property1 { get { return property1; } set { property1 = value; } } private System.Management.ManagementObject property2; public System.Management.ManagementObject Property2 { get { return property2; } set { property2 = value; } } } How could I handle this? I've looked at trying to auto-generate the database scripts by myself using reflection to enumerate through each class' properties, but it's clunky and the complex data types have me stumped. A: Try out my CreateSchema extension method for objects at http://createschema.codeplex.com/ It returns a string for any object containing CREATE TABLE scripts. A: I think for complex data types, you should extend them by specifying a ToDB() method which holds their own implementation for creating tables in the DB, and this way it becomes auto-recursive. A: As of 2016 (I think), you can use Entity Framework 6 Code First to generate SQL schema from poco c# classes or to use Database First to generate c# code from sql. Code First to DB walkthrough A: @Jonathan Holland Wow, I think that's the most raw work I've ever seen put into a StackOverflow post. Well done. However, instead of constructing DDL statements as strings, you should definitely use the SQL Server Management Objects classes introduced with SQL 2005. David Hayden has a post entitled Create Table in SQL Server 2005 Using C# and SQL Server Management Objects (SMO) - Code Generation that walks through how to create a table using SMO. The strongly-typed objects make it a breeze with methods like: // Create new table, called TestTable Table newTable = new Table(db, "TestTable"); and // Create a PK Index for the table Index index = new Index(newTable, "PK_TestTable"); index.IndexKeyType = IndexKeyType.DriPrimaryKey; VanOrman, if you're using SQL 2005, definitely make SMO part of your solution. A: It's really late, and I only spent about 10 minutes on this, so its extremely sloppy, however it does work and will give you a good jumping off point: using System; using System.Collections.Generic; using System.Text; using System.Reflection; namespace TableGenerator { class Program { static void Main(string[] args) { List<TableClass> tables = new List<TableClass>(); // Pass assembly name via argument Assembly a = Assembly.LoadFile(args[0]); Type[] types = a.GetTypes(); // Get Types in the assembly. foreach (Type t in types) { TableClass tc = new TableClass(t); tables.Add(tc); } // Create SQL for each table foreach (TableClass table in tables) { Console.WriteLine(table.CreateTableScript()); Console.WriteLine(); } // Total Hacked way to find FK relationships! 
Too lazy to fix right now foreach (TableClass table in tables) { foreach (KeyValuePair<String, Type> field in table.Fields) { foreach (TableClass t2 in tables) { if (field.Value.Name == t2.ClassName) { // We have a FK Relationship! Console.WriteLine("GO"); Console.WriteLine("ALTER TABLE " + table.ClassName + " WITH NOCHECK"); Console.WriteLine("ADD CONSTRAINT FK_" + field.Key + " FOREIGN KEY (" + field.Key + ") REFERENCES " + t2.ClassName + "(ID)"); Console.WriteLine("GO"); } } } } } } public class TableClass { private List<KeyValuePair<String, Type>> _fieldInfo = new List<KeyValuePair<String, Type>>(); private string _className = String.Empty; private Dictionary<Type, String> dataMapper { get { // Add the rest of your CLR Types to SQL Types mapping here Dictionary<Type, String> dataMapper = new Dictionary<Type, string>(); dataMapper.Add(typeof(int), "BIGINT"); dataMapper.Add(typeof(string), "NVARCHAR(500)"); dataMapper.Add(typeof(bool), "BIT"); dataMapper.Add(typeof(DateTime), "DATETIME"); dataMapper.Add(typeof(float), "FLOAT"); dataMapper.Add(typeof(decimal), "DECIMAL(18,0)"); dataMapper.Add(typeof(Guid), "UNIQUEIDENTIFIER"); return dataMapper; } } public List<KeyValuePair<String, Type>> Fields { get { return this._fieldInfo; } set { this._fieldInfo = value; } } public string ClassName { get { return this._className; } set { this._className = value; } } public TableClass(Type t) { this._className = t.Name; foreach (PropertyInfo p in t.GetProperties()) { KeyValuePair<String, Type> field = new KeyValuePair<String, Type>(p.Name, p.PropertyType); this.Fields.Add(field); } } public string CreateTableScript() { System.Text.StringBuilder script = new StringBuilder(); script.AppendLine("CREATE TABLE " + this.ClassName); script.AppendLine("("); script.AppendLine("\t ID BIGINT,"); for (int i = 0; i < this.Fields.Count; i++) { KeyValuePair<String, Type> field = this.Fields[i]; if (dataMapper.ContainsKey(field.Value)) { script.Append("\t " + field.Key + " " + dataMapper[field.Value]); } else { // Complex Type? script.Append("\t " + field.Key + " BIGINT"); } if (i != this.Fields.Count - 1) { script.Append(","); } script.Append(Environment.NewLine); } script.AppendLine(")"); return script.ToString(); } } } I put these classes in an assembly to test it: public class FakeDataClass { public int AnInt { get; set; } public string AString { get; set; } public float AFloat { get; set; } public FKClass AFKReference { get; set; } } public class FKClass { public int AFKInt { get; set; } } And it generated the following SQL: CREATE TABLE FakeDataClass ( ID BIGINT, AnInt BIGINT, AString NVARCHAR(255), AFloat FLOAT, AFKReference BIGINT ) CREATE TABLE FKClass ( ID BIGINT, AFKInt BIGINT ) GO ALTER TABLE FakeDataClass WITH NOCHECK ADD CONSTRAINT FK_AFKReference FOREIGN KEY (AFKReference) REFERENCES FKClass(ID) GO Some further thoughts...I'd consider adding an attribute such as [SqlTable] to your classes, that way it only generates tables for the classes you want. Also, this can be cleaned up a ton, bugs fixed, optimized (the FK Checker is a joke) etc etc...Just to get you started. A: For complex types, you can recursively convert each one that you come across into a table of its own and then attempt to manage foreign key relationships. You may also want to pre-specify which classes will or won't be converted to tables. As for complex data that you want reflected in the database without bloating the schema, you can have one or more tables for miscellaneous types. 
This example uses as many as 4: CREATE TABLE MiscTypes /* may have to include standard types as well */ ( TypeID INT, TypeName VARCHAR(...) ) CREATE TABLE MiscProperties ( PropertyID INT, DeclaringTypeID INT, /* FK to MiscTypes */ PropertyName VARCHAR(...), ValueTypeID INT /* FK to MiscTypes */ ) CREATE TABLE MiscData ( ObjectID INT, TypeID INT ) CREATE TABLE MiscValues ( ObjectID INT, /* FK to MiscData*/ PropertyID INT, Value VARCHAR(...) ) A: Also... maybe you can use some tool such as Visio (not sure if Visio does this, but I think it does) to reverse engineer your classes into UML and then use the UML to generate the DB Schema... or maybe use a tool such as this http://www.tangiblearchitect.net/visual-studio/ A: I know you're looking for an entire persistence layer, but NHibernate's hbm2ddl task can do this almost as a one-liner. There is a NAnt task available to call it which may well be of interest. A: Subsonic is also another option. I often use it to generate entity classes that map to a database. It has a command line utility that lets you specify tables, types, and a host of other useful things A: There is a free app, Schematrix which generates classes from DB, check out if does the reverse too :) http://www.schematrix.com/products/schemacoder/download.aspx A: You can do the opposite, database table to C# classes here: http://pureobjects.com/dbCode.aspx A: Try DaoliteMappingTool for .net. It can help you generate the classes. Download form Here
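Since one of the answers above mentions Entity Framework Code First but gives no example, here is a minimal, hedged sketch of what generating a table from the question's Foo class could look like with EF 6 (the connection string, the added Id key, and the exact column types EF picks are assumptions):

using System.Data.Entity;

public class Foo
{
    public int Id { get; set; }            // becomes the primary key by convention
    public string Property1 { get; set; }  // typically maps to NVARCHAR(MAX)
    public int Property2 { get; set; }     // maps to INT
}

public class FooContext : DbContext
{
    public DbSet<Foo> Foos { get; set; }
}

// Under the default initializer, first use of the context creates the database
// (and the Foos table):
// using (var db = new FooContext())
// {
//     db.Foos.Add(new Foo { Property1 = "hello", Property2 = 42 });
//     db.SaveChanges();
// }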
{ "language": "en", "url": "https://stackoverflow.com/questions/47239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63" }
Q: Is there a Firefox 3 addon similar to View Source Chart? Before I upgraded to Firefox 3 I used to constantly use the View Source Chart Firefox Addon which shows the source HTML in a very organized, graphical form. Unfortunately, this addon is only for Firefox 2 and the beta version for Firefox 3 now costs $10 on the author's site. Anyone know of a similar addon that works for Firefox 3? (of course, I might indeed pay $10 for this, but first want to ask around if there isn't anything better and free, as the version for Firefox 2 had its limitations and I don't really want to pay $10 for something in beta that I can't test out before paying for it.) A: You can try Nightly Tester Tools. It overrides the add-on compatibility check. Using this tool I managed to bring all of my fav extensions from FF2 to FF3. A: Is Firebug not sufficient? A: View formatted source is kinda similar. It uses tree controls rather than pretty colour blocks, though. A: Try Chris Pederick's Web Developer Toolbar. A: You could always try Firebug. It sounds like it does a similar thing, plus more :) A: I had the same problem... you can use the free version (2.5.0503) - it's compatible with Firefox 3 and it works. The web site says it doesn't have full functionality, but I don't know which functionality is missing.
{ "language": "en", "url": "https://stackoverflow.com/questions/47248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do you set up a Python WSGI server under IIS? I work in a Windows environment and would prefer to deploy code to IIS. At the same time I would like to code in Python. Having read that IIS can run FastCGI applications, I went to the IIS site where it describes in detail how to get PHP up and running but not much about anything else. Does anyone have experience getting a Python framework running under IIS using something other than plain old CGI? If so, can you explain or direct me to some instructions on setting this up? A: We can use the iiswsgi framework to set up WSGI on IIS, since it is compatible with the IIS web server's FastCGI protocol. It's bundled with distutils for building, distributing and installing packages with the help of Microsoft Web Deploy and Web Platform Installer. For more info, refer to the following link: Serving Python WSGI applications natively from IIS A: There shouldn't be any need to use FastCGI. There exists an ISAPI extension for WSGI. A: Microsoft itself develops wfastcgi (source code) to host Python code on IIS.
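For orientation, here is a minimal WSGI application of the kind any of these hosting options (wfastcgi over FastCGI, the ISAPI-WSGI extension, or iiswsgi) would ultimately serve. The module and callable names are assumptions; with wfastcgi, for instance, you would point its handler setting at something like myapp.application:

# myapp.py -- a plain WSGI callable that any of the IIS bridges can host
def application(environ, start_response):
    body = b"Hello from Python behind IIS\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]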
{ "language": "en", "url": "https://stackoverflow.com/questions/47253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: Storing Windows passwords I'm writing (in C# with .NET 3.5) an administrative application which will poll multiple Windows systems for various bits of data. In many cases it will use WMI, but in some cases it may need to read remote registry or remotely execute some command or script on the polled system. This polling will happen at repeating intervals - usually nightly, but can be configured to happen more (or less) frequently. So the poll could happen as often as every 10 minutes or as rarely as once a month. It needs to happen in an automated way, without any human intervention. These functions will require admin-level access to the polled systems. Now, I expect that in most use cases, there will be a domain, and the polling application can run as a service with Domain Admin (or equivalent) privileges, which means I do not have to worry about storing passwords - the admin setting up the app will define the service's username/password via standard Windows mechanisms. But there's always a few black sheep out there. The program may run in nondomain environments, or in cases where some polled systems are not members of the domain. In these cases we will have to define a username and password, store them securely, then invoke this user/pass pair at the time we poll that system. So keep in mind - in this case the program being written is the user who sends password to the authenticating system. I am not sure whether I will need to use a reversible hash which I then decrypt to plaintext at time of use, or if there is some Windows mechanism which would allow me to store and then reuse the hash only. Obviously the second mechanism is preferable; I'd like my program to either never know the password's plaintext value, or know it for the shortest amount of time possible. I need suggestions for smart and secure ways to accomplish this. Thanks for looking! A: The answer is here: How to store passwords in Winforms application? A: Well it seems that your program needs to impersonate a user other than the context under which it is already running. Although, it does look like a pretty automated process, but if it's not, can you simply not ask the administrator to put in username and password at the time this 'black-sheep' computer is being polled?
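One concrete option for the non-domain case, if you do end up storing a reversible secret: the Windows Data Protection API (DPAPI), exposed in .NET through System.Security.Cryptography.ProtectedData, keeps the plaintext encrypted at rest under a key tied to the user or machine account. A hedged sketch (the entropy bytes are illustrative, and the scope choice assumes the same service account protects and unprotects the value):

using System.Security.Cryptography;
using System.Text;

// Protect: done once, when the administrator enters the credential.
byte[] plaintext = Encoding.UTF8.GetBytes(password);
byte[] entropy = { 12, 7, 42, 90, 3 };   // optional extra secret, illustrative only
byte[] protectedBytes = ProtectedData.Protect(
    plaintext, entropy, DataProtectionScope.CurrentUser);

// Unprotect: done just before the credential is used; discard the plaintext promptly.
byte[] recovered = ProtectedData.Unprotect(
    protectedBytes, entropy, DataProtectionScope.CurrentUser);
string recoveredPassword = Encoding.UTF8.GetString(recovered);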
{ "language": "en", "url": "https://stackoverflow.com/questions/47262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Are there any good oracle podcasts? Are there any good oracle podcasts around? The only ones I've found is produced by oracle corp, and as such are little more than advertising pieces pushing their technology of the moment. I'm specifically interested in Database technologies. A: Here's a list: http://www.oracle.com/podcasts/index.html A: Oracle Podcast Center Green Enterprise Podcasts Host: Paul Salinger, VP Marketing Listen to discussions with customers, partners, and Oracle green experts, exploring topics that can help Oracle customers better understand how Oracle products can support their sustainability initiatives and enable a green enterprise. Oracle AppCasts Host: Cliff Godwin, SVP, Applications Technology Tune into "Live with Cliff" to hear from application technology experts, product and industry experts, and customers about what's saving Oracle customers time and money when using Oracle E-Business Suite, PeopleSoft, and JD Edwards applications. Oracle Customer SuccessCasts Tune into Oracle Customer SuccessCasts, where customers describe how Oracle helps them to run their businesses more successfully. Oracle Database Podcasts Tune into this podcast series to get the latest information on Oracle Database from Oracle technical experts. Oracle Fusion Middleware Radio Host: Rick Schultz, VP, Product Marketing for Oracle Fusion Middleware & Security Products Tune into this podcast series about Oracle Fusion Middleware to hear about Oracle's middleware product strategy and explore what middleware means to your business—growth, agility, insight, and reduced risk. Oracle Magazine Feature Casts Tune into conversations with Oracle Magazine editors, authors, and Oracle subject matter experts about featured articles in Oracle Magazine. Go beyond print with additional insight into Oracle products, technologies, customers, and more. Oracle PartnerNetwork (OPN) PartnerCast Tune in and learn how to grow your business with Oracle, exclusively for Oracle PartnerNetwork members. Oracle Technology Network Arch2Arch Podcast Host: Bob Rhubart, Manager, Architect Community, OTN Listen in as architects and other experts from across the Oracle community discuss the topics, tools, and technologies that drive the evolution of software architecture. Oracle Technology Network TechCasts Host: Justin Kestelyn, OTN Editor-in-Chief Tune into "fireside chats" with experts about new tools, technologies, and trends in application development. Oracle@Work Through Oracle@Work Video Podcasts you'll learn how Oracle customers from aerospace and automotive to travel and television address business and technical issues with the latest Oracle technology and applications solutions. Oracle@Works play like short television news magazine pieces shot on location, world wide. See Oracle customers in action, first hand, with Oracle@Work Video Podcasts. Profit Online Executive Briefing Audiocasts Every month, Profit Online presents conversations with Oracle executives, customers, and partners, discussing developments in their businesses and trends in their industries. Tune into and stay up to date on how IT and business leaders are expanding into new markets, improving business processes, and creating the future of the enterprise. Reference: Oracle Podcasts A: Asking for an "Oracle podcast" that isn't going to be a product marketing focused is too broad. It's almost like asking for a "Microsoft Podcast." Anything you get is going to be broad and shallow. An introduction, nothing in depth. What are you looking for? 
DB, Middleware, Linux/OS, ERP, CRM, BI, Information Security, Identity Management, Integration, PL/SQL, etc. For what kind of audience? Developer, Sysadmin, DBA, Apps DBA (very different, believe it or not), Technology Manager, Accountant, Finance Director, White Hat security specialist, etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/47279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What's the best way to pass data into a Flex chart from a Ruby on Rails application? Quite a few methods exist for passing data into a Flex binary from a Rails application. Right now, I'm using the old e4x resultFormat with a xml.erb template. I've done AMF before, but I feel like inlining parameters into the embed itself is a better solution because you don't have to wait for the browser to load a swf binary and the binary to make a web request. Taking all of that into account, what is the best practice for rendering a Flex widget with a Rails back-end these days? A: Sounds to me like you shouldn't use Flex at all. Flex is great for writing applications that load and display data, but is a huge overhead if you just want to display charts. If you're worried about delivering the SWF and the data in two separate responses, you should be worrying about the 200-300 K overhead of the Flex framework. Go with something more lightweight like Flot and deliver the JavaScript code that generates the charts along with the data. I don't think Flex is a suitable platform for what you want. A: If the chart is just a graphic you are placing amidst normal html content, your solution of custom compiling the swf may be a good one. Of course you should still explore alternatives like Theo's suggestion. If your whole application is flash, it's a toss-up. If it is fastest to simply import a swf, go ahead. However, if you need to make things any more interactive, use the data elsewhere, or fetch new data, it would probably be best to use xml templates as you are, or the ActiveRecord::Base#to_xml method if you're lazy. A: It depends, are you going to want the charts to be 'live', as in get updated in real-time when data changes? If so then using AMF with a Flex native RemoteObject gives you plenty of polling options, you can also just use a simpler Flex native HTTPService. Read about them both here: Flex Actionscript 3.0 Documentation
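As a small, hedged sketch of the XML route mentioned above (the controller, model, and URL are invented, and the code follows Rails 2-era idioms to match the question's vintage):

# app/controllers/charts_controller.rb
class ChartsController < ApplicationController
  def sales_data
    @points = SalesFigure.find(:all, :limit => 50)
    respond_to do |format|
      format.xml { render :xml => @points.to_xml(:root => 'points') }
    end
  end
end

# On the Flex side, an HTTPService with resultFormat="e4x" pointed at
# /charts/sales_data.xml would receive this XML when the SWF loads.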
{ "language": "en", "url": "https://stackoverflow.com/questions/47297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: GODI installation issue I'm trying to install GODI on linux (Ubuntu). It's a library management tool for the ocaml language. I've actually installed this before --twice, but awhile ago-- with no issues --that I can remember-- but this time I just can't figure out what I'm missing. $ ./bootstrap --prefix /home/nlucaroni/godi $ ./bootstrap_stage2 .: 1: godi_confdir: not found Error: Command fails with code 2: /bin/sh Failure! I had added the proper directories to the path, and they show up with a quick echo $path, and godi_confdir reported as being: /home/nlucaroni/godi/etc (...and the directory exists, with the godi.conf file present). So, I can't figure out why ./bootstrap_stage2 isn't working. A: What is the output of which godi_confdir? P.S. I remember having this exact same problem, but I don't remember precisely how I fixed it. A: Hey Chris, I just figured it out. Silly mistake. It was just a permission issue, running everything from /tmp/ worked fine --well after enabling GODI_BASEPKG_PCRE in godi.conf. I had been running it from my home directory, you forget simple things like that at 3:00am. -- Actually I'm having another problem. Installing conf-opengl-6: GODI can't seen to find the GL/gl.h file, though I can --you can see that it is Checking the suggestion. > ===> Configuring for conf-opengl-6 > Checking the suggestion > Include=/usr/include/GL/gl.h Library=/<GLU+GL> > Checking /usr: > Include=/usr/include/GL/gl.h Library=/usr/lib/<GLU+GL> > Checking /usr: > Include=/usr/local/include/GL/gl.h Library=/usr/local/lib/<GLU+GL> > Checking /usr/local: > Include=/usr/local/include/GL/gl.h Library=/usr/local/lib/<GLU+GL> > Exception: Failure "Cannot find library". > Error: Exec error: File /home/nlucaroni/godi/build/conf/conf-opengl/./../../mk/bsd.pkg.mk, line 1022: Command returned with non-zero exit code > Error: Exec error: File /home/nlucaroni/godi/build/conf/conf-opengl/./../../mk/bsd.pkg.mk, line 1375: Command returned with non-zero exit code ### Error: Command fails with code 1: godi_console edit - Ok, this is fixed too... just needed GLU, weird since the test configuration option said everything was fine.
{ "language": "en", "url": "https://stackoverflow.com/questions/47309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Top tips for secure web applications I am looking for easy steps that are simple and effective in making a web application more secure. What are your top tips for secure web applications, and what kind of attack will they stop? A: Do not trust user input. Validation of expected data types and formatting is essential to avoiding SQL injection and Cross-Site Scripting (XSS) attacks. A: * *Escape user provided content to avoid XSS attacks. *Use parameterised SQL or stored procedures to avoid SQL injection attacks. *Run the webserver as an unprivileged account to minimise attacks on the OS. *Set the webserver directories to an unprivileged account, again, to minimise attacks on the OS. *Set up unprivileged accounts on the SQL server and use them for the application to minimise attacks on the DB. For more in-depth information, there is always the OWASP Guide to Building Secure Web Applications and Web Services A: Microsoft Technet has an excellent article: Ten Tips for Designing, Building, and Deploying More Secure Web Applications Here are the topics for the tips answered in that article: * *Never Directly Trust User Input *Services Should Have Neither System nor Administrator Access *Follow SQL Server Best Practices *Protect the Assets *Include Auditing, Logging, and Reporting Features *Analyze the Source Code *Deploy Components Using Defense in Depth *Turn Off In-Depth Error Messages for End Users *Know the 10 Laws of Security Administration *Have a Security Incident Response Plan A: Some of my favourites: * *Filter Input, Escape Output to help guard against XSS or SQL injection attacks *Use prepared statements for database queries (SQL injection attacks) *Disable unused user accounts on your server to prevent brute force password attacks *Remove Apache version info from HTTP header (ServerSignature=Off, ServerTokens=ProductOnly) *Run your web server in a chroot jail to limit damage if compromised A: OWASP is your friend. Their Top Ten List of web application security vulnerabilities includes a description of each problem and how to defend against it. The site is a good resource for learning more about web application security and is a wealth of tools and testing techniques as well. A: Set the secure flag on cookies for SSL applications. Otherwise there is always a hijacking attack that is much easier to conduct than breaking the crypto. This is the essence of CVE-2002-1152.
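As a hedged illustration of the parameterised-SQL tip above (the connection string, table, and column names are invented; the same idea applies in any data-access library):

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "SELECT Id, DisplayName FROM Users WHERE Email = @email", conn))
{
    // The user-supplied value travels as data, never concatenated into the SQL text,
    // so a value like "x'; DROP TABLE Users;--" cannot change the query's structure.
    cmd.Parameters.AddWithValue("@email", userSuppliedEmail);
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine(reader.GetString(1));
        }
    }
}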
{ "language": "en", "url": "https://stackoverflow.com/questions/47323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: UserControl rendering: write link to current page? I'm implementing a custom control and in this control I need to write a bunch of links to the current page, each one with a different query parameter. I need to keep existing query string intact, and add (or modify the value of ) an extra query item (eg. "page"): "Default.aspx?page=1" "Default.aspx?page=2" "Default.aspx?someother=true&page=2" etc. Is there a simple helper method that I can use in the Render method ... uhmm ... like: Page.ClientScript.SomeURLBuilderMethodHere(this,"page","1"); Page.ClientScript.SomeURLBuilderMethodHere(this,"page","2"); That will take care of generating a correct URL, maintain existing query string items and not create duplicates eg. page=1&page=2&page=3? Rolling up my own seems like such an unappealing task. A: I'm afraid I don't know of any built-in method for this, we use this method that takes the querystring and sets parameters /// <summary> /// Set a parameter value in a query string. If the parameter is not found in the passed in query string, /// it is added to the end of the query string /// </summary> /// <param name="queryString">The query string that is to be manipulated</param> /// <param name="paramName">The name of the parameter</param> /// <param name="paramValue">The value that the parameter is to be set to</param> /// <returns>The query string with the parameter set to the new value.</returns> public static string SetParameter(string queryString, string paramName, object paramValue) { //create the regex //match paramname=* //string regex = String.Format(@"{0}=[^&]*", paramName); string regex = @"([&?]{0,1})" + String.Format(@"({0}=[^&]*)", paramName); RegexOptions options = RegexOptions.RightToLeft; // Querystring has parameters... if (Regex.IsMatch(queryString, regex, options)) { queryString = Regex.Replace(queryString, regex, String.Format("$1{0}={1}", paramName, paramValue)); } else { // If no querystring just return the Parameter Key/Value if (queryString == String.Empty) { return String.Format("{0}={1}", paramName, paramValue); } else { // Append the new parameter key/value to the end of querystring queryString = String.Format("{0}&{1}={2}", queryString, paramName, paramValue); } } return queryString; } Obviously you could use the QueryString NameValueCollection property of the URI object to make looking up the values easier, but we wanted to be able to parse any querystring. A: Oh and we have this method too that allows you to put in a whole URL string without having to get the querystring out of it public static string SetParameterInUrl(string url, string paramName, object paramValue) { int queryStringIndex = url.IndexOf("?"); string path; string queryString; if (queryStringIndex >= 0 && !url.EndsWith("?")) { path = url.Substring(0, queryStringIndex); queryString = url.Substring(queryStringIndex + 1); } else { path = url; queryString = string.Empty; } return path + "?" + SetParameter(queryString, paramName, paramValue); }
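As a small usage sketch (not part of the original answer; the control, page count and link markup are assumptions), these helpers might be called from the control's Render method along these lines, preserving the current query string and only overwriting the page parameter:

// Hedged sketch: assumes SetParameterInUrl from above is in scope on the control.
protected override void Render(System.Web.UI.HtmlTextWriter writer)
{
    for (int page = 1; page <= 3; page++)
    {
        // Page.Request.RawUrl keeps the existing query string items intact.
        string href = SetParameterInUrl(Page.Request.RawUrl, "page", page);
        writer.Write("<a href=\"{0}\">{1}</a> ",
            System.Web.HttpUtility.HtmlAttributeEncode(href), page);
    }
}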
{ "language": "en", "url": "https://stackoverflow.com/questions/47329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: ASP.NET MVC Preview 5 routing ambiguity I have a problem with a sample routing with the preview 5 of asp.net mvc. In the AccountController I have 2 actions: public ActionResult Delete() public ActionResult Delete(string username) While trying to look for Account/Delete or Account/Delete?username=davide the ControllerActionInvoker throws an exception saying that the Delete request is ambiguous between my two action methods. The default route in the global.asax hasn't been changed. Shouldn't the action invoker be able to work out which method to call by looking at the parameter list? Using preview 4 I didn't have this kind of problem performing the same operation. Any idea? A: Solution found! With the introduction of the ActionNameAttribute, it's now necessary to filter manually which method to call depending on the request. This is done by the ActionSelectionAttribute. Full explanation here: http://haacked.com/archive/2008/08/29/how-a-method-becomes-an-action.aspx A: I can't say for sure why this is happening. But you might want to consider only having the Delete(string username) action and removing the parameter-less overload. Because string is nullable, my understanding is that simply calling Account/Delete will invoke the action with a null username parameter, which you can then test for at the beginning of the action method. A: What I'd do is ditch the blank Delete() and only use Delete(string username). In your URL routing you'd have something similar to "/{Controller}/{Action}/{username}/" ? If you have "/{Controller}/{Action}/{Id}/" you'd be better off doing Delete(string id) and that way just using the URL to handle this: "/Account/Delete/davide/" That said, use your default route, which should have something like an empty string as the default Id. Then in your Delete(string id) method have: public ActionResult Delete(string id) { if(string.IsNullOrEmpty(id)) return EmptyID(); // Continue normal Delete method } public ActionResult EmptyID() { // The method you were going to have on a blank delete. } That, or just wrap it up in the one method with an if {} else {}. Either way I'd just be going with the one method, defaulting your username/id to an empty string in your route, and handling it that way. If you want to contact me for further follow-up on what I mean, or whatever will help, ping me at andrew@ my domain on my info page. Edit: Ah, pretty much what Berko said anyway. I'm not sure how Named Attributes would help - so please post a comment here detailing it for other guys who find the same issues! :) A: It's ambiguous because the two controller actions respond to the same request method. You can only disambiguate overloads like that in a form-posting scenario, for example when you are submitting form data that uses HTTP POST.
{ "language": "en", "url": "https://stackoverflow.com/questions/47338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Versioning library like SQLite database library? I like how the SQLite library can be included into an application and the application has a full-fledged database to use. Similarly, is there an open source versioning library that I can include into my application so that I can save versions of files as well as do diffs and merges? A: SVN doesn't require any prerequisites on the end user's machine. You can embed SVN right into your app. To learn more about Subversion integration, visit the "Application Integration/Embedding" thread on the SVN forum. A: Answering my question myself, I recently discovered hgshelve and gitshelve, which are almost exactly what I was looking for. A: I am not entirely sure what you mean by "included in an application", as you could potentially deliver any library so long as the licensing allows. Are you referring to the fact that SQLite is small or that it is public domain? Mercurial is a similarly lightweight piece of revision control software. If you are writing your application in Python, which is likely since Python now includes sqlite3, importing features directly from Mercurial's source code should not be too difficult. Otherwise there's no shame in invoking command-line processes, though this may be clunkier. Mercurial is not public domain, but it is GPL'd. Mercurial is also my personal favorite among modern revision control systems. It's leaps ahead of CVS and Subversion, and very similar to Git although somewhat simpler to use. A: You might want to look at fossil, an SCM tool written by the author of SQLite. I don't know how easy it is to embed, but it is a single-file executable so it should be quite easy to run from within your application. Arguably, running it as a separate process might actually be better than embedding since it won't slow down your app while it does what it does. A: In my opinion Firebird is one of the best choices for embedded DB scenarios. Also Microsoft SQL Server Compact (closed source, but free) might be suitable, however it is less capable than Firebird. EDIT: I misread your question. If you don't need an RDBMS, you can try to embed SVN into your application.
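To illustrate the "invoke command-line processes" suggestion above, here is a rough, hedged sketch of driving an external hg (or fossil/svn) binary from a .NET host application; the executable name, arguments and paths are placeholders, not something the answers prescribe.

using System.Diagnostics;

static string RunVcsCommand(string workingDir, string exe, string args)
{
    // Runs e.g. hg commit in the given working copy and returns whatever
    // the tool printed to stdout, so the host app can log or parse it.
    var psi = new ProcessStartInfo(exe, args)
    {
        WorkingDirectory = workingDir,
        RedirectStandardOutput = true,
        UseShellExecute = false,
        CreateNoWindow = true
    };
    using (var p = Process.Start(psi))
    {
        string output = p.StandardOutput.ReadToEnd();
        p.WaitForExit();
        return output;
    }
}

// Example call (hypothetical paths): RunVcsCommand(@"C:\data\repo", "hg", "commit -A -m \"autosave\"");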
{ "language": "en", "url": "https://stackoverflow.com/questions/47340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do you list all triggers in a MySQL database? What is the command to list all triggers in a MySQL database? A: The command for listing all triggers is: show triggers; or you can access the INFORMATION_SCHEMA table directly by: select trigger_schema, trigger_name, action_statement from information_schema.triggers * *You can do this from version 5.0.10 onwards. *More information about the TRIGGERS table is here. A: I hope following code will give you more information. select * from information_schema.triggers where information_schema.triggers.trigger_schema like '%your_db_name%' This will give you total 22 Columns in MySQL version: 5.5.27 and Above TRIGGER_CATALOG TRIGGER_SCHEMA TRIGGER_NAME EVENT_MANIPULATION EVENT_OBJECT_CATALOG EVENT_OBJECT_SCHEMA EVENT_OBJECT_TABLE ACTION_ORDER ACTION_CONDITION ACTION_STATEMENT ACTION_ORIENTATION ACTION_TIMING ACTION_REFERENCE_OLD_TABLE ACTION_REFERENCE_NEW_TABLE ACTION_REFERENCE_OLD_ROW ACTION_REFERENCE_NEW_ROW CREATED SQL_MODE DEFINER CHARACTER_SET_CLIENT COLLATION_CONNECTION DATABASE_COLLATION A: You can use below to find a particular trigger definition. SHOW TRIGGERS LIKE '%trigger_name%'\G or the below to show all the triggers in the database. It will work for MySQL 5.0 and above. SHOW TRIGGERS\G A: For showing a particular trigger in a particular schema you can try the following: select * from information_schema.triggers where information_schema.triggers.trigger_name like '%trigger_name%' and information_schema.triggers.trigger_schema like '%data_base_name%' A: You can use MySQL Workbench: Connect to the MySQL Server Select DB * *tables *on the table name line click the edit icon (looks like a work tool) *in the table edit window - Click the tab "Triggers" *on the Triggers list click th eTrigger name to get its source code A: This sentence could contribute to solving the problem: select LOWER(concat('delimiter |', '\n', 'create trigger %data_base_name%.', TRIGGER_NAME, '\n', ' ', ACTION_TIMING, ' ', EVENT_MANIPULATION, ' on %data_base_name%.', EVENT_OBJECT_TABLE, ' for each row', '\n', ACTION_STATEMENT, '\n', '|')) AS TablaTriggers from information_schema.triggers where information_schema.triggers.trigger_schema like '%data_base_name%' A: USE dbname; SHOW TRIGGERS OR SHOW TRIGGERS FROM dbname;
{ "language": "en", "url": "https://stackoverflow.com/questions/47363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "124" }
Q: SQL Compare-Like tool for Oracle? We're a .NET team which uses the Oracle DB for a lot of reasons that I won't get into. But deployment has been a bitch. We are manually keeping track of all the changes to the schema in each version, by keeping a record of all the scripts that we run during development. Now, if a developer forgets to check in his script to source control after he ran it - which is not that rare - at the end of the iteration we get a great big headache. I hear that SQL Compare by Red-Gate might solve this kind of issue, but it only has SQL Server support. Does anybody know of a similar tool for Oracle? I've been unable to find one. A: TOAD is a great generic tool for Oracle development and I think a similar feature is in the basic version. You can download a trial version (make sure you don't get the old free version of TOAD, which is about 4 years old). If you don't want to buy a tool, and you need something less flashy, you could roll your own quite easily. I just found Schema Compare Tool for Oracle which looks very simple, and has a nice baseline concept. This is very handy if you want to track changes since the last code check-in. This way you discover changes that may have been made to multiple DBs by hand, but not documented. PS: The "SQL Compare by Red-Gate" demo looked very nice indeed... however the voice over cracked me up... sounded like a BBC documentary. A: OraPowerTools will do the job. There is also a "Diff Wizard" in Oracle SQL Developer, but I haven't used it yet. A: Red Gate Schema Compare for Oracle has now been released! http://www.red-gate.com/products/schema_compare_for_oracle/index.htm There is a 28-day fully functional free trial. Please give it a go and let us know your feedback! A: Hitchhiker, If you're willing to spend some money, TOAD has "compare schemas" functionality which should do what you're after. It'll report the differences and produce a migration script to bring one into line with the other. I've never used the script, so I can't vouch for it, but I have used it to make sure our build scripts are complete. A: There are various tools out there that you can use. I haven't used any of them myself, so I've got no comments to make about them, but another "trick" that you can use is to create a trigger on DDL events, so you can basically capture (to a table, or a log file, or whatever) any changes made between deployments. DDL Triggers A: Mark - I would like to be able to easily synchronize two database schemas. Specifically, this demo looks like heaven to me. A: Check out Oracle Enterprise Manager Change Management Pack, it's an Oracle tool for this: http://www.oracle.com/technology/products/oem/pdf/ds_change_pack.pdf You can try it there: http://www.oracle.com/technology/software/products/oem/index.html
{ "language": "en", "url": "https://stackoverflow.com/questions/47366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Best practices re: LINQ To SQL for data access Part of the web application I'm working on is an area displaying messages from management to 1...n users. I have a DataAccess project that contains the LINQ to SQL classes, and a website project that is the UI. My database looks like this: User -> MessageDetail <- Message <- MessageCategory MessageDetail is a join table that also contains an IsRead flag. The list of messages is grouped by category. I have two nested ListView controls on the page -- One outputs the group name, while a second one nested inside that is bound to MessageDetails and outputs the messages themselves. In the code-behind for the page listing the messages I have the following code: protected void MessageListDataSource_Selecting(object sender, LinqDataSourceSelectEventArgs e) { var db = new DataContext(); // parse the input strings from the web form int categoryIDFilter; DateTime dateFilter; string catFilterString = MessagesCategoryFilter.SelectedValue; string dateFilterString = MessagesDateFilter.SelectedValue; // TryParse will return default values if parsing is unsuccessful (i.e. if "all" is selected"): // DateTime.MinValue for dates, 0 for int DateTime.TryParse(dateFilterString, out dateFilter); Int32.TryParse(catFilterString, out categoryIDFilter); bool showRead = MessagesReadFilter.Checked; var messages = from detail in db.MessageDetails where detail.UserID == (int)Session["UserID"] where detail.Message.IsPublished where detail.Message.MessageCategoryID == categoryIDFilter || (categoryIDFilter == 0) where dateFilter == detail.Message.PublishDate.Value.Date || (dateFilter == DateTime.MinValue) // is unread, showRead filter is on, or message was marked read today where detail.IsRead == false || showRead || detail.ReadDate.Value.Date == DateTime.Today orderby detail.Message.PublishDate descending group detail by detail.Message.MessageCategory into categories orderby categories.Key.Name select new { MessageCategory = categories.Key, MessageDetails = categories.Select(d => d) }; e.Result = messages; } This code works, but sticking a huge LINQ statement like this in the code-behind for a LinqDataSource control just doesn't sit right with me. It seems like I'm still coding queries into the user interface, only now it's LINQ instead of SQL. However, I feel that building another layer between the L2S classes and the UI would cut back on some of the flexibility of LINQ. Isn't the whole point to reduce the amount of code you write to fetch data? Is there some possible middle ground I'm not seeing, or am I just misunderstanding the way LINQ to SQL is supposed to be used? Advice would be greatly appreciated. A: All your LINQ querys should be in a business logic class, no change from older methodologies like ADO. If you are a purist you should always return List(of T) from your methods in the business class, in fact, the datacontext should only be visible to the business classes. Then you can manipulate the list in the user interface. If you are a pragmatist, you can return a IQueryable object and make some manipulations in the user interface. A: Regardless of LINQ, I think that mixing presentation code with database-relaed code is not a good idea. I would create a simple DB abstraction layer on top of LINQ queries. In my opinion LINQ is just a convenient tool, that doesn't have a serious impact on traditional application design.
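As a rough sketch of the "keep the DataContext behind a business class" advice above (the MessageService class name is an assumption; the entity and property names are borrowed from the question), the page's code-behind would then only see a plain list:

using System.Collections.Generic;
using System.Linq;

public class MessageService
{
    // Hedged sketch: the query lives here instead of the page's code-behind.
    public List<MessageDetail> GetUnreadMessages(int userId)
    {
        using (var db = new DataContext())
        {
            return (from detail in db.MessageDetails
                    where detail.UserID == userId
                    where detail.Message.IsPublished
                    where !detail.IsRead
                    orderby detail.Message.PublishDate descending
                    select detail).ToList();
        }
    }
}

// In the code-behind (illustrative only):
// MessageList.DataSource = new MessageService().GetUnreadMessages((int)Session["UserID"]);
// MessageList.DataBind();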
{ "language": "en", "url": "https://stackoverflow.com/questions/47374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How can I monitor the executed sql statements on a SQL Server 2005 In a project of mine the SQL statements that are executed against a SQL Server are failing for some unknown reason. Some of the code is already used in production so debugging it is not an easy task. Therefore I need a way to see in the database itself what SQL statements are used, as the statements are generated at runtime by the project and could be flawed when certain conditions are met. I therefore considered the possibility to monitor the incoming statements and check myself if I see any flaws. The database is running on a SQL Server 2005, and I use SQL server management studio express as primary tool to manipulate the database. So my question is, what is the best way to do this? A: Seeing how you use the Management Studio Express, I will assume you don't have access to the MSSQL 2005 client tools. If you do, install those, because it includes the SQL profiler which does exactly what you want (and more!). For more info about that one, see msdn. I found this a while ago, because I was thinking about the exact same thing. I have access to the client tools myself, so I don't really need to yet, but that access is not unlimited (it's through my current job). If you try it out, let me know if it works ;-) A: Best way is to fire up profiler, start a trace, save the trace and then rerun the statements
{ "language": "en", "url": "https://stackoverflow.com/questions/47376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Setting Up MySQL Triggers I've been hearing about triggers, and I have a few questions. What are triggers? How do I set them up? Are there any precautions, aside from typical SQL stuff, that should be taken? A: Triggers allow you to perform a function in the database as certain events happen (eg, an insert into a table). I can't comment on mysql specifically. Precaution: Triggers can be very alluring, when you first start using them they seem like a magic bullet to all kinds of problems. But, they make "magic" stuff happen, if you don't know the database inside out, it can seem like really strange things happen (such as inserts into other tables, input data changing, etc). Before implementing things as a trigger I'd seriously consider instead enforcing the use of an API around the schema (preferably in the database, but outside if you can't). Some things I'd still use triggers for * *Keeping track of "date_created" and "date_last_edited" fields *Inserting "ID"'s (in oracle, where there is no auto id field) *Keeping change history Things you wouldn't want to use triggers for * *business rules/logic *anything which connects outside of the database (eg a webservice call) *Access control *Anything which isn't transactional ( anything you do in the trigger MUST be able to rollback with the transaction ) A: From dev.mysql.com, a trigger is ...a named database object that is associated with a table and that is activated when a particular event occurs for the table. The syntax to create them is also documented at that site. Briefly, CREATE [DEFINER = { user | CURRENT_USER }] TRIGGER trigger_name trigger_time trigger_event ON tbl_name FOR EACH ROW trigger_stmt And they provide an example: CREATE TABLE account (acct_num INT, amount DECIMAL(10,2)); CREATE TRIGGER ins_sum BEFORE INSERT ON account FOR EACH ROW SET @sum = @sum + NEW.amount; You at least need to abide by all the restrictions on stored functions. You won't be able to lock tables, alter views, or modify the table that triggered the trigger. Also triggers may cause replication problems. A: A trigger is a named database object that is associated with a table and that is activated when a particular event occurs for the table. To create a trigger: CREATE TRIGGER triggerName [BEFORE|AFTER] [INSERT|UPDATE|DELETE|REPLACE] ON tableName FOR EACH ROW SET stuffToDoHERE; Even though I answered this part the other question still stands. A: This question is old and other answers are very good, but since the user asked about precautions that should be taken, I want to add something: * *If you use replication in a complex environment, don't make a massive use of Triggers, and don't call stored procedures from triggers. *Triggers are slow in MySQL. *You can't use some SQL statements within triggers. And some statements are permitted but should be avoided, like LOCK. The general rule is: if you don't fully understand the implications of what you are doing, you shouldn't do it. *Triggers can cause endless loops, so be careful.
{ "language": "en", "url": "https://stackoverflow.com/questions/47387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Best way to test a MS Access application? With the code, forms and data inside the same database I am wondering what are the best practices to design a suite of tests for a Microsoft Access application (say for Access 2007). One of the main issues with testing forms is that only a few controls have a hwnd handle and other controls only get one they have focus, which makes automation quite opaque since you cant get a list of controls on a form to act on. Any experience to share? A: Another advantage of Access being a COM application is that you can create an .NET application to run and test an Access application via Automation. The advantage of this is that then you can use a more powerful testing framework such as NUnit to write automated assert tests against an Access app. Therefore, if you are proficient in either C# or VB.NET combined with something like NUnit then you can more easily create greater test coverage for your Access app. A: I've taken a page out of Python's doctest concept and implemented a DocTests procedure in Access VBA. This is obviously not a full-blown unit-testing solution. It's still relatively young, so I doubt I've worked out all the bugs, but I think it's mature enough to release into the wild. Just copy the following code into a standard code module and press F5 inside the Sub to see it in action: '>>> 1 + 1 '2 '>>> 3 - 1 '0 Sub DocTests() Dim Comp As Object, i As Long, CM As Object Dim Expr As String, ExpectedResult As Variant, TestsPassed As Long, TestsFailed As Long Dim Evaluation As Variant For Each Comp In Application.VBE.ActiveVBProject.VBComponents Set CM = Comp.CodeModule For i = 1 To CM.CountOfLines If Left(Trim(CM.Lines(i, 1)), 4) = "'>>>" Then Expr = Trim(Mid(CM.Lines(i, 1), 5)) On Error Resume Next Evaluation = Eval(Expr) If Err.Number = 2425 And Comp.Type <> 1 Then 'The expression you entered has a function name that '' can't find. 'This is not surprising because we are not in a standard code module (Comp.Type <> 1). 'So we will just ignore it. 
GoTo NextLine ElseIf Err.Number <> 0 Then Debug.Print Err.Number, Err.Description, Expr GoTo NextLine End If On Error GoTo 0 ExpectedResult = Trim(Mid(CM.Lines(i + 1, 1), InStr(CM.Lines(i + 1, 1), "'") + 1)) Select Case ExpectedResult Case "True": ExpectedResult = True Case "False": ExpectedResult = False Case "Null": ExpectedResult = Null End Select Select Case TypeName(Evaluation) Case "Long", "Integer", "Short", "Byte", "Single", "Double", "Decimal", "Currency" ExpectedResult = Eval(ExpectedResult) Case "Date" If IsDate(ExpectedResult) Then ExpectedResult = CDate(ExpectedResult) End Select If (Evaluation = ExpectedResult) Then TestsPassed = TestsPassed + 1 ElseIf (IsNull(Evaluation) And IsNull(ExpectedResult)) Then TestsPassed = TestsPassed + 1 Else Debug.Print Comp.Name; ": "; Expr; " evaluates to: "; Evaluation; " Expected: "; ExpectedResult TestsFailed = TestsFailed + 1 End If End If NextLine: Next i Next Comp Debug.Print "Tests passed: "; TestsPassed; " of "; TestsPassed + TestsFailed End Sub Copying, pasting, and running the above code from a module named Module1 yields: Module: 3 - 1 evaluates to: 2 Expected: 0 Tests passed: 1 of 2 A few quick notes: * *It has no dependencies (when used from within Access) *It makes use of Eval which is a function in the Access.Application object model; this means you could use it outside of Access but it would require creating an Access.Application object and fully qualifying the Eval calls *There are some idiosyncrasies associated with Eval to be aware of *It can only be used on functions that return a result that fits on a single line Despite its limitations, I still think it provides quite a bit of bang for your buck. Edit: Here is a simple function with "doctest rules" the function must satisfy. Public Function AddTwoValues(ByVal p1 As Variant, _ ByVal p2 As Variant) As Variant '>>> AddTwoValues(1,1) '2 '>>> AddTwoValues(1,1) = 1 'False '>>> AddTwoValues(1,Null) 'Null '>>> IsError(AddTwoValues(1,"foo")) 'True On Error GoTo ErrorHandler AddTwoValues = p1 + p2 ExitHere: On Error GoTo 0 Exit Function ErrorHandler: AddTwoValues = CVErr(Err.Number) GoTo ExitHere End Function A: Although that being a very old answer: There is AccUnit, a specialized Unit-Test framework for Microsoft Access. A: I would design the application to have as much work as possible done in queries and in vba subroutines so that your testing could be made up of populating test databases, running sets of the production queries and vba against those databases and then looking at the output and comparing to make sure the output is good. This approach doesn't test the GUI obviously, so you could augment the testing with a series of test scripts (here I mean like a word document that says open form 1, and click control 1) that are manually executed. It depends on the scope of the project as the level of automation necessary for the testing aspect. A: 1. Write Testable Code First, stop writing business logic into your Form's code behind. That's not the place for it. It can't be properly tested there. In fact, you really shouldn't have to test your form itself at all. It should be a dead dumb simple view that responds to User Interaction and then delegates responsibility for responding to those actions to another class that is testable. How do you do that? Familiarizing yourself with the Model-View-Controller pattern is a good start. It can't be done perfectly in VBA due to the fact that we get either events or interfaces, never both, but you can get pretty close. 
Consider this simple form that has a text box and a button. In the form's code behind, we'll wrap the TextBox's value in a public property and re-raise any events we're interested in. Public Event OnSayHello() Public Event AfterTextUpdate() Public Property Let Text(value As String) Me.TextBox1.value = value End Property Public Property Get Text() As String Text = Me.TextBox1.value End Property Private Sub SayHello_Click() RaiseEvent OnSayHello End Sub Private Sub TextBox1_AfterUpdate() RaiseEvent AfterTextUpdate End Sub Now we need a model to work with. Here I've created a new class module named MyModel. Here lies the code we'll put under test. Note that it naturally shares a similar structure as our view. Private mText As String Public Property Let Text(value As String) mText = value End Property Public Property Get Text() As String Text = mText End Property Public Function Reversed() As String Dim result As String Dim length As Long length = Len(mText) Dim i As Long For i = 0 To length - 1 result = result + Mid(mText, (length - i), 1) Next i Reversed = result End Function Public Sub SayHello() MsgBox Reversed() End Sub Finally, our controller wires it all together. The controller listens for form events and communicates changes to the model and triggers the model's routines. Private WithEvents view As Form_Form1 Private model As MyModel Public Sub Run() Set model = New MyModel Set view = New Form_Form1 view.Visible = True End Sub Private Sub view_AfterTextUpdate() model.Text = view.Text End Sub Private Sub view_OnSayHello() model.SayHello view.Text = model.Reversed() End Sub Now this code can be run from any other module. For the purposes of this example, I've used a standard module. I highly encourage you to build this yourself using the code I've provided and see it function. Private controller As FormController Public Sub Run() Set controller = New FormController controller.Run End Sub So, that's great and all but what does it have to do with testing?! Friend, it has everything to do with testing. What we've done is make our code testable. In the example I've provided, there is no reason what-so-ever to even try to test the GUI. The only thing we really need to test is the model. That's where all of the real logic is. So, on to step two. 2. Choose a Unit Testing Framework There aren't a lot of options here. Most frameworks require installing COM Add-ins, lots of boiler plate, weird syntax, writing tests as comments, etc. That's why I got involved in building one myself, so this part of my answer isn't impartial, but I'll try to give a fair summary of what's available. * *AccUnit * *Works only in Access. *Requires you to write tests as a strange hybrid of comments and code. (no intellisense for the comment part. *There is a graphical interface to help you write those strange looking tests though. *The project has not seen any updates since 2013. *VB Lite Unit I can't say I've personally used it. It's out there, but hasn't seen an update since 2005. *xlUnit xlUnit isn't awful, but it's not good either. It's clunky and there's lots of boiler plate code. It's the best of the worst, but it doesn't work in Access. So, that's out. *Build your own framework I've been there and done that. It's probably more than most people want to get into, but it is completely possible to build a Unit Testing framework in Native VBA code. *Rubberduck VBE Add-In's Unit Testing Framework Disclaimer: I'm one of the co-devs. I'm biased, but this is by far my favorite of the bunch. 
* *Little to no boiler plate code. *Intellisense is available. *The project is active. *More documentation than most of these projects. *It works in most of the major office applications, not just Access. *It is, unfortunately, a COM Add-In, so it has to be installed onto your machine. 3. Start writing tests So, back to our code from section 1. The only code that we really needed to test was the MyModel.Reversed() function. So, let's take a look at what that test could look like. (Example given uses Rubberduck, but it's a simple test and could translate into the framework of your choice.) '@TestModule Private Assert As New Rubberduck.AssertClass '@TestMethod Public Sub ReversedReversesCorrectly() Arrange: Dim model As New MyModel Const original As String = "Hello" Const expected As String = "olleH" Dim actual As String model.Text = original Act: actual = model.Reversed Assert: Assert.AreEqual expected, actual End Sub Guidelines for Writing Good Tests * *Only test one thing at a time. *Good tests only fail when there is a bug introduced into the system or the requirements have changed. *Don't include external dependencies such as databases and file systems. These external dependencies can make tests fail for reasons outside of your control. Secondly, they slow your tests down. If your tests are slow, you won't run them. *Use test names that describe what the test is testing. Don't worry if it gets long. It's most important that it is descriptive. I know that answer was a little long, and late, but hopefully it helps some people get started in writing unit tests for their VBA code. A: If your interested in testing your Access application at a more granular level specifically the VBA code itself then VB Lite Unit is a great unit testing framework for that purpose. A: There are good suggestions here, but I'm surprised no one mentioned centralized error processing. You can get addins that allow for quick function/sub templating and for adding line numbers (I use MZ-tools). Then send all errors to a single function where you can log them. You can also then break on all errors by setting a single break point. A: I find that there are relatively few opportunities for unit testing in my applications. Most of the code that I write interacts with table data or the filing system so is fundamentally hard to unit test. Early on, I tried an approach that may be similar to mocking (spoofing) where I created code that had an optional parameter. If the parameter was used, then the procedure would use the parameter instead of fetching data from the database. It is quite easy to set up a user defined type that has the same field types as a row of data and to pass that to a function. I now have a way of getting test data into the procedure that I want to test. Inside each procedure there was some code that swapped out the real data source for the test data source. This allowed me to use unit testing on a wider variety of function, using my own unit testing functions. Writing unit test is easy, it is just repetitive and boring. In the end, I gave up with unit tests and started using a different approach. I write in-house applications for myself mainly so I can afford wait till issues find me rather than having to have perfect code. If I do write applications for customers, generally the customer is not fully aware of how much software development costs so I need a low cost way of getting results. 
Writing unit tests is all about writing a test that pushes bad data at a procedure to see if the procedure can handle it appropriately. Unit tests also confirm that good data is handled appropriately. My current approach is based on writing input validation into every procedure within an application and raising a success flag when the code has completed successfully. Each calling procedure checks for the success flag before using the result. If an issue occurs, it is reported by way of an error message. Each function has a success flag, a return value, an error message, a comment and an origin. A user defined type (fr for function return) contains the data members. Any given function many populate only some of the data members in the user defined type. When a function is run, it usually returns success = true and a return value and sometimes a comment. If a function fails, it returns success = false and an error message. If a chain of functions fails, the error messages are daisy changed but the result is actually a lot more readable that a normal stack trace. The origins are also chained so I know where the issue occurred. The application rarely crashes and accurately reports any issues. The result is a hell of a lot better than standard error handling. Public Function GetOutputFolder(OutputFolder As eOutputFolder) As FunctRet '///Returns a full path when provided with a target folder alias. e.g. 'temp' folder Dim fr As FunctRet Select Case OutputFolder Case 1 fr.Rtn = "C:\Temp\" fr.Success = True Case 2 fr.Rtn = TrailingSlash(Application.CurrentProject.path) fr.Success = True Case 3 fr.EM = "Can't set custom paths – not yet implemented" Case Else fr.EM = "Unrecognised output destination requested" End Select exitproc: GetOutputFolder = fr End Function Code explained. eOutputFolder is a user defined Enum as below Public Enum eOutputFolder eDefaultDirectory = 1 eAppPath = 2 eCustomPath = 3 End Enum I am using Enum for passing parameters to functions as this creates a limited set of known choices that a function can accept. Enums also provide intellisense when entering parameters into functions. I suppose they provide a rudimentary interface for a function. 'Type FunctRet is used as a generic means of reporting function returns Public Type FunctRet Success As Long 'Boolean flag for success, boolean not used to avoid nulls Rtn As Variant 'Return Value EM As String 'Error message Cmt As String 'Comments Origin As String 'Originating procedure/function End Type A user defined type such as a FunctRet also provides code completion which helps. Within the procedure, I usually store internal results to an anonymous internal variable (fr) before assigning the results to the return variable (GetOutputFolder). This makes renaming procedures very easy as only the top and bottom have be changed. So in summary, I have developed a framework with ms-access that covers all operations that involve VBA. The testing is permanently written into the procedures, rather than a development time unit test. In practice, the code still runs very fast. I am very careful to optimise lower level functions that can be called ten thousand times a minute. Furthermore, I can use the code in production as it is being developed. If an error occurs, it is user friendly and the source and reason for the error are usually obvious. Errors are reported from the calling form, not from some module in the business layer, which is an important principal of application design. 
Furthermore, I don't have the burden of maintaining unit testing code, which is really important when I am evolving a design rather than coding a clearly conceptualised design. There are some potential issues. The testing is not automated and new bad code is only detected when the application is run. The code does not look like standard VBA code (it is usually shorter). Still, the approach has some advantages. It is far better that using an error handler just to log an error as the users will usually contact me and give me a meaningful error message. It can also handle procedures that work with external data. JavaScript reminds me of VBA, I wonder why JavaScript is the land of frameworks and VBA in ms-access is not. A few days after writing this post, I found an article on The CodeProject that comes close to what I have written above. The article compares and contrasts exception handling and error handling. What I have suggested above is akin to exception handling. A: I appreciated knox's and david's answers. My answer will be somewhere between theirs: just make forms that do not need to be debugged! I think that forms should be exclusively used as what they are basically, meaning graphic interface only, meaning here that they do not have to be debugged! The debugging job is then limited to your VBA modules and objects, which is a lot easier to handle. There is of course a natural tendency to add VBA code to forms and/or controls, specially when Access offers you these great "after Update" and "on change" events, but I definitely advise you not to put any form or control specific code in the form's module. This makes further maintenance and upgrade very costy, where your code is split between VBA modules and forms/controls modules. This does not mean you cannot use anymore this AfterUpdate event! Just put standard code in the event, like this: Private Sub myControl_AfterUpdate() CTLAfterUpdate myControl On Error Resume Next Eval ("CTLAfterUpdate_MyForm()") On Error GoTo 0 End sub Where: * *CTLAfterUpdate is a standard procedure run each time a control is updated in a form *CTLAfterUpdateMyForm is a specific procedure run each time a control is updated on MyForm I have then 2 modules. The first one is * *utilityFormEvents where I will have my CTLAfterUpdate generic event The second one is * *MyAppFormEvents containing the specific code of all specific forms of the MyApp application and including the CTLAfterUpdateMyForm procedure. Of course, CTLAfterUpdateMyForm might not exist if there are no specific code to run. This is why we turn the "On error" to "resume next" ... Choosing such a generic solution means a lot. It means you are reaching a high level of code normalization (meaning painless maintenance of code). And when you say that you do not have any form-specific code, it also means that form modules are fully standardized, and their production can be automated: just say which events you want to manage at the form/control level, and define your generic/specific procedures terminology. Write your automation code, once for all. It takes a few days of work but it give exciting results. I have been using this solution for the last 2 years and it is clearly the right one: my forms are fully and automatically created from scratch with a "Forms Table", linked to a "Controls Table". I can then spend my time working on the specific procedures of the form, if any. Code normalization, even with MS Access, is a long process. But it is really worth the pain! 
A: I have not tried this, but you could attempt to publish your Access forms as data access web pages to something like SharePoint, or just as web pages, and then use a tool such as Selenium to drive the browser with a suite of tests. Obviously this is not as good as driving the code directly through unit tests, but it may get you part of the way. Good luck. A: Access is a COM application. Use COM, not the Windows API, to test things in Access. The best test environment for an Access application is Access. All of your Forms/Reports/Tables/Code/Queries are available, there is a scripting language similar to MS Test (OK, you probably don't remember MS Test), there is a database environment for holding your test scripts and test results, and the skills you build here are transferable to your application. A: Data Access Pages have been deprecated by MS for quite some time, and never really worked in the first place (they were dependent on the Office Widgets being installed, and worked only in IE, and only badly then). It is true that Access controls that can get focus only have a window handle when they have the focus (and those that can't get focus, such as labels, never have a window handle at all). This makes Access singularly inappropriate for window-handle-driven testing regimes. Indeed, I question why you want to do this kind of testing in Access. It sounds to me like your basic Extreme Programming dogma, and not all of the principles and practices of XP can be adapted to work with Access applications -- square peg, round hole. So, step back and ask yourself what you're trying to accomplish, and consider that you may need to utilize completely different methods than those that are based on approaches that just can't work in Access. Or whether that kind of automated testing is valid at all or even useful with an Access application.
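Tying together the earlier suggestions about driving Access from a .NET test project via COM Automation, a minimal, hedged NUnit sketch might look like the following. The database path is a placeholder, and AddTwoValues is the VBA function from the doctest answer above, assumed to live in a standard module of the database under test.

using System;
using NUnit.Framework;

[TestFixture]
public class AccessAutomationTests
{
    [Test]
    public void AddTwoValues_AddsIntegers()
    {
        // Late-bound COM automation, so no interop assembly reference is needed.
        Type accessType = Type.GetTypeFromProgID("Access.Application");
        dynamic access = Activator.CreateInstance(accessType);
        try
        {
            access.OpenCurrentDatabase(@"C:\temp\TestCopy.accdb"); // placeholder path
            object result = access.Run("AddTwoValues", 1, 2);      // calls the public VBA function
            Assert.AreEqual(3, Convert.ToInt32(result));
        }
        finally
        {
            access.Quit();
        }
    }
}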
{ "language": "en", "url": "https://stackoverflow.com/questions/47400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45" }
Q: Efficiently reverse the order of the words (not characters) in an array of characters Given an array of characters which forms a sentence of words, give an efficient algorithm to reverse the order of the words (not characters) in it. Example input and output: >>> reverse_words("this is a string") 'string a is this' It should be O(N) time and O(1) space (split() and pushing on / popping off the stack are not allowed). The puzzle is taken from here. A: pushing a string onto a stack and then popping it off - is that still O(1)? essentially, that is the same as using split()... Doesn't O(1) mean in-place? This task gets easy if we can just append strings and stuff, but that uses space... EDIT: Thomas Watnedal is right. The following algorithm is O(n) in time and O(1) in space: * *reverse string in-place (first iteration over string) *reverse each (reversed) word in-place (another two iterations over string) * *find first word boundary *reverse inside this word boundary *repeat for next word until finished I guess we would need to prove that step 2 is really only O(2n)... A: A solution in C/C++: void swap(char* str, int i, int j){ char t = str[i]; str[i] = str[j]; str[j] = t; } void reverse_string(char* str, int length){ for(int i=0; i<length/2; i++){ swap(str, i, length-i-1); } } void reverse_words(char* str){ int l = strlen(str); //Reverse string reverse_string(str,strlen(str)); int p=0; //Find word boundaries and reverse word by word for(int i=0; i<l; i++){ if(str[i] == ' '){ reverse_string(&str[p], i-p); p=i+1; } } //Finally reverse the last word. reverse_string(&str[p], l-p); } This should be O(n) in time and O(1) in space. Edit: Cleaned it up a bit. The first pass over the string is obviously O(n/2) = O(n). The second pass is O(n + combined length of all words / 2) = O(n + n/2) = O(n), which makes this an O(n) algorithm. A: #include <string> #include <boost/next_prior.hpp> void reverse(std::string& foo) { using namespace std; std::reverse(foo.begin(), foo.end()); string::iterator begin = foo.begin(); while (1) { string::iterator space = find(begin, foo.end(), ' '); std::reverse(begin, space); begin = boost::next(space); if (space == foo.end()) break; } } A: Here is my answer. No library calls and no temp data structures. 
#include <stdio.h> void reverse(char* string, int length){ int i; for (i = 0; i < length/2; i++){ string[length - 1 - i] ^= string[i] ; string[i] ^= string[length - 1 - i]; string[length - 1 - i] ^= string[i]; } } int main () { char string[] = "This is a test string"; char *ptr; int i = 0; int word = 0; ptr = (char *)&string; printf("%s\n", string); int length=0; while (*ptr++){ ++length; } reverse(string, length); printf("%s\n", string); for (i=0;i<length;i++){ if(string[i] == ' '){ reverse(&string[word], i-word); word = i+1; } } reverse(&string[word], i-word); //for last word printf("\n%s\n", string); return 0; } A: In pseudo code: reverse input string reverse each word (you will need to find word boundaries) A: In C: (C99) #include <stdio.h> #include <string.h> void reverseString(char* string, int length) { char swap; for (int i = 0; i < length/2; i++) { swap = string[length - 1 - i]; string[length - 1 - i] = string[i]; string[i] = swap; } } int main (int argc, const char * argv[]) { char teststring[] = "Given an array of characters which form a sentence of words, give an efficient algorithm to reverse the order of the words (not characters) in it."; printf("%s\n", teststring); int length = strlen(teststring); reverseString(teststring, length); int i = 0; while (i < length) { int wordlength = strspn(teststring + i, "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"); reverseString(teststring + i, wordlength); i += wordlength + 1; } printf("%s\n", teststring); return 0; } This gives output: Given an array of characters which form a sentence of words, give an efficient algorithm to reverse the order of the words (not characters) in it. .it in )characters not( words the of order the reverse to algorithm efficient an give ,words of sentence a form which characters of array an Given This takes at most 4N time, with small constant space. Unfortunately, It doesn't handle punctuation or case gracefully. A: O(N) in space and O(N) in time solution in Python: def reverse_words_nosplit(str_): """ >>> f = reverse_words_nosplit >>> f("this is a string") 'string a is this' """ iend = len(str_) s = "" while True: ispace = str_.rfind(" ", 0, iend) if ispace == -1: s += str_[:iend] break s += str_[ispace+1:iend] s += " " iend = ispace return s A: You would use what is known as an iterative recursive function, which is O(N) in time as it takes N (N being the number of words) iterations to complete and O(1) in space as each iteration holds its own state within the function arguments. (define (reverse sentence-to-reverse) (reverse-iter (sentence-to-reverse "")) (define (reverse-iter(sentence, reverse-sentence) (if (= 0 string-length sentence) reverse-sentence ( reverse-iter( remove-first-word(sentence), add-first-word(sentence, reverse-sentence))) Note: I have written this in scheme which I am a complete novice, so apologies for lack of correct string manipulation. remove-first-word finds the first word boundary of sentence, then takes that section of characters (including space and punctuation) and removes it and returns new sentence add-first-word finds the first word boundary of sentence, then takes that section of characters (including space and punctuation) and adds it to reverse-sentence and returns new reverse-sentence contents. 
A: @Daren Thomas Implementation of your algorithm (O(N) in time, O(1) in space) in D (Digital Mars): #!/usr/bin/dmd -run /** * to compile & run: * $ dmd -run reverse_words.d * to optimize: * $ dmd -O -inline -release reverse_words.d */ import std.algorithm: reverse; import std.stdio: writeln; import std.string: find; void reverse_words(char[] str) { // reverse whole string reverse(str); // reverse each word for (auto i = 0; (i = find(str, " ")) != -1; str = str[i + 1..length]) reverse(str[0..i]); // reverse last word reverse(str); } void main() { char[] str = cast(char[])("this is a string"); writeln(str); reverse_words(str); writeln(str); } Output: this is a string string a is this A: in Ruby "this is a string".split.reverse.join(" ") A: THIS PROGRAM IS TO REVERSE THE SENTENCE USING POINTERS IN "C language" By Vasantha kumar & Sundaramoorthy from KONGU ENGG COLLEGE, Erode. NOTE: Sentence must end with dot(.) because NULL character is not assigned automatically at the end of the sentence* #include<stdio.h> #include<string.h> int main() { char *p,*s="this is good.",*t; int i,j,a,l,count=0; l=strlen(s); p=&s[l-1]; t=&s[-1]; while(*t) { if(*t==' ') count++; t++; } a=count; while(l!=0) { for(i=0;*p!=' '&&t!=p;p--,i++); p++; for(;((*p)!='.')&&(*p!=' ');p++) printf("%c",*p); printf(" "); if(a==count) { p=p-i-1; l=l-i; } else { p=p-i-2; l=l-i-1; } count--; } return 0; } A: Push each word onto a stack. Pop all the words off the stack. A: A C++ solution: #include <string> #include <iostream> using namespace std; string revwords(string in) { string rev; int wordlen = 0; for (int i = in.length(); i >= 0; --i) { if (i == 0 || iswspace(in[i-1])) { if (wordlen) { for (int j = i; wordlen--; ) rev.push_back(in[j++]); wordlen = 0; } if (i > 0) rev.push_back(in[i-1]); } else ++wordlen; } return rev; } int main() { cout << revwords("this is a sentence") << "." << endl; cout << revwords(" a sentence with extra spaces ") << "." << endl; return 0; } A: using System; namespace q47407 { class MainClass { public static void Main(string[] args) { string s = Console.ReadLine(); string[] r = s.Split(' '); for(int i = r.Length-1 ; i >= 0; i--) Console.Write(r[i] + " "); Console.WriteLine(); } } } edit: i guess i should read the whole question... carry on. A: in C#, in-place, O(n), and tested: static char[] ReverseAllWords(char[] in_text) { int lindex = 0; int rindex = in_text.Length - 1; if (rindex > 1) { //reverse complete phrase in_text = ReverseString(in_text, 0, rindex); //reverse each word in resultant reversed phrase for (rindex = 0; rindex <= in_text.Length; rindex++) { if (rindex == in_text.Length || in_text[rindex] == ' ') { in_text = ReverseString(in_text, lindex, rindex - 1); lindex = rindex + 1; } } } return in_text; } static char[] ReverseString(char[] intext, int lindex, int rindex) { char tempc; while (lindex < rindex) { tempc = intext[lindex]; intext[lindex++] = intext[rindex]; intext[rindex--] = tempc; } return intext; } A: Efficient in terms of my time: took under 2 minutes to write in REBOL: reverse_words: func [s [string!]] [form reverse parse s none] Try it out: reverse_words "this is a string" "string a is this" A: A Ruby solution. # Reverse all words in string def reverse_words(string) return string if string == '' reverse(string, 0, string.size - 1) bounds = next_word_bounds(string, 0) while bounds.all? 
{ |b| b < string.size } reverse(string, bounds[:from], bounds[:to]) bounds = next_word_bounds(string, bounds[:to] + 1) end string end # Reverse a single word between indices "from" and "to" in "string" def reverse(s, from, to) half = (from - to) / 2 + 1 half.times do |i| s[from], s[to] = s[to], s[from] from, to = from.next, to.next end s end # Find the boundaries of the next word starting at index "from" def next_word_bounds(s, from) from = s.index(/\S/, from) || s.size to = s.index(/\s/, from + 1) || s.size return { from: from, to: to - 1 } end A: This problem can be solved with O(n) in time and O(1) in space. The sample code looks as mentioned below: public static string reverseWords(String s) { char[] stringChar = s.ToCharArray(); int length = stringChar.Length, tempIndex = 0; Swap(stringChar, 0, length - 1); for (int i = 0; i < length; i++) { if (i == length-1) { Swap(stringChar, tempIndex, i); tempIndex = i + 1; } else if (stringChar[i] == ' ') { Swap(stringChar, tempIndex, i-1); tempIndex = i + 1; } } return new String(stringChar); } private static void Swap(char[] p, int startIndex, int endIndex) { while (startIndex < endIndex) { p[startIndex] ^= p[endIndex]; p[endIndex] ^= p[startIndex]; p[startIndex] ^= p[endIndex]; startIndex++; endIndex--; } } A: A one liner: l="Is this as expected ??" " ".join(each[::-1] for each in l[::-1].split()) Output: '?? expected as this Is' A: Algorithm: 1).Reverse each word of the string. 2).Reverse resultant String. public class Solution { public String reverseWords(String p) { String reg=" "; if(p==null||p.length()==0||p.equals("")) { return ""; } String[] a=p.split("\\s+"); StringBuilder res=new StringBuilder();; for(int i=0;i<a.length;i++) { String temp=doReverseString(a[i]); res.append(temp); res.append(" "); } String resultant=doReverseString(res.toString()); System.out.println(res); return resultant.toString().replaceAll("^\\s+|\\s+$", ""); } public String doReverseString(String s)`{` char str[]=s.toCharArray(); int start=0,end=s.length()-1; while(start<end) { char temp=str[start]; str[start]=str[end]; str[end]=temp; start++; end--; } String a=new String(str); return a; } public static void main(String[] args) { Solution r=new Solution(); String main=r.reverseWords("kya hua"); //System.out.println(re); System.out.println(main); } } A: The algorithm to solve this problem is based on two steps process, first step will reverse the individual words of string,then in second step, reverse whole string. Implementation of algorithm will take O(n) time and O(1) space complexity. #include <stdio.h> #include <string.h> void reverseStr(char* s, int start, int end); int main() { char s[] = "This is test string"; int start = 0; int end = 0; int i = 0; while (1) { if (s[i] == ' ' || s[i] == '\0') { reverseStr(s, start, end-1); start = i + 1; end = start; } else{ end++; } if(s[i] == '\0'){ break; } i++; } reverseStr(s, 0, strlen(s)-1); printf("\n\noutput= %s\n\n", s); return 0; } void reverseStr(char* s, int start, int end) { char temp; int j = end; int i = start; for (i = start; i < j ; i++, j--) { temp = s[i]; s[i] = s[j]; s[j] = temp; } }
{ "language": "en", "url": "https://stackoverflow.com/questions/47402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: MS-Access design pattern for last value for a grouping It's common to have a table where, for example, the fields are account, value, and time. What's the best design pattern for retrieving the last value for each account? Unfortunately the last keyword in a grouping gives you the last physical record in the database, not the last record by any sorting. Which means IMHO it should never be used. The two clumsy approaches I use are either a subquery approach or a secondary query to determine the last record, and then joining to the table to find the value. Isn't there a more elegant approach? A: could you not do: select account,last(value),max(time) from table group by account I tested this (granted for a very small, almost trivial record set) and it produced proper results. Edit: that also doesn't work after some more testing. I did a fair bit of Access programming in a past life and feel like there is a way to do what you're asking in one query, but I'm drawing a blank at the moment. Sorry. A: After literally years of searching I finally found the answer at the link below #3. The sub-queries above will work, but are very slow -- debilitatingly slow for my purposes. The more popular answer is a tri-level query: the 1st level finds the max, the 2nd level gets the field values based on the 1st query. The result is then joined in as a table to the main query. Fast but complicated and time-consuming to code/maintain. This link works, still runs pretty fast and is a lot less work to code/maintain. Thanks to the authors of this site. http://access.mvps.org/access/queries/qry0020.htm A: The subquery option sounds best to me, something like the following pseudo-SQL. It may be possible/necessary to optimize it via a join; that will depend on the capabilities of the SQL engine. select * from table where account+time in (select account+max(time) from table group by account order by time) A: This is a good trick for returning the last record in a table: SELECT TOP 1 * FROM TableName ORDER BY Time DESC Check out this site for more info. A: @Tom It might be easier for me in general to do the "In" query that you've suggested. Generally I do something like select T1.account, T1.value from table T as T1 where T1.time = (select max(T2.time) from table T as T2 where T1.account = T2.Account) A: @shs yes, that select last(value) SHOULD work, but it doesn't... My understanding, although I can't produce an authoritative source, is that last(value) gives the last physical record in the Access file, which means it could be the first one timewise but the last one physically. So I don't think you should use last(value) for anything other than a really bad random row. A: Perhaps the following SQL is clumsy, but it seems to work correctly in Access. SELECT a.account, a.time, a.value FROM tablename AS a INNER JOIN [ SELECT account, Max(time) AS MaxOftime FROM tablename GROUP BY account ]. AS b ON (a.time = b.MaxOftime) AND (a.account = b.account) ; A: I'm trying to find the latest date in a group using the Access 2003 query builder, and ran into the same problem trying to use LAST for a date field. But it looks like using MAX finds the latest date.
{ "language": "en", "url": "https://stackoverflow.com/questions/47413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Standard way to merge Entities in LlblGenPro I start with an entity A with primary key A1; it has child collections B and C, but they are empty, because I haven't prefetched them. I now get a new occurrence of A (A prime) with primary key A1 with the child collections B and C filled. What is a good way to get A and A prime to be the same object and to get A's collections B and C filled? A: Once you have 2 separate objects in memory and you have references to both of them, the only way to merge them is to change all references to point to one of the objects, which might be impossible. However, there's something you can do to avoid getting into this situation: you can use the SD.LLBLGen.Pro.ORMSupportClasses.Context class, which you can attach to an adapter and which acts as a caching layer. When entities are loaded it returns the same object for a unique entity; basically it doesn't let you duplicate entities in memory and always returns the reference to an already loaded entity.
{ "language": "en", "url": "https://stackoverflow.com/questions/47414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How can I break on exception using ddbg I'm using the D programming language to write a program, and I'm trying to use ddbg to debug it. I want the program to break whenever an exception is thrown so that I can inspect the stack. Alternatively, is there another debugger that works with D? Is there another way to get a stack trace when there is an exception? A: Do you want to break when any exception is thrown, or just on uncaught exceptions? I think the latter is already the default behavior. You probably know this, but you get the stack trace by typing 'us' (unwind stack) at the prompt. Just trying to eliminate the obvious. Anyway, I've never had to use onex. Never even heard of it. Another thing you could try is forcing execution to stop by putting in asserts. A: You can get stack traces on exceptions by modding the runtime, by the way. The best resource is probably this backtrace hack page. A: Haven't used ddbg yet, but according to the documentation at http://ddbg.mainia.de/doc.html there is the onex <cmd; cmd; ...> command, which executes a list of commands on exception. A: I saw the onex command, but I couldn't find a break command. The two commands below don't seem to work: onex break onex b
{ "language": "en", "url": "https://stackoverflow.com/questions/47420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Why do some websites add "Slugs" to the end of URLs? Many websites, including this one, add what are apparently called slugs - descriptive but as far as I can tell useless bits of text - to the end of URLs. For example, the URL the site gives for this question is: https://stackoverflow.com/questions/47427/why-do-some-websites-add-slugs-to-the-end-of-urls But the following URL works just as well: https://stackoverflow.com/questions/47427/ Is the point of this text just to somehow make the URL more user friendly or are there some other benefits? A: As already stated, the 'slug' helps people and the search engines... Something worth noticing, is that in the source of the page there is a canonical url This stops the page from being index multiple times. Example: <link rel="canonical" href="http://stackoverflow.com/questions/47427/why-do-some-websites-add-slugs-to-the-end-of-urls"> A: Remove the formatting from your question, and you'll see part of the answer: https://stackoverflow.com/questions/47427/ vs https://stackoverflow.com/questions/47427/why-do-some-websites-add-slugs-to-the-end-of-urls With no markup, the second one is self-descriptive. A: Usability is one reason, if you receive that link in your e-mail, you know what to expect. SEO (search engine optimization) is another reason. Search engines such as google will rank your page higher for the keywords contained in the url A: I recently changed my website url format from: www.mywebsite.com/index.asp?view=display&postid=100 To www.mywebsite.com/this-is-the-title-of-the-post and noticed that click through rates to article increased about 300% after the change. It certainly helps the user decide if what they're thinking of clicking on is relevant, in terms of SEO purposes though I have to say I've seen little impact after the change A: Don't forget readability when sending a link, not just in search engines. If you email someone the first link they can look at the URL and get a general idea of what it is about. The second one gives no indication of the content of that page before they click. A: I agree with other responses that any mis-typed slug should 301-redirect to the proper form. In other words, https://stackoverflow.com/questions/47427/wh should redirect to https://stackoverflow.com/questions/47427/why-do-some-websites-add-slugs-to-the-end-of-urls . It has one other benefit that hasn't been mentioned--if you do not do a redirect to a canonical URL, it will appear that you have a near-infinite number of duplicate pages. Google hates duplicate content. That said, you should really only care about the content ID and allow any input for the slug as long as you redirect. Why? https://stackoverflow.com/questions/47427/why-do-some-websites-add-slugs-to-the-end-of-urls ... Oops, the mail software cut off the end of the URL! No problem though because you still can roll with just https://stackoverflow.com/questions/47427 The one big problem with this approach is if you derive the slug from the title of your content, how are you going to deal with non-ASCII, UTF-8 titles? A: If you emailed someone a link wouldn't it make more sense to include a description by actually writing out a description rather than making the other person parse to the URL where the description exists, and try-to-read-a-bunch-of-hyphenated-words-stuck-together. 
A: First off, it's SEO and user friendly, but in the case of the example (this site), it's not done well or correctly (as it is open to black hat tricks and rank poisoning by others, which would reflect badly on this site). If https://stackoverflow.com/questions/47427/why-do-some-websites-add-slugs-to-the-end-of-urls has the content, then https://stackoverflow.com/questions/47427/ and https://stackoverflow.com/questions/47427/any-other-bollix should not be duplicates. They should actually automatically detect the link followed is not using the current text (as obviously the slug is defined by the question title and can be later edited) and they should redirect 301 automatically to https://stackoverflow.com/questions/47427/why-do-some-websites-add-slugs-to-the-end-of-urls thus ensuring the "one piece of content to one URI" rule, and if the URI moves/changes, ensure the old bookmarks follow/move with it through 301 redirects (so intelligent browsers can update the bookmarks). A: The slugs make the URL more user-friendly and you know what to expect when you click a link. Search engines such as Google, rank the pages higher if the searchword is in the URL. A: The reason most sites use it is probably SEO (Search Engine Optimization). Yahoo used to give a reasonable weighting to the presence of the search keyword in the URL itself, and it also helped in the Google result as well. More recently the search engines have lowered the weighting given to keywords in the URL, likely because the technique is now more common on spam sites than legitimate. Keywords in the URL now have only a very minor impact on the search results, if at all. As for stackoverflow itself, SEO might be a motivation (old habits die hard) or simply for usability. A: It's basically a more meaningful location for the resource. Using the ID is perfectly valid but it means more to machines than people. Strictly speaking the ID shouldn't be needed if the slug is unique, you can more easily ensure unique slugs by scoping them inside dates. ie: /2008/sept/06/why-some-websites-add-slugs-end-of-urls/ Basically this exploits the low likelihood of two identical slugs being in use on the same day. If there is a clash the general convention is to add a counter at the end of the slug but it's rare that you ever see these: /2008/sept/06/why-some-websites-add-slugs-end-of-urls/ /2008/sept/06/why-some-websites-add-slugs-end-of-urls-1/ /2008/sept/06/why-some-websites-add-slugs-end-of-urls-2/ A lot of slug algorithms also get rid of common words like "the" and "a" to assist in keeping the URL short. This scoped approach also makes it very straightforward to find all resources for a given day, month or year - you simply chop off segments. Additionally, stackoverflow URLs are bad in the sense that they introduce an additional segment in order to feature the slug, which is a violation of the idea that each segment should represent descending a resource hierarchy. A: The term slug comes from the newspaper/publishing business. It's a short title that's used to identify a story in progress. People interested in URL semantics started using a short, abbreviated title in their URLs. It also pays off in SEO land, as keywords in URLs add importance to a page. Ironically, lots of websites have started place a full serialized-with-hyphens version of the titles in their URLs for strictly SEO purposes, which means the term slug no longer quite applies. 
This also rankles semantic purists, as many implementations just tack this serialized version of the title at the end of their URLs. A: I note that you can change the text freely. This URL appears to work just as well. https://stackoverflow.com/questions/47427/why-is-billpg-so-very-awesome A: Ideally, the "slug" should be the only identifier needed. In practice, on dynamic sites such as this, you either have to have a unique numerical identifier or start appending/incrementing numbers to the "slug" like Digg does.
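To make the "serialized-with-hyphens" idea concrete, here is a small hypothetical sketch of slug generation plus the canonical-redirect check discussed above; it is framework-agnostic example code, not taken from any of the sites mentioned:

import re

def slugify(title):
    # Lower-case, collapse runs of non-alphanumerics into hyphens, trim the ends.
    # Note: non-ASCII characters are simply dropped here; real sites often
    # transliterate them first, which is the UTF-8 problem raised above.
    return re.sub(r'[^a-z0-9]+', '-', title.lower()).strip('-')

def canonical_path(question_id, title):
    return "/questions/%d/%s" % (question_id, slugify(title))

# In the request handler: if the slug in the requested URL does not match
# slugify(current_title), answer with a 301 redirect to canonical_path(),
# so every variant collapses onto one indexed URL.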
{ "language": "en", "url": "https://stackoverflow.com/questions/47427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "115" }
Q: Would you recommend using "The C5 Generic Collection Library for C# and CLI" based on your experience with it? This free collection library comes from IT University of Copenhagen. http://www.itu.dk/research/c5/ There is a video with one of the authors on Channel 9. I am trying to learn how to use these collections and I was wondering whether anyone has more experiences or what are your thoughts on this specific collection library for .NET. Do you like the way they are designed, do you like their performance and what were your major problems with them ? A: I have been using the C5 library for a while now, and with much success. I find that C5 offers great benefit in programming to interface. For example, in System.Collections.Generic, the IList interface doesn't offer all of the functionality of the ArrayList and LinkedList implementations. Conversely, in C5, you can access all the functionality of the ArrayList and LinkedList just by holding a C5.IList. As well, the original authors took much care to document the asymptotic speed of many critical operations, such as collection count, contains, and indexing. The library internally uses these properties when doing operations between collections to determine which collection would be faster as the "actor". As well, while not updated in a while, the C5 documentation book is an amazing resource that provides clear documentation on the library. A: I've used it in the past and there are a couple of notes I must make: * *The library is very good, very fast and very useful. It has lots of very nice data structures, some of which I did not know before starting to use this library. *It's Open-Source! This is a huge benefit. *Sometimes you don't have exactly what you want. As far as my experience showed, the library's authors decided to go with a very fault-intolerant attitude, throwing exceptions about everything. This caused me to add a few fault-tolerant methods. All in all, a very nice library with some advanced data structures. Unfortunately, support for it is very lacking, as you can see from the fact that new releases (bugfixes, et al) range somewhere from 6 months to a year. Note: Starting with Mono 2.0, C5 is bundled as a 3rd party API, which I believe to be a wonderful show of faith in the product from the Mono team. A: In addition to that omer van kloeten's points. The open source licence is MIT (comparable to BSD licence) this means that if you need make changes to the library you don't have to open-source the changes. (this might be a problem with some companies). For GPL-type licences this can be a problem.
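A minimal sketch of the "program to C5's interfaces" point made above — member names are from memory of the C5 API, so check them against the C5 book before relying on this fragment:

// The same variable can be backed by either implementation.
C5.IList<string> names = new C5.ArrayList<string>();
// C5.IList<string> names = new C5.LinkedList<string>();

names.Add("alpha");
names.Add("beta");
System.Console.WriteLine(names.Count);   // 2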
{ "language": "en", "url": "https://stackoverflow.com/questions/47432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: subselect vs outer join Consider the following 2 queries: select tblA.a,tblA.b,tblA.c,tblA.d from tblA where tblA.a not in (select tblB.a from tblB) select tblA.a,tblA.b,tblA.c,tblA.d from tblA left outer join tblB on tblA.a = tblB.a where tblB.a is null Which will perform better? My assumption is that in general the join will be better except in cases where the subselect returns a very small result set. A: non-correlated sub queries are fine. you should go with what describes the data you're wanting. as has been noted, this likely gets rewritten into the same plan, but isn't guaranteed to! what's more, if table A and B are not 1:1 you will get duplicate tuples from the join query (as the IN clause performs an implicit DISTINCT sort), so it's always best to code what you want and actually think about the outcome. A: Well, it depends on the datasets. From my experience, if You have small dataset then go for a NOT IN if it's large go for a LEFT JOIN. The NOT IN clause seems to be very slow on large datasets. One other thing I might add is that the explain plans might be misleading. I've seen several queries where explain was sky high and the query run under 1s. On the other hand I've seen queries with excellent explain plan and they could run for hours. So all in all do test on your data and see for yourself. A: I second Tom's answer that you should pick the one that is easier to understand and maintain. The query plan of any query in any database cannot be predicted because you haven't given us indexes or data distributions. The only way to predict which is faster is to run them against your database. As a rule of thumb I tend to use sub-selects when I do not need to include any columns from tblB in my select clause. I would definitely go for a sub-select when I want to use the 'in' predicate (and usually for the 'not in' that you included in the question), for the simple reason that these are easier to understand when you or someone else has come back and change them. A: RDBMSs "rewrite" queries to optimize them, so it depends on system you're using, and I would guess they end up giving the same performance on most "good" databases. I suggest picking the one that is clearer and easier to maintain, for my money, that's the first one. It's much easier to debug the subquery as it can be run independently to check for sanity. A: The first query will be faster in SQL Server which I think is slighty counter intuitive - Sub queries seem like they should be slower. In some cases (as data volumes increase) an exists may be faster than an in. A: It should be noted that these queries will produce different results if TblB.a is not unique. A: From my observations, MSSQL server produces same query plan for these queries. A: I created a simple query similar to the ones in the question on MSSQL2005 and the explain plans were different. The first query appears to be faster. I am not a SQL expert but the estimated explain plan had 37% for query 1 and 63% for the query 2. It appears that the biggest cost for query 2 is the join. Both queries had two table scans.
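Since a couple of the answers mention EXISTS, here is the third common way to phrase the same anti-join, using the tables from the question:

select tblA.a, tblA.b, tblA.c, tblA.d
from tblA
where not exists (select 1 from tblB where tblB.a = tblA.a)

Many optimizers rewrite all three forms to the same plan, but NOT EXISTS also avoids the surprise NOT IN gives when tblB.a can contain NULLs (in that case NOT IN matches no rows at all).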
{ "language": "en", "url": "https://stackoverflow.com/questions/47433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What is the shortcut key for Run to cursor What is the shortcut key for Run to cursor in Visual Studio 2008? A: The shortcut key is CTRL+F10. A: While in debug mode, click 'Run' and you will see 'Run to Line'; the shortcut key shown next to it is CTRL + R A: The default is CTRL+F10, but it can be overridden. The place to find what your current shortcuts are and change them is Tools > Customize... > Keyboard... > Show commands containing: Debug.RunToCursor or Tools > Options > Environment > Keyboard > Show commands containing: Debug.RunToCursor A: You can first hit Ctrl + Shift + P, then type "> Run to Cursor". If you click on the "gear" icon on the right, you can double-click and set the shortcut at your convenience. You can also copy/paste: @command:editor.debug.action.runToCursor
{ "language": "en", "url": "https://stackoverflow.com/questions/47436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: Which is a better refactoring tool for a beginner (something easy to learn & use)? Resharper, RefactorPro, etc? A: I have tried using Resharper for a while and also CodeRush with Refactor later on. I have stayed with CodeRush/Refactor. There is one major difference - the discoverability of the commands. Their learning videos are quite nice and show you a lot. Most importantly, CodeRush has one key/shortcut for all refactorings, which makes you much more likely to actually use them. There is a side window that shows you what keys to press in order to use the templates as well. I have liked Resharper's searching for usages of a method, but CodeRush has a similar feature triggered by Shift + F12, and you can also simply press Tab on a variable, function etc. to jump to its next usage. I also liked the interface of CodeRush/Refactor more. One of the pros for Resharper is the integrated testing tool, so you can run tests directly from Visual Studio. A: In addition to Resharper I've tried both CodeRush and Visual Assist X from Whole Tomato Software. In my opinion none of the above could measure up to Resharper from JetBrains, which I decided to go for. The others have many great features, but Resharper is in a class of its own. IMHO CodeRush looks cooler, but I found Resharper more helpful. In response to Tomas's note about discoverability: I agree it's tough relearning all the shortcuts. But to ease the transition, Resharper also has a shortcut, Ctrl+Shift+R, which will show all refactorings appropriate for the thing the cursor is placed on. My recommendation is to download a trial of all three, try one of them at a time for a while, and make your own choice. A: I think ReSharper is great. I've been using it for 3 years now and I just love it more and more.
{ "language": "en", "url": "https://stackoverflow.com/questions/47437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Child spans of the same width I am trying to create a horizontal menu with the elements represented by <span>'s. The menu itself (parent <div>) has a fixed width, but the elements number is always different. I would like to have child <span>'s of the same width, independently of how many of them are there. What I've done so far: added a float: left; style for every span and specified its percentage width (percents are more or less fine, as the server knows at the time of the page generation, how many menu items are there and could divide 100% by this number). This works, except for the case when we have a division remainder (like for 3 elements), in this case I have a one-pixel hole to the right of the parent <div>, and if I rounding the percents up, the last menu element is wrapped. I also don't really like style generation on the fly, but if there's no other solution, it's fine. What else could I try? It seems like this is a very common problem, however googling for "child elements of the same width" didn't help. A: If you have a fixed width container, then you are losing some of the effectiveness of a percentage width child span. For your case of 33% you could add a class to the first and every 4th child span to set the correct width as necessary. <div> <span class="first-in-row">/<span><span></span><span></span><span class="first-in-row"><span></span><span></span>... </div> where .first-in-row { width:auto; /* or */ width:XXX px; } A: You might try a table with a fixed table layout. It should calculate the column widths without concerning itself with the cell contents. table.ClassName { table-layout: fixed } A: have you tried the decimal values, like setting width to 33.33%? As specified in the CSS syntax, the width property (http://www.w3.org/TR/CSS21/visudet.html#the-width-property) can be given as <percentage> (http://www.w3.org/TR/CSS21/syndata.html#value-def-percentage), which is stated to be a <number>. As said at the number definition (http://www.w3.org/TR/CSS21/syndata.html#value-def-number), there some value types that must be integers, and are stated as <integer>, and the others are real numbers, stated as <number>. The percentage is defined as <number>, not as <integer> so it might work. It will depend on the browser's ability to solve this situation if it can't divide the parent's box by 3 without remaining, will it draw a 1- or 2-pixel space, or make 1 or 2 spans out of three wider than the rest. A: In reference to Xian's answer, there's also the :first-child pseudo-element. Rather than having first-in-row class, you'd have this. span:first-child { width: auto; } Obviously, this is only applicable to a single line menu.
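As a concrete sketch of the fixed table layout suggestion above (the class name and pixel width are made up for the example):

<style type="text/css">
  .menu { width: 600px; table-layout: fixed; border-collapse: collapse; }
  .menu td { text-align: center; }
</style>

<table class="menu">
  <tr><td>Home</td><td>Products</td><td>Contact</td></tr>
</table>

With table-layout: fixed the available width is split evenly between columns that have no explicit width, regardless of their content, so adding or removing a menu item re-balances the cells without any server-side percentage math.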
{ "language": "en", "url": "https://stackoverflow.com/questions/47447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: A Well-Designed Web App GUI Framework? As one of those people that never got around to properly learning design (or having no talent for it), the design seems to be the step always holding me back. It's not a problem for rich-clients, as the standard GUI toolkits provide some acceptable ways to design a good-looking interface, but the web is another story. Question: Does anyone know of a high-quality framework for designing the interface of web applications? I'm thinking of standard snippets of html for an adequate set of controls (the standard html controls plus maybe navigations, lists, forms) and style sheets to give it an acceptable, modern look. If such a thing existed, it could even support themes in the way wordpress, drupal etc. allow it. I know of the yahoo yui, but that's mostly javascript, though their grid css is a step in the right direction. A: Try the samples on ExtJs. I find them immensely useful in working out the UI. (trees, panels, modals, etc etc) A: I'm not sure that what you're looking for exists in the way you're looking for it. However, I've had some luck with places like Open Source Web Design and Open Designs, which have some really slick templates that can be adapted to a web application so they at least don't look like crap. There are also some commercial offerings, such as Gooey Templates. Once you're getting closer to launch, you can contact a pro to fix the details for you, or simply build on what you've got. Edited to add: You might also want to consider learning Blueprint CSS. I've found it helps guide my layouts and helps them look "right", without constraining me to the layout constructed for another purpose. A: I realise this is an old thread but it still comes high up in Google searches so it's worth mentioning that Twitter have recently put out Twitter Bootstrap, a "toolkit for kickstarting CSS for websites, apps, and more" which looks fantastic! » https://github.com/twitter/bootstrap A: I'll suggest Google Web Toolkit if you're a Java developer. Examples I'll also second the suggestion for Ext JS. It's got a vast array of really slick looking UI elements, incredibly well documented code, and a strong community. A: You'd probably also find the myriad of Wordpress templates reasonably useful to build on, as Wordpress is at least reasonable at separating content from layout. The also tend to have a modern bloggy feel. Of course teaming up with a talented designer is the ideal way to go in my experience! :) A: This will be more than a framework OP originally wanted but I'll suggest having a look at Morfik. You'll be able to build pretty slick user interfaces with the conventional drag&drop way and with theming support (The homepage itself is built in Morfik). There're numerous other advantages Morfik provides, though let me not drift to off-topic for the subject. You may download the trial and see... ps. Disclaimer: I'd worked for them. A: you can check out this young site, http://guitemplates.com/. The templates are quite clear and modern, and at 20 bucks each they won't break your budget. A: We had the same problem so we made our own. CSS UI (http://css-ui.com/), open-source UI framework. The concept is to use pre-defined CSS classes to style any element. A: Check out http://jacanasoftware.com. Their templates feature multi level tabs, clean css, it validates, and the CSS won't mess with your controls. I highly recommend them.
{ "language": "en", "url": "https://stackoverflow.com/questions/47468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Are unit-test names important? If unit-test names can become outdated over time and if you consider that the test itself is the most important thing, then is it important to choose wise test names? ie [Test] public void ShouldValidateUserNameIsLessThan100Characters() {} verse [Test] public void UserNameTestValidation1() {} A: Very. Equally important as choosing good method and variable names. Much more if your test suite is going to referred to by new devs in the future. As for your original question, definitely Answer1. Typing in a few more characters is a small price to pay for * *the readability. For you and others. It'll eliminate the 'what was I thinking here?' as well as 'WTF is this guy getting at in this test?' *Quick zoom in when you're in to fix something someone else wrote *instant update for any test-suite visitor. If done correctly, just going over the names of the test cases will inform the reader of the specs for the unit. A: Yes. [Test] public void UsernameValidator_LessThanLengthLimit_ShouldValidate() {} Put the test subject first, the test statement next, and the expected result last. That way, you get a clear indication of what it is doing, and you can easily sort by name :) A: In Clean Code, page 124, Robert C. Martin writes: The moral of the story is simple: Test code is just as important as production code. It is not a second-class citizen. It requires thought, design, and care. It must be kept as clean as production code. A: I think if one can not find a good concise name for a test method it's a sign that design of this test is incorrect. Also good method name helps you to find out what happened in less time. A: Yes, the whole point of the test name is that it tells you what doesn't work when the test fails. A: The name of any method should make it clear what it does. IMO, your first suggestion is a bit long and the second one isn't informative enough. Also it's probably a bad idea to put "100" in the name, as that's very likely to change. What about: public void validateUserNameLength() If the test changes, the name should be updated accordingly. A: Yes, the names are totally important, specially when you are running the tests in console or continuous integration servers. Jay Fields wrote a post about it. Moreover, put good test names with one assertion per test and your suite will give you great reports when a test fails. A: i wouldn't put conditions that test needs to meet in the name, because conditions may change in time. in your example, i'd recommend naming like UserNameLengthValidate() or UserNameLengthTest() or something similar to explain what the test does, but not presuming the testing/validation parameters. A: Yes, the names of the code under test (methods, properties, whatever) can change, but I contend your existing tests should fail if the expectations change. That is the true value of having well-constructed tests, not perusing a list of test names. That being said, well named test methods are great tools for getting new developers on board, helping them locate "executable documentation" with which they can kick the tires of existing code -- so I would keep the names of test methods up to date just as I would keep the assertions made by the test methods up to date. I name my test using the following pattern. Each test fixture attempts to focus on one class and is usually name {ClassUnderTest}Test. I name each test method {MemberUnderTest}_{Assertion}. 
[TestFixture] public class IndexableFileTest { [Test] public void Connect_InitializesReadOnlyProperties() { // ... } [Test,ExpectedException(typeof(NotInitializedException))] public void IsIndexable_ErrorWhenNotConnected() { // ... } [Test] public void IsIndexable_True() { // ... } [Test] public void IsIndexable_False() { // ... } } A: Having a very descriptive name helps to instantly see what is not working correctly, so that you don't actually need to look at the unit test code. Also, a list of all the unit tests describes the intended behavior of the unit, and can be used (more or less) as documentation to the behavior of the unit under test. Note, this only works when unit tests are very specific and do not validate too much within one unit test. So for example: [Test] void TestThatExceptionIsRaisedWhenStringLengthLargerThen100() [Test] void TestThatStringLengthOf99IsAccepted() A: The name needs to matter within reason. I don't want an email from the build saying that test 389fb2b5-28ad3 failed, but just knowing that it was a UserName test as opposed to something else would help ensure the right person gets to do the diagnosis. A: [RowTest] [Row("GoodName")] [Row("GoodName2")] public void Should_validate_username() { } [RowTest] [Row("BadUserName")] [Row("Bad%!Name")] public void Should_invalidate_username() { } This might make more sense for more complex types of validation really. A: Yes, they are. I'd personally recommend looking at SSW's rules to better unit tests. It contains some very helpful naming guidelines.
{ "language": "en", "url": "https://stackoverflow.com/questions/47475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Passing switches to Xcode 3.1 user scripts I have a user script that would be much more useful if it could dynamically change some of its execution dependent on what the user wanted. Passing simple switches would easily solve this problem but I don't see any way to do it. I also tried embedding a keyword in the script name, but Xcode copies the script to a guid-looking filename before execution, so that won't work either. So does anyone know of a way to call a user script with some sort of argument? (other that the normal %%%var%%% variables) EDIT: User scripts are accessible via the script menu in Xcode's menubar (between the Window and Help menus). My question is not about "run script" build phase scripts. My apologies for leaving that somewhat ambiguous. A: You can't pass parameters to user scripts — instead, user scripts operate on the context you're working in (e.g. the selected file, the selected text, etc.). You should use the context to determine what the user really wants. A: User scripts are accessible via the script menu in Xcode's menubar (between the Window and Help menus). Wasn't sure what else to call them. What I'm asking about are not "run script" build phase scripts. A: I suppose you could do something like this: #!/bin/bash result=$( osascript << END tell app "System Events" set a to display dialog "What shall be the result?" default answer "" end tell return text returned of a END ) # do stuff with $result A: There are built in utility scripts that allow you to prompt the user and capture the reply. You could prompt for a string, for example, then based on that perform a certain task. The String prompt is: STRING = `%%%{PBXUtilityScriptsPath}%%%/AskUserForStringDialog "DefaultString" "DefaultWindowName"` If you notice, you're just calling an applescript they wrote using a static path. You could write your own applescript dialog and place it there if you want and bypass the need for cumbersome osascript syntax. There are others (for files, folders, applications, etc) User Scripts Documenation
{ "language": "en", "url": "https://stackoverflow.com/questions/47483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Tips for database design in a web application Does someone have any tips/advice on database design for a web application? The kind of stuff that can save me a lot of time/effort in the future when/if the application I'm working on takes off and starts having a lot of usage. To be a bit more specific, the application is a strategy game (browser based, just text) that will mostly involve players issuing "orders" that will be stored in the database and processed later, with the results also being stored there (the history of "orders" and the corresponding results will probably get quite big). Edited to add more details (as requested): platform: Django database engine: I was thinking of using MySQL (unless there's a big advantage in using another) the schema: all I have now are some Django models, and that's far too much detail to post here. And if I start posting schemas this becomes too specific, and I was looking for general tips. For example, consider that I issue "orders" that will be later processed and return a result that I have to store to display some kind of "history". In this case is it better to have a separate table for the "history" or just one that aggregates both the "orders" and the result? I guess I could cache the "history" table, but this would take more space in the database and also more database operations because I would have to constantly create new rows instead of just altering them in the aggregate table. A: Database Normalization, and a giving a good thought to indexes, are two things that you just can't miss. Especially if you consider a game, where SELECTS happen much more frequently than UPDATEs. For the long run, you should also take a look at memcached, as database querys can be the bottleneck whenever you have more than a few users. A: You have probably touched on a much larger issue of designing for high scalability and performance in general. Essentially, for your database design I would follow good practices such as adding foreign keys and indexes to data you expect to be used frequently, normalise your data by splitting it into smaller tables and identify which data is to be read frequently and which is to be written frequently and optimise. Much more important than your database design for high performance web applications, is your effective use of caching both at the client level through HTML page caching and at the server level through cached data or serving up static files in place of dynamic files. The great thing about caching is that it can be added as it is needed, so that when your application does take off then you evolve accordingly. As far as your historical data is concerned, this is a great thing to cache as you do not expect it to change frequently. If you wish to produce regular and fairly intensive reports from your data, then it is good practise to put this data into another database so as not to bring your web application to a halt whilst they run. Of course this kind of optimisation really isn't necessary unless you think your application will warrant it. A: Why don't you post the schema you have now? It's too broad a question to answer usefully without some detail of what platform and database you're going to use and the table structure you're proposing... A: You should denormalize your tables if you find yourself joining 6+ tables in one query to retrieve data for a reporting type web page that will be hit often. 
Also, if you use ORM libraries like Hibernate or ActiveRecord, make sure to spend some time on the default mappings they generate and the SQL that ends up being generated. They tend to be very chatty with the database when you could have achieved the same results with one round trip to the database.
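To make the indexing advice concrete for the order-processing game in the question — the table and column names are invented for the example, not taken from it:

-- Orders are written once but read repeatedly, both by the processing job
-- and by players viewing their history, so index the common lookups.
CREATE INDEX idx_orders_player ON orders (player_id, submitted_at);
CREATE INDEX idx_orders_pending ON orders (processed, submitted_at);

Indexes like these are cheap to add later as well, so it is fine to start minimal and add them once real query patterns emerge.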
{ "language": "en", "url": "https://stackoverflow.com/questions/47486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Create a variable in .CSS file for use within that .CSS file Possible Duplicate: Avoiding repeated constants in CSS We have some "theme colors" that are reused in our CSS sheet. Is there a way to set a variable and then reuse it? E.g. .css OurColor: Blue H1 { color:OurColor; } A: There's no requirement that all styles for a selector reside in a single rule, and a single rule can apply to multiple selectors... so flip it around: /* Theme color: text */ H1, P, TABLE, UL { color: blue; } /* Theme color: emphasis */ B, I, STRONG, EM { color: #00006F; } /* ... */ /* Theme font: header */ H1, H2, H3, H4, H5, H6 { font-family: Comic Sans MS; } /* ... */ /* H1-specific styles */ H1 { font-size: 2em; margin-bottom: 1em; } This way, you avoid repeating styles that are conceptually the same, while also making it clear which parts of the document they affect. Note the emphasis on "conceptually" in that last sentence... This just came up in the comments, so I'm gonna expand on it a bit, since I've seen people making this same mistake over and over again for years - predating even the existence of CSS: two attributes sharing the same value does not necessarily mean they represent the same concept. The sky may appear red in the evening, and so do tomatoes - but the sky and the tomato are not red for the same reason, and their colors will vary over time independently. By the same token, just because you happen to have two elements in your stylesheet that are given the same color, or size or positioning does not mean they will always share these values. A naive designer who uses grouping (as described here) or a variable processor such as SASS or LESS to avoid value repetition risks making future changes to styling incredibly error-prone; always focus on the contextual meaning of styles when looking to reduce repetition, ignoring their current values. A: No, but Sass does this. It's a CSS preprocessor, allowing you to use a lot of shortcuts to reduce the amount of CSS you need to write. For example: $blue: #3bbfce; $margin: 16px; .content-navigation { border-color: $blue; color: darken($blue, 9%); } .border { padding: $margin / 2; margin: $margin / 2; border-color: $blue; } Beyond variables, it provides the ability to nest selectors, keeping things logically grouped: table.hl { margin: 2em 0; td.ln { text-align: right; } } li { font: { family: serif; weight: bold; size: 1.2em; } } There's more: mixins that act kind of like functions, and the ability to inherit one selector from another. It's very clever and very useful. If you're coding in Ruby on Rails, it'll even automatically compile it to CSS for you, but there's also a general purpose compiler that can do it for you on-demand. A: You're not the first to wonder and the answer is no. Elliotte has a nice rant on it: http://cafe.elharo.com/web/css-repeats-itself/. You could use JSP, or its equivalent, to generate the CSS at runtime. A: CSS doesn't offer any such thing. The only solution is to write a preprocessing script that is either run manually to produce static CSS output based on some dynamic pseudo-CSS, or that is hooked up to the web server and preprocesses the CSS prior to sending it to the client. A: That's not supported at the moment unless you use some script to produce the CSS based on some variables defined by you. It seems, though, that at least some people from the browser world are working on it. 
So, if it really becomes a standard sometime in the future, then we'll have to wait until it is implemented in all the browsers (it will be unusable until then). A: Since CSS does not have that (yet, I believe the next version will), follow Konrad Rudolphs advice for preprocesing. You probably want to use one that allready exists: m4 http://www.gnu.org/software/m4/m4.html A: You're making it too complicated. This is the reason the cascade exists. Simply provide your element selectors and class your color: h1 { color: #000; } .a-theme-color { color: #333; } Then apply it to the elements in the HTML, overriding when you need to use your theme colors. <h1>This is my heading.</h1> <h1 class="a-theme-color">This is my theme heading.</h1> A: I've written a macro (in Visual Studio) that allows me to not only code CSS for named colors but to easily calculate shades or blends of those colors. It also handles fonts. It fires on save and outputs a separate version of the CSS file. This is in line with Bert Bos's argument that any symbol processing in CSS take place at the point of authoring, not not at the point of interpretation. The full setup along with all the code would be a bit too complicated to post here, but might be appropriate for a blog post down the road. Here's the comment section from the macro which should be enough to get started. The goals of this approach are as follows: * *Allow base colors, fonts, etc. to be defined in a central location, so that an entire pallete or typographical treatment can be easily tweaked without having to use search/replace *Avoid having to map the .CSS extension in IIS *Generate garden-variety text CSS files that can be used, for example, by VisualStudio's design mode *Generate these files once at authoring time, rather than recalculating them every time the CSS file is requested *Generate these files instantly and transparently, without adding extra steps to the tweak-save-test workflow With this approach, colors, shades of colors, and font families are all represented with shorthand tokens that refer to a list of values in an XML file. The XML file containing the color and font definitions must be called Constants.xml and must reside in the same folder as the CSS files. The ProcessCSS method is fired by EnvironmentEvents whenever VisualStudio saves a CSS file. The CSS file is expanded, and the expanded, static version of the file is saved in the /css/static/ folder. (All HTML pages should reference the /css/static/ versions of the CSS files). The Constants.xml file might look something like this: <?xml version="1.0" encoding="utf-8" ?> <cssconstants> <colors> <color name="Red" value="BE1E2D" /> <color name="Orange" value="E36F1E" /> ... </colors> <fonts> <font name="Text" value="'Segoe UI',Verdana,Arial,Helvetica,Geneva,sans-serif" /> <font name="Serif" value="Georgia,'Times New Roman',Times,serif" /> ... </fonts> </cssconstants> In the CSS file, you can then have definitions like: font-family:[[f:Text]]; background:[[c:Background]]; border-top:1px solid [[c:Red+.5]]; /* 50% white tint of red */ A: You can achieve it and much more by using Less CSS. A: See also Avoiding repeated constants in CSS. As Farinha said, a CSS Variables proposal has been made, but for the time being, you want to use a preprocessor. A: You can use mutliple classes in the HTML element's class attribute, each providing part of the styling. 
So you could define your CSS as: .ourColor { color: blue; } .ourBorder { border: 1px solid blue; } .bigText { font-size: 1.5em; } and then combine the classes as required: <h1 class="ourColor">Blue Header</h1> <div class="ourColor bigText">Some big blue text.</div> <div class="ourColor ourBorder">Some blue text with blue border.</div> That allows you to reuse the ourColor class without having to define the colour mulitple times in your CSS. If you change the theme, simply change the rule for ourColour. A: This may sound like insanity, but if you are using NAnt (or Ant or some other automated build system), you can use NAnt properties as CSS variables in a hacky way. Start with a CSS template file (maybe styles.css.template or something) containing something like this: a { color: ${colors.blue}; } a:hover { color: ${colors.blue.light}; } p { padding: ${padding.normal}; } And then add a step to your build that assigns all the property values (I use external buildfiles and <include> them) and uses the <expandproperties> filter to generate the actual CSS: <property name="colors.blue" value="#0066FF" /> <property name="colors.blue.light" value="#0099FF" /> <property name="padding.normal" value="0.5em" /> <copy file="styles.css.template" tofile="styles.css" overwrite="true"> <filterchain> <expandproperties/> </filterchain> </copy> The downside, of course, is that you have to run the css generation target before you can check what it looks like in the browser. And it probably would restrict you to generating all your css by hand. However, you can write NAnt functions to do all sorts of cool things beyond just property expansion (like generating gradient image files dynamically), so for me it's been worth the headaches. A: CSS does not (yet) employ variables, which is understandable for its age and it being a declarative language. Here are two major approaches to achieve more dynamic style handling: * *Server-side variables in inline css Example (using PHP): <style> .myclass{color:<?php echo $color; ?>;} </style>   * *DOM manipulation with javascript to change css client-side Examples (using jQuery library): $('.myclass').css('color', 'blue'); OR //The jsvarColor could be set with the original page response javascript // in the DOM or retrieved on demand (AJAX) based on user action. $('.myclass').css('color', jsvarColor);
{ "language": "en", "url": "https://stackoverflow.com/questions/47487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52" }
Q: How to make your website look the same on Linux I have a fairly standards compliant XHTML+CSS site that looks great on all browsers on PC and Mac. The other day I saw it on FF3 on Linux and the letter spacing was slightly larger, throwing everything out of whack and causing unwanted wrapping and clipping of text. The CSS in question has font-size: 11px; font-family: Arial, Helvetica, sans-serif; I know it's going with the generic sans-serif, whatever that maps to. If I add the following, the text scrunches up enough to be close to what I get on the other platforms: letter-spacing: -1.5px; but this would involve some nasty server-side OS sniffing. If there's a pure CSS solution to this I'd love to hear it. The system in question is Ubuntu 7.04 but that is irrelevant as I'm looking to fix it for at least the majority of, if not all, Linux users. Of course asking the user to install a font is not an option! A: A List Apart has a pretty comprehensive article on sizing fonts in CSS. Their conclusion is to use "ems" to size text, since it generally gives the most consistent sizing across browsers. They make no direct mention of different OSes, but you should try using ems. It might solve your problem. A: Have you tried it in FF3 on windows? Personally, I find a good CSS Reset goes a long way in making your page look the same in all browsers. A: I find the easiest way to solve font sizing problems between browsers is to simply leave room for error. Make divs slightly larger or fonts slightly smaller so that platform variation doesn't change wrapping or clipping considerably. A: Sizing/spacing differences are usually difficult to catch. What you can do is create a Linux-specific CSS file that will contain these values adjusted for Linux, then do a simple JS-based detect to inject that CSS if the User agent is a Linux one. This is probably not the cleanest approach, but it will work, and with the least intrusion into your otherwise clean HTML/CSS. A: Unless your site is expecting an above-normal amount of Linux-based traffic, you're probably going to adversely affect more people if you "sacrifice the user’s ability to adjust his or her reading environment" as opposed to just not caring about the Linux experience. Having said that, if you do want a nice Linux experience, you should address the reasons behind why your design breaks under small variations in font spacing, given that these issues are difficult to control under current CSS implementations.
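A small sketch of the em-based sizing suggested above; the values are only illustrative, and adding a metrically similar Linux face such as Liberation Sans to the stack is my assumption, not something from the answers:

body { font-size: 100%; }  /* keep the platform/user default as the baseline */

#content {
  /* roughly 11px at the common 16px default, but it scales with the platform */
  font-size: 0.6875em;
  font-family: Arial, Helvetica, "Liberation Sans", sans-serif;
}

Leaving a little slack in the containing boxes, as suggested above, still matters: even with em sizing, different font substitutions will not produce pixel-identical line lengths.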
{ "language": "en", "url": "https://stackoverflow.com/questions/47519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Using yield to iterate over a datareader might not close the connection? Here is a sample code to retrieve data from a database using the yield keyword that I found in a few place while googling around : public IEnumerable<object> ExecuteSelect(string commandText) { using (IDbConnection connection = CreateConnection()) { using (IDbCommand cmd = CreateCommand(commandText, connection)) { connection.Open(); using (IDbDataReader reader = cmd.ExecuteReader()) { while(reader.Read()) { yield return reader["SomeField"]; } } connection.Close(); } } } Am I correct in thinking that in this sample code, the connection would not be closed if we do not iterate over the whole datareader ? Here is an example that would not close the connection, if I understand yield correctly.. foreach(object obj in ExecuteSelect(commandText)) { break; } For a db connection that might not be catastrophic, I suppose the GC would clean it up eventually, but what if instead of a connection it was a more critical resource? A: Connection will be closed automatically since you're using it inside "using" block. A: From the simple test I have tried, aku is right, dispose is called as soon as the foreach block exit. @David : However call stack is kept between call, so the connection would not be closed because on the next call we would return to the next instruction after the yield, which is the while block. My understanding is that when the iterator is disposed, the connection would also be disposed with it. I also think that the Connection.Close would not be needed because it would be taken care of when the object is disposed because of the using clause. Here is a simple program I tried to test the behavior... class Program { static void Main(string[] args) { foreach (int v in getValues()) { Console.WriteLine(v); } Console.ReadKey(); foreach (int v in getValues()) { Console.WriteLine(v); break; } Console.ReadKey(); } public static IEnumerable<int> getValues() { using (TestDisposable t = new TestDisposable()) { for(int i = 0; i<10; i++) yield return t.GetValue(); } } } public class TestDisposable : IDisposable { private int value; public void Dispose() { Console.WriteLine("Disposed"); } public int GetValue() { value += 1; return value; } } A: The Iterator that the compiler synthesises implements IDisposable, which foreach calls when the foreach loop is exited. The Iterator's Dispose() method will clean up the using statements on early exit. As long as you use the iterator in a foreach loop, using() block, or call the Dispose() method in some other way, the cleanup of the Iterator will happen. A: Judging from this technical explanation, your code will not work as expected, but abort on the second item, because the connection was already closed when returning the first item. @Joel Gauvreau : Yes, I should have read on. Part 3 of this series explains that the compiler adds special handling for finally blocks to trigger only at the real end.
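To underline the Dispose() point, here is roughly what the compiler expands the foreach over ExecuteSelect into (a sketch, assuming commandText is in scope):

IEnumerator<object> e = ExecuteSelect(commandText).GetEnumerator();
try
{
    while (e.MoveNext())
    {
        object obj = e.Current;
        break;  // leaving the loop early still falls through to finally
    }
}
finally
{
    // Disposing the iterator runs the pending using/finally blocks inside
    // ExecuteSelect, which closes the reader and the connection.
    e.Dispose();
}

So the resources are cleaned up on early exit as long as the enumerator is consumed via foreach (or disposed explicitly, as above).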
{ "language": "en", "url": "https://stackoverflow.com/questions/47521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Maven2 Eclipse integration There seem to be two rival Eclipse plugins for integrating with Maven: m2Eclipse and q4e. Has anyone recently evaluated or used these plugins? Why would I choose one or the other? A: Side by side comparison table of three maven plugins. A: There is only one point where q4e is actually better: dependency viewer. You could see the dependency tree, manage your dependencies visually and even see them in a graph. But, m2eclipse works in a better way, specially because you can create you own build commands (in the run menu). q4e comes with some predefined commands and I can't find where to define a new one. In other words, m2eclipse is more friendly to the maven way. A: I have been using m2Eclipse for quiet some time now and have found it to be very reliable. I wasn't aware of q4e until I saw this question so I can't recommend one over the other. A: My 2cents, I am using eclipse for some months now with m2eclipse integration. It's easy to use and straight forward. Once you associate your project to maven and update the dependencies using m2eclipse, any change to pom.xml are reflected to entire project, even Java version definition causes it to be compiled in right JRE (if you have it installed, and properly configured into eclipse.) Another advantage I found is the maven plug-ins are easy to use integrated with eclipse (jetty being my best example, again, properly configured you can easily integrate maven, jetty-plug-in and Eclipse Debugger) Compilation, packaging and all other maven features are equally easy to use with a couple clicks or shortcuts. About q4e I have been reading a lot of good stuff about it and seems the next versions will do a lot more than m2eclipse, with a better dependency management and even visual graphs (!) but the general opinion is that m2eclipse is still better than q4e but q4e is getting better each new version and maybe will surpass m2eclipse soon.
{ "language": "en", "url": "https://stackoverflow.com/questions/47522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: JUnit for database code I've been trying to implement unit testing and currently have some code that does the following: * *query external database, loading into a feed table *query a view, which is a delta of my feed and data tables, updating data table to match feed table my unit testing strategy is this: I have a testing database that I am free to manipulate. * *in setUP(), load some data into my testing db *run my code, using my testing db as the source *inspect the data table, checking for counts and the existence/non existence of certain records *clear testing db, loading in a different set of data *run code again *inspect data table again Obviously I have the data sets that I load into the source db set up such that I know certain records should be added,deleted,updated, etc. It seems like this is a bit cumbersome and there should be an easier way? any suggestions? A: Is it your intent to test the view which generates the deltas, or to test that your code correctly adds, deletes and updates in response to the view? If you want to test the view, you could use a tool like DBUnit to populate your feed and data tables with various data whose delta you've manually calculated. Then, for each test you would verify that the view returns a matching set. If you want to test how your code responds to diffs detected by the view, I would try to abstract away database access. I imagine an java method to which you can pass a result set (or list of POJO/DTO's) and returns a list of parameter Object arrays (again, or POJO's) to be added. Other methods would parse the diff list for items to be removed and updated. You could then create a mock result set or pojo's, pass them to your code and verify the correct parameters are returned. All without touching a database. I think the key is to break your process into parts and test each of those as independently as possible. A: DbUnit will meet your needs. One thing to watch out for is that they have switched to using SLF4J as their logging facade instead of JCL. You can configure SLF4J to forward the logging to JCL but be warned if you are using Maven DbUnit sucks in their Nop log provider by default so you will have to use an exclusion, I blogged about this conflict recently. A: I use DbUnit, but also I work very hard to not to have to test against the DB. Tests that go against the database should only exist for the purpose of testing the database interface. So I have Mock Db Connections that I can set the data for use in all the rest of my tests. A: If you are using Maven, one option is to use the sql-maven-plugin. It allows you to run database initialization/population scripts during the maven build cycle. A: Apart from the already suggested DBUnit, you may want to look into Unitils. It uses DBUnit, but provides more than that (quoting from the site): * *Automatic maintenance of databases, with support for incremental, repeatable and post processing scripts *Automatically disable constraints and set sequences to a minimum value *Support for Oracle, Hsqldb, MySql, DB2, Postgresql, MsSql and Derby *Simplify test database connection setup *Simple insertion of test data with DBUnit * Run tests in a transaction *JPA entity manager creation and injection for hibernate, toplink and * Hibernate SessionFactory creation and session *Automatically test the mapping of JPA entities / hibernate mapped objects with the database
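A bare-bones sketch of the DbUnit-based setup step suggested above — the class and method names are from memory of the DbUnit 2.x API (and getTestConnection is a hypothetical helper), so verify them against the version you use:

import java.io.FileInputStream;
import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSet;
import org.dbunit.operation.DatabaseOperation;
import junit.framework.TestCase;

public class FeedDeltaTest extends TestCase {

    protected void setUp() throws Exception {
        // Points at the disposable testing database, never at production.
        java.sql.Connection jdbc = getTestConnection(); // hypothetical helper
        IDatabaseConnection connection = new DatabaseConnection(jdbc);
        IDataSet fixture = new FlatXmlDataSet(new FileInputStream("feed-fixture.xml"));

        // Wipe the feed table and insert the known fixture rows.
        DatabaseOperation.CLEAN_INSERT.execute(connection, fixture);
    }

    public void testDeltaIsAppliedToDataTable() throws Exception {
        // run the load/update code under test here, then assert on row
        // counts and on the presence/absence of the expected records
    }
}

That keeps each test to the load-run-verify shape described in the question, with the fixture files documenting exactly which adds, deletes and updates are expected.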
{ "language": "en", "url": "https://stackoverflow.com/questions/47524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Reading from a socket in C# I am trying to read ASCII text response from a tcp open streaming socket This is what I have come up with so far. I want to know what is the best way to do this that includes error handling. Should I also be checking the SocketError after the Receive call in the do loop? IPEndPoint ep = new IPEndPoint(IPAddress.Parse("192.168.1.1"), 9000); Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); sock.SendTimeout = 5000; sock.ReceiveTimeout = 5000; if (null != sock) { try { sock.Connect(ep); if (sock.Connected) { // Blocks until send returns. int bytesSent = sock.Send(buffer); Console.WriteLine("Sent {0} bytes.", bytesSent); //get the first 4 bytes, should be the lenngth of the rest of the response. byte[] response = new byte[4]; int bytesRec = sock.Receive(response); int totalBytesRec = 0; if (4 == bytesRec) { int len = BitConverter.ToInt32(response, 0); response = new byte[len]; Console.WriteLine("Trying to get {0} bytes.", len); bytesRec = 0; do { bytesRec = sock.Receive(response); totalBytesRec += bytesRec; forecast += Encoding.ASCII.GetString(response, 0, bytesRec); } while (totalBytesRec < len && bytesRec > 0); if (totalBytesRec != len) throw new Exception("The total bytes recieved from manager did not equal the expected bytes"); } else throw new Exception("Unable to get the response size from the manager"); Console.WriteLine("Received {0} bytes.", totalBytesRec); } } catch (SocketException ex) { Console.WriteLine("{0} Error code: {1}.", ex.Message, ex.ErrorCode); } catch (Exception ex) { Console.WriteLine(ex.Message); } finally { sock.Shutdown(SocketShutdown.Both); sock.Close(); } } A: Your best bet is probably TcpClient. There's a great sample that does exactly what you're doing right in the .NET Documentation here: http://msdn.microsoft.com/en-us/library/system.net.sockets.tcpclient.aspx Good luck to you!
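To make the TcpClient suggestion concrete, here is a rough sketch that keeps the same 4-byte length prefix protocol as the question; the address, port and request bytes are placeholders, and error handling is reduced to letting exceptions bubble up to the caller:

    using System;
    using System.IO;
    using System.Net.Sockets;
    using System.Text;

    class ForecastClient
    {
        static string ReadResponse(byte[] request)
        {
            using (TcpClient client = new TcpClient("192.168.1.1", 9000))
            using (NetworkStream stream = client.GetStream())
            {
                client.SendTimeout = 5000;
                client.ReceiveTimeout = 5000;

                stream.Write(request, 0, request.Length);

                // First 4 bytes carry the length of the rest of the response.
                int len = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
                return Encoding.ASCII.GetString(ReadExactly(stream, len));
            }
        }

        // Read() may return fewer bytes than asked for, so loop until we have them all.
        static byte[] ReadExactly(NetworkStream stream, int count)
        {
            byte[] buffer = new byte[count];
            int offset = 0;
            while (offset < count)
            {
                int read = stream.Read(buffer, offset, count - offset);
                if (read == 0)
                    throw new IOException("Connection closed before the full response arrived.");
                offset += read;
            }
            return buffer;
        }
    }

The looping read matters with either API: a single Receive/Read call is not guaranteed to return the whole payload.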
{ "language": "en", "url": "https://stackoverflow.com/questions/47533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: SQL to add column with default value - Access 2003 Updating an old ASP/Access site for a client - I need SQL to add a column to an existing table and set a default value. Doesn't work - any ideas? This works fine ALTER TABLE documents ADD COLUMN membersOnly NUMBER I want this to work: ALTER TABLE documents ADD COLUMN membersOnly NUMBER DEFAULT 0 Have googled and seen instructions for default values work for other field types but I want to add number. Thanks! A: Tools -> Options -> Tables/Queries -> (At the bottom right:) Sql Server Compatible Syntax - turn option on for this database. then you can execute your query: ALTER TABLE documents ADD COLUMN membersOnly NUMBER DEFAULT 0 A: With ADO, you can execute a DDL statement to create a field and set its default value. CurrentProject.Connection.Execute _ "ALTER TABLE discardme ADD COLUMN membersOnly SHORT DEFAULT 0" A: How are you connecting to the database to run the update SQL? You can use the ODBC compatible mode through ADO. Without opening the database in Access. A: You may find Sql Server Compatible Syntax is already turned on, so definately worth just trying to run the sql statement mentioned above (via an ADO connection from ASP) before resorting to taking the db offline. Thanks, this helped me out. A: Tools -> Options -> Tables/Queries -> (At the bottom right:) Sql Server Compatible Syntax - turn option on for this database. is not found on MS Access 2010
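Since the site is classic ASP, here is a hedged sketch of running the DDL through an ADO connection from a page, which uses the Jet OLE DB provider and so accepts the ANSI-92 style DEFAULT clause; the database path is a placeholder:

    <%
    ' Run the ALTER TABLE through ADO so Jet's ANSI-92 dialect (with DEFAULT) is used.
    Dim conn
    Set conn = Server.CreateObject("ADODB.Connection")
    conn.Open "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" & Server.MapPath("db/site.mdb")

    conn.Execute "ALTER TABLE documents ADD COLUMN membersOnly NUMBER DEFAULT 0"

    conn.Close
    Set conn = Nothing
    %>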
{ "language": "en", "url": "https://stackoverflow.com/questions/47535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: C# Console? Does anyone know if there is a c# Console app, similar to the Python or Ruby console? I know the whole "Compiled versus Interpreted" difference, but with C#'s reflection power I think it could be done. UPDATE Well, it only took about 200 lines, but I wrote a simple one...It works a lot like osql. You enter commands and then run them with go. SharpConsole http://www.gfilter.net/junk/sharpconsole.jpg If anyone wants it, let me know. A: Found this on reddit: http://tirania.org/blog/archive/2008/Sep-08.html Quote from the site: The idea was simple: create an interactive C# shell by altering the compiler to generate and execute code dynamically as opposed to merely generating static code. A: If you don't have to use the console, and just want a place to test some ad hoc C# snippets, then LinqPad is a good option. I find it very cool/easy to use. A: I am not sure what you are looking for this application to accomplish. If it is just to try some code without having to create a project and all the overhead to just test an idea, then SnippetCompiler could be a good fit. I just wanted to give you another option. A: It appears Miguel De Icaza was stalking me: http://tirania.org/blog/archive/2008/Sep-08.html A: Given your mention of "C#'s reflection power", I am unsure whether you're looking for an interactive C# console for small code snippets of your own (à la Ruby's irb), or a means of interacting with an existing, compiled application currently running as a process. In the former case: * *Windows PowerShell might be your friend *Another candidate would be the C# shell *Finally, CSI, a Simple C# Interpreter A: Google reveals a few efforts at this. One in particular illustrates why this is less straightforward than it might seem. http://www.codeproject.com/KB/cs/csi.aspx has a basic interpreter using .NET's built in ability to compile c# code. A key problem is that the author's approach creates a new mini .NET assembly for each interpreted line. C# may have the reflective power to have a python or ruby style console, but the .NET framework libraries are geared toward compiling C#, not dynamically interpreting it. If you are serious about this, you may want to look at http://www.paxscript.net/, which seems like a genuine attempt at interpreted C#. A: I believe you are looking for Snippy =)
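For anyone curious how such a console can work under the hood, here is a small sketch of the usual compile-and-invoke trick with CodeDom (not taken from any of the tools above; the wrapper class and method names are invented):

    // Minimal sketch: compile a snippet in memory with CodeDom and run it via reflection.
    using System;
    using System.CodeDom.Compiler;
    using System.Reflection;
    using Microsoft.CSharp;

    class SnippetRunner
    {
        static void Run(string statements)
        {
            string source =
                "using System; " +
                "public static class Snippet { public static void Exec() { " + statements + " } }";

            CompilerParameters options = new CompilerParameters();
            options.GenerateInMemory = true;
            options.ReferencedAssemblies.Add("System.dll");

            CSharpCodeProvider provider = new CSharpCodeProvider();
            CompilerResults results = provider.CompileAssemblyFromSource(options, source);

            if (results.Errors.HasErrors)
            {
                foreach (CompilerError error in results.Errors)
                    Console.WriteLine(error.ErrorText);
                return;
            }

            MethodInfo exec = results.CompiledAssembly.GetType("Snippet").GetMethod("Exec");
            exec.Invoke(null, null);
        }

        static void Main()
        {
            Run("Console.WriteLine(DateTime.Now);");
        }
    }

A real console also has to carry state (variables, usings) from one entered line to the next, which is where most of the extra work goes.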
{ "language": "en", "url": "https://stackoverflow.com/questions/47537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Where's the Win32 resource for the mouse cursor for dragging splitters? I am building a custom win32 control/widget and would like to change the cursor to a horizontal "splitter" symbol when hovering over a particular vertical line in the control. IE: I want to drag this vertical line (splitter bar) left and right (WEST and EAST). Of the system cursors (OCR_*), the only cursor that makes sense is the OCR_SIZEWE. Unfortunately, that is the big, awkward cursor the system uses when resizing a window. Instead, I am looking for the cursor that is about 20 pixels tall and around 3 or 4 pixels wide with two small arrows pointing left and right. I can easily draw this and include it as a resource in my application but the cursor itself is so prevalent that I wanted to be sure I wasn't missing something. For example: when you use the COM drag and drop mechanism (CLSID_DragDropHelper, IDropTarget, etc) you implicitly have access to the "drag" icon (little box under the pointer). I didn't see an explicit OCR_* constant for this guy ... so likewise, if I can't find this splitter cursor outright, I am wondering if it is part of a COM object or something else in the win32 lib. A: There are all sorts of icons, cursors, and images in use throughout the Windows UI which are not publicly available to 3rd-party software. Of course, you could still load up the module in which they reside and use them, but there's really no guarantee your program will keep working after a system update / upgrade. Include your own. The last thing you want is adding an extra dependency over a tiny little cursor. A: I had this exact problem. When I looked back over some old code for a vertical splitter thinking I had an easy answer, it turned out that I had built and loaded my own resource: SetCursor( LoadCursor( ghInstance, "IDC_SPLITVERT" )); I vaguely remember investing some considerable time and effort into finding the system way of doing it, so (my guess) is that there is not a system ICON readily available to do the job, so you are better off rolling your own. This is one of those times when I would like to be wrong, as I would have liked there to be a system icon for this job.
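To flesh out the roll-your-own answer, one common way to wire it up is a cursor resource in your .rc file plus a WM_SETCURSOR handler; the resource name, ghInstance and the hit-test helper below are assumptions for the example, not system-defined names:

    // splitter.rc -- ship your own .cur file as a named resource:
    //   IDC_SPLITVERT CURSOR "splitvert.cur"

    #include <windows.h>

    // ghInstance is assumed to be your module's HINSTANCE, saved at startup.
    LRESULT CALLBACK SplitterWndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_SETCURSOR:
            if (CursorIsOverSplitter(hWnd))   // hypothetical hit test against the splitter band
            {
                SetCursor(LoadCursor(ghInstance, TEXT("IDC_SPLITVERT")));
                return TRUE;                  // non-zero tells Windows the cursor is handled
            }
            break;                            // otherwise fall through for the normal arrow
        }
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }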
{ "language": "en", "url": "https://stackoverflow.com/questions/47538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Practical Experience using Stripes? I am coming from an Enterprise Java background which involves a fairly heavyweight software stack, and have recently discovered the Stripes framework; my initial impression is that this seems to do a good job of minimising the unpleasant parts of building a web application in Java. Has anyone used Stripes for a project that has gone live? And can you share your experiences from the project? Also, did you consider any other technologies and (if so) why did you chose Stripes? A: We've been using Stripes for about 4 years now. Our stack is Stripes/EJB3/JPA. Many use Stripes plus Stripernate as a single, full stack solution. We don't because we want our business logic within the EJB tier, so we simply rely on JPA Entities as combined Model and DTO. Stripes does the binding to our Entities/DTO and we shove them back in to the EJB tier for work. For most of our CRUD stuff this is very thing and straightforward, making our 80% use case trivial to work with. Yet we have the flexibility to do whatever we want for the edge cases that always come up with complicate applications. We have a very large base Action Bean which encapsulates the bulk of our CRUD operations that makes call backs in to the individual subclasses specific to the entities and forms. We also have a large internal tag file library to manage our pages, security, navigation, tasks, etc. A simple CRUD edit form is little more than a list of field names, and we get all of the chrome and menus and access controls "for free". The beauty of this is that we get to keep the HTTP request based metaphor that we like and we get to choose the individual parts of the system rather than use one fat stack. The Stripes layer is lean and mean, and never gets in our way. We have a bunch of Ajax integrating YUI and JQuery, all working against our Stripes and EJB stack painlessly. I also ported a lighter version of the stack to GAE for a sample project, basically having to do minor work to our EJB tier. So, the entire stack is very nimble and amicable to change. Stripes is a big factor of that since we let it do the few things that it does, and does very well. Then delegate the rest to other parts of the stack. As always there are parts folks would rather have different at times, but Stripes would be the last part to go in our stack, frankly. It could be better at supporting the full HTTP verb set, but I'd rather fix Stripes to do that better than switch over to something else. A: We use stripes now on all our production sites, and have been for about a year now. It is an awesome product compared to struts, which we used to use before that. Just the fact that there are literally no XML config files and that you can set it all up with a minimal amount of classes and annotations is awesome. In terms of scaling & speed it actually seems to be better than struts, and my guess would be because there are less layers involved. The code you end up with is a lot cleaner as well, because you don't have to go off to seperate XML files to find out where redirects are going. We use it with an EJB3 backend, and the two seem to work really well together, because you can use your EJB POJO inside your actionBean object, without needing a form object like in struts. In our evaluation we considered an alpha version of struts (that supported annotations) and a lot of other frameworks, but stripes won because of it's superior documentation, stability and clean-ness. 
Couldn't figure out how to leave a comment: so to answer your second question we haven't encountered a single bug in Stripes that I know of. This is quite impressive for an open source framework. I haven't tried the latest version (1.5) yet, but 1.4.x is very stable. A: We converted a home-grown web framework to stripes in about a week. We're using it in production at this time and it's a great framework. The community is extremely helpful, and the framework doesn't get in your way. It can be extended in many places to change the behavior as you see fit. The url binding feature is awesome as well. We implemented a robust security framework using annotations and interceptors. We're using spring for dependency injection and stripes has excellent support for that. I'd definitely use the new 1.5 release if you're going to use it. I'm a huge fan of the framework. I came from a struts background and it's the exact framework I was looking for. The other developers on our team really enjoy using the stripes framework. I just bought the stripes beta book from the pragmatic programmer's site. It's a great resource on Stripes 1.5. A: We have now used Stripes in multiple production projects and so far the experience has been great. Setup time is low and the configuration management issues seem to be fewer. We have webapps running with Stripes/Dojo/Hibernate and others with a mix of Stripes/Spring/JSP/Jquery etc. Adding Stripes to our existing projects was fairly simple thanks to their support for integrating existing Spring configurations. Using Stripes with JSP is fun although sometimes you do feel the need to code in Java and not have to use the JSTL so much. Note: This is an old question, but given that it pops up pretty fast when you search for Stripes usage, I am adding a response to it. A: I also came from a Struts and JSF background into Stripes. I went from a large enterprise environment that used mostly struts and JSF on newer projects, to a smaller environment that did all their J2EE in Stripes. Seems like Stripes gives you what you want in a Web Framework without getting in the way too much. Not much configuration is necessary, as others have already mentioned. Very quick development and allows you to focus on presentation etc. instead of hassling with the framework. If I had to start a fresh new project and I had my say, I would choose either Stripes or JSF. I might have been scared away from Stripes if I had to make the decision to switch to it, because it kind of looks/feels like a sourceforge basement project instead of a enterprise-grade framework, but it seems to be fairly solid. We use Stripernate for easy ORM. However, it reminds me of Fruit Stripe gum, which lost its flavor WAY TOO FAST. A: Stripes is yesterdays technology, if you can pick something a little more modern like GWT.
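For readers who have not seen Stripes code, the "no XML config" point made above boils down to beans like the following sketch; the URL, JSP paths and property are invented for the example:

    import net.sourceforge.stripes.action.ActionBean;
    import net.sourceforge.stripes.action.ActionBeanContext;
    import net.sourceforge.stripes.action.DefaultHandler;
    import net.sourceforge.stripes.action.ForwardResolution;
    import net.sourceforge.stripes.action.Resolution;
    import net.sourceforge.stripes.action.UrlBinding;

    @UrlBinding("/task/Edit.action")
    public class TaskActionBean implements ActionBean {

        private ActionBeanContext context;
        private String title;   // bound automatically from the form field named "title"

        public ActionBeanContext getContext() { return context; }
        public void setContext(ActionBeanContext context) { this.context = context; }

        public String getTitle() { return title; }
        public void setTitle(String title) { this.title = title; }

        @DefaultHandler
        public Resolution view() {
            return new ForwardResolution("/WEB-INF/jsp/task_edit.jsp");
        }

        public Resolution save() {
            // hand off to whatever back end you use (EJB3/JPA in the posts above)
            return new ForwardResolution("/WEB-INF/jsp/task_saved.jsp");
        }
    }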
{ "language": "en", "url": "https://stackoverflow.com/questions/47555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Can I detect and handle MySQL Warnings with PHP? I'm dealing with a MySQL table that defines the JobName column as UNIQUE. If somebody tries to save a new Job to the database using a JobName that is already in the database, MySQL throws a warning. I would like to be able to detect this warning, just like an error, in my PHP script and deal with it appropriately. Ideally I would like to know what kind of warning MySQL has thrown so that I can branch the code to handle it. Is this possible? If not, is it because MySQL doesn't have this ability, PHP doesn't have this ability, or both? A: First, you should turn warnings off so that your visitors don't see your MySQL errors. Second, when you call mysql_query(), you should check to see if it returned false. If it did, call mysql_errno() to find out what went wrong. Match the number returned to the error codes on this page. It looks like this is the error number you're looking for: Error: 1169 SQLSTATE: 23000 (ER_DUP_UNIQUE) Message: Can't write, because of unique constraint, to table '%s' A: ini_set('mysql.trace_mode', 1) may be what you are looking for. The PHP errors can then be handled with a custom PHP error handler, but you can also just turn off displaying php errors as they are usually logged into a log file (depends on your php configuration). A: For warnings to be "flagged" to PHP natively would require changes to the mysql/mysqli driver, which is obviously beyond the scope of this question. Instead you're going to have to basically check every query you make on the database for warnings: $warningCountResult = mysql_query("SELECT @@warning_count"); if ($warningCountResult) { $warningCount = mysql_fetch_row($warningCountResult ); if ($warningCount[0] > 0) { //Have warnings $warningDetailResult = mysql_query("SHOW WARNINGS"); if ($warningDetailResult ) { while ($warning = mysql_fetch_assoc($warningDetailResult) { //Process it } } }//Else no warnings } Obviously this is going to be hideously expensive to apply en-mass, so you might need to carefully think about when and how warnings may arise (which may lead you to refactor to eliminate them). For reference, MySQL SHOW WARNINGS Of course, you could dispense with the initial query for the SELECT @@warning_count, which would save you a query per execution, but I included it for pedantic completeness. A: Updated to remove the stuff about errno functions which I now realize don't apply in your situation... One thing in MySQL to be wary of for UPDATE statements: mysqli_affected_rows() will return zero even if the WHERE clause matched rows, but the SET clause didn't actually change the data values. I only mention this because that behaviour caused a bug in a system I once looked at--the programmer used that return value to check for errors after an update, assuming a zero meant that some error had occurred. It just meant that the user didn't change any existing values before clicking the update button. So I guess using mysqli_affected_rows() can't be relied upon to find such warnings either, unless you have something like an update_time column in your table that will always be assigned a new timestamp value when updated. That sort of workaround seems kinda kludgey though. A: depending on what (if any) framework you're using, I suggest you do a query to check for the jobname yourself and create the proper information to user in with the rest of the validations for the form. 
Depending on the number of jobnames, you could send the names to the view that contains the form and use javascript to tell use which is taken. If this doesnt make sense to you, then to sum my view it's this: dont design your program and / or user to try to do illegal things and catch the errors when they do and handle it then. It is much better, imho, to design your system to not create errors. Keep the errors to actual bugs :) A: You can detect Unique key violations using mysqli statement error no. The mysqli statement returns error 1062 , that is ER_DUP_ENTRY. You can look for error 1062 and print a suitable error message. If you want to print your column (jobName) also as part of your error message then you should parse the statement error string. if($stmt = $mysqli->prepare($sql)){ $stmt->bind_param("sss", $name, $identKey, $domain); $stmt->execute(); if($mysqli->affected_rows != 1) { //This will return errorno 1062 trigger_error('mysql error >> '.$stmt->errno .'::' .$stmt->error, E_USER_ERROR); exit(1); } $stmt->close(); } else { trigger_error('mysql error >> '. $mysqli->errno.'::'.$mysqli->error,E_USER_ERROR); } A: It is possible to get the warnings, and in a more efficient way with mysqli than with mysql. Here is the code suggested on the manual page on php.net for the property mysqli->warning_count: $mysqli->query($query); if ($mysqli->warning_count) { if ($result = $mysqli->query("SHOW WARNINGS")) { $row = $result->fetch_row(); printf("%s (%d): %s\n", $row[0], $row[1], $row[2]); $result->close(); } } A: Note on suppressing warnings: Generally, it is not a good idea to prevent warnings from being displayed since you might be missing something important. If you absolutely must hide warnings for some reason, you can do it on an individual basis by placing an @ sign in front of the statement. That way you don't have to turn off all warning reporting and can limit it to a specific instance. Example: // this suppresses warnings that might result if there is no field titled "field" in the result $field_value = @mysql_result($result, 0, "field");
{ "language": "en", "url": "https://stackoverflow.com/questions/47589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How can I use Core Animation to interpolate property values over time on my own classes? Specifically, I am looking to use CA on properties of types other than * *integers and doubles *CGRect, CGPoint, CGSize, and CGAffineTransform structures *CATransform3D data structures *CGColor and CGImage references and in objects other than CALayers or NSViews A: If you can do the changes yourself and the class you use is custom, you might want to add a setProgress:(float) f method to your class and use CA to animate it, then modify the desired properties as needed as a function of f. Just do a [[someObject animator] setValue:[NSNumber numberWithFloat:1.0] forKeyPath:@"someCustomProperty.progress"]; or if the object doesn't have an animator, create the correct CAAnimation yourself. A: Well, it seems I cannot do that. What I should be doing is subclassing NSAnimation (https://developer.apple.com/library/content/documentation/Cocoa/Conceptual/AnimationGuide/Articles/TimingAnimations.html). This will work in a Mac OS X 10.4+ app, but not on Cocoa Touch, in which I cannot find any alternatives apart from using an NSTimer. A: I am not sure exactly what you are trying to do, but if you want to use CA on other properties that is not an issue. You just need to register appropriate actions for the key path. There is an example of doing this in Apple's Core Animation documentation, using CAAction. Specifically you implement actionForLayer:forKey: to configure the default animation behaviour of that key, and if you want to make the property animate implicitly you implement runActionForKey:object:arguments: . As for animating objects other than CALayers, I really don't understand. Layers are the root visual entity in Core Animation. Additionally, on the iPhone every single UIView is backed by a layer; I do not believe there is anything on the iPhone's screen that is not in a layer, so I don't understand why you are worried about using animation on something that is not a CALayer.
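To illustrate the NSAnimation route from the reply above (Mac OS X only, pre-ARC style), here is a minimal sketch; the target object and the "loudness" key are invented, and the ivars would be set up in your own initializer:

    #import <Cocoa/Cocoa.h>

    @interface MyPropertyAnimation : NSAnimation {
        id target;          // object whose custom property gets interpolated
        float startValue;
        float endValue;
    }
    @end

    @implementation MyPropertyAnimation

    - (void)setCurrentProgress:(NSAnimationProgress)progress
    {
        [super setCurrentProgress:progress];

        // currentValue has the animation curve already applied to the raw progress.
        float value = startValue + (endValue - startValue) * [self currentValue];
        [target setValue:[NSNumber numberWithFloat:value] forKey:@"loudness"];
    }

    @end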
{ "language": "en", "url": "https://stackoverflow.com/questions/47591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: String concatenation: concat() vs "+" operator Assuming String a and b: a += b a = a.concat(b) Under the hood, are they the same thing? Here is concat decompiled as reference. I'd like to be able to decompile the + operator as well to see what that does. public String concat(String s) { int i = s.length(); if (i == 0) { return this; } else { char ac[] = new char[count + i]; getChars(0, count, ac, 0); s.getChars(0, i, ac, count); return new String(0, count + i, ac); } } A: No, not quite. Firstly, there's a slight difference in semantics. If a is null, then a.concat(b) throws a NullPointerException but a+=b will treat the original value of a as if it were null. Furthermore, the concat() method only accepts String values while the + operator will silently convert the argument to a String (using the toString() method for objects). So the concat() method is more strict in what it accepts. To look under the hood, write a simple class with a += b; public class Concat { String cat(String a, String b) { a += b; return a; } } Now disassemble with javap -c (included in the Sun JDK). You should see a listing including: java.lang.String cat(java.lang.String, java.lang.String); Code: 0: new #2; //class java/lang/StringBuilder 3: dup 4: invokespecial #3; //Method java/lang/StringBuilder."<init>":()V 7: aload_1 8: invokevirtual #4; //Method java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder; 11: aload_2 12: invokevirtual #4; //Method java/lang/StringBuilder.append:(Ljava/lang/String;)Ljava/lang/StringBuilder; 15: invokevirtual #5; //Method java/lang/StringBuilder.toString:()Ljava/lang/ String; 18: astore_1 19: aload_1 20: areturn So, a += b is the equivalent of a = new StringBuilder() .append(a) .append(b) .toString(); The concat method should be faster. However, with more strings the StringBuilder method wins, at least in terms of performance. The source code of String and StringBuilder (and its package-private base class) is available in src.zip of the Sun JDK. You can see that you are building up a char array (resizing as necessary) and then throwing it away when you create the final String. In practice memory allocation is surprisingly fast. Update: As Pawel Adamski notes, performance has changed in more recent HotSpot. javac still produces exactly the same code, but the bytecode compiler cheats. Simple testing entirely fails because the entire body of code is thrown away. Summing System.identityHashCode (not String.hashCode) shows the StringBuffer code has a slight advantage. Subject to change when the next update is released, or if you use a different JVM. From @lukaseder, a list of HotSpot JVM intrinsics. A: Basically, there are two important differences between + and the concat method. * *If you are using the concat method then you would only be able to concatenate strings while in case of the + operator, you can also concatenate the string with any data type. For Example: String s = 10 + "Hello"; In this case, the output should be 10Hello. String s = "I"; String s1 = s.concat("am").concat("good").concat("boy"); System.out.println(s1); In the above case you have to provide two strings mandatory. 
*The second and main difference between + and concat is that: Case 1: Suppose I concatenate the same strings with the concat method in this way String s="I"; String s1=s.concat("am").concat("good").concat("boy"); System.out.println(s1); In this case the total number of objects created in the pool is 7, like this: I am good boy Iam Iamgood Iamgoodboy Case 2: Now I am going to concatenate the same strings via the + operator String s="I"+"am"+"good"+"boy"; System.out.println(s); In the above case the total number of objects created is only 5. Actually, when we concatenate the strings via the + operator, it maintains a StringBuffer class to perform the same task as follows: StringBuffer sb = new StringBuffer("I"); sb.append("am"); sb.append("good"); sb.append("boy"); System.out.println(sb); In this way it will create only five objects. So guys these are the basic differences between + and the concat method. Enjoy :)
My benchmark: @Warmup(iterations = 5, time = 200, timeUnit = TimeUnit.MILLISECONDS) @Measurement(iterations = 5, time = 200, timeUnit = TimeUnit.MILLISECONDS) public class StringConcatenation { @org.openjdk.jmh.annotations.State(Scope.Thread) public static class State2 { public String a = "abc"; public String b = "xyz"; } @org.openjdk.jmh.annotations.State(Scope.Thread) public static class State3 { public String a = "abc"; public String b = "xyz"; public String c = "123"; } @org.openjdk.jmh.annotations.State(Scope.Thread) public static class State4 { public String a = "abc"; public String b = "xyz"; public String c = "123"; public String d = "!@#"; } @Benchmark public void plus_2(State2 state, Blackhole blackhole) { blackhole.consume(state.a+state.b); } @Benchmark public void plus_3(State3 state, Blackhole blackhole) { blackhole.consume(state.a+state.b+state.c); } @Benchmark public void plus_4(State4 state, Blackhole blackhole) { blackhole.consume(state.a+state.b+state.c+state.d); } @Benchmark public void stringbuilder_2(State2 state, Blackhole blackhole) { blackhole.consume(new StringBuilder().append(state.a).append(state.b).toString()); } @Benchmark public void stringbuilder_3(State3 state, Blackhole blackhole) { blackhole.consume(new StringBuilder().append(state.a).append(state.b).append(state.c).toString()); } @Benchmark public void stringbuilder_4(State4 state, Blackhole blackhole) { blackhole.consume(new StringBuilder().append(state.a).append(state.b).append(state.c).append(state.d).toString()); } @Benchmark public void concat_2(State2 state, Blackhole blackhole) { blackhole.consume(state.a.concat(state.b)); } @Benchmark public void concat_3(State3 state, Blackhole blackhole) { blackhole.consume(state.a.concat(state.b.concat(state.c))); } @Benchmark public void concat_4(State4 state, Blackhole blackhole) { blackhole.consume(state.a.concat(state.b.concat(state.c.concat(state.d)))); } } Results: Benchmark Mode Cnt Score Error Units StringConcatenation.concat_2 thrpt 50 24908871.258 ± 1011269.986 ops/s StringConcatenation.concat_3 thrpt 50 14228193.918 ± 466892.616 ops/s StringConcatenation.concat_4 thrpt 50 9845069.776 ± 350532.591 ops/s StringConcatenation.plus_2 thrpt 50 38999662.292 ± 8107397.316 ops/s StringConcatenation.plus_3 thrpt 50 34985722.222 ± 5442660.250 ops/s StringConcatenation.plus_4 thrpt 50 31910376.337 ± 2861001.162 ops/s StringConcatenation.stringbuilder_2 thrpt 50 40472888.230 ± 9011210.632 ops/s StringConcatenation.stringbuilder_3 thrpt 50 33902151.616 ± 5449026.680 ops/s StringConcatenation.stringbuilder_4 thrpt 50 29220479.267 ± 3435315.681 ops/s A: The + operator can work between a string and a string, char, integer, double or float data type value. It just converts the value to its string representation before concatenation. The concat operator can only be done on and with strings. It checks for data type compatibility and throws an error, if they don't match. Except this, the code you provided does the same stuff. A: I don't think so. a.concat(b) is implemented in String and I think the implementation didn't change much since early java machines. The + operation implementation depends on Java version and compiler. Currently + is implemented using StringBuffer to make the operation as fast as possible. Maybe in the future, this will change. In earlier versions of java + operation on Strings was much slower as it produced intermediate results. I guess that += is implemented using + and similarly optimized. 
A: Tom is correct in describing exactly what the + operator does. It creates a temporary StringBuilder, appends the parts, and finishes with toString(). However, all of the answers so far are ignoring the effects of HotSpot runtime optimizations. Specifically, these temporary operations are recognized as a common pattern and are replaced with more efficient machine code at run-time. @marcio: You've created a micro-benchmark; with modern JVMs this is not a valid way to profile code. The reason run-time optimization matters is that many of these differences in code -- even including object-creation -- are completely different once HotSpot gets going. The only way to know for sure is profiling your code in situ. Finally, all of these methods are in fact incredibly fast. This might be a case of premature optimization. If you have code that concatenates strings a lot, the way to get maximum speed probably has nothing to do with which operators you choose and instead the algorithm you're using! A: How about some simple testing? Used the code below: long start = System.currentTimeMillis(); String a = "a"; String b = "b"; for (int i = 0; i < 10000000; i++) { //ten million times String c = a.concat(b); } long end = System.currentTimeMillis(); System.out.println(end - start); * *The "a + b" version executed in 2500ms. *The a.concat(b) version executed in 1200ms. Tested several times. The concat() version execution took half of the time on average. This result surprised me because the concat() method always creates a new string (it returns a new String(result)). It's well known that: String a = new String("a") // more than 20 times slower than String a = "a" Why wasn't the compiler capable of optimizing the string creation in the "a + b" code, knowing that it always resulted in the same string? It could avoid a new string creation. If you don't believe the statement above, test for yourself. A: Niyaz is correct, but it's also worth noting that the special + operator can be converted into something more efficient by the Java compiler. Java has a StringBuilder class which represents a non-thread-safe, mutable String. When performing a bunch of String concatenations, the Java compiler silently converts String a = b + c + d; into String a = new StringBuilder(b).append(c).append(d).toString(); which for large strings is significantly more efficient. As far as I know, this does not happen when you use the concat method. However, the concat method is more efficient when concatenating an empty String onto an existing String. In this case, the JVM does not need to create a new String object and can simply return the existing one. See the concat documentation to confirm this. So if you're super-concerned about efficiency then you should use the concat method when concatenating possibly-empty Strings, and use + otherwise. However, the performance difference should be negligible and you probably shouldn't ever worry about this. A: When using +, the speed decreases as the string's length increases, but when using concat, the speed is more stable, and the best option is using the StringBuilder class, which has stable speed. I guess you can understand why. But the best way of creating long strings is using StringBuilder() and append(); otherwise the speed will be unacceptable. A: Note that s.concat("hello"); would result in a NullPointerException when s is null.
In Java, the behavior of the + operator is usually determined by the left operand: System.out.println(3 + 'a'); //100 However, Strings are an exception. If either operand is a String, the result is expected to be a String. This is the reason null is converted into "null", even though you might expect a RuntimeException.
{ "language": "en", "url": "https://stackoverflow.com/questions/47605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "577" }
Q: How to do C++ style destructors in C#? I've got a C# class with a Dispose function via IDisposable. It's intended to be used inside a using block so the expensive resource it handles can be released right away. The problem is that a bug occurred when an exception was thrown before Dispose was called, and the programmer neglected to use using or finally. In C++, I never had to worry about this. The call to a class's destructor would be automatically inserted at the end of the object's scope. The only way to avoid that happening would be to use the new operator and hold the object behind a pointer, but that required extra work for the programmer isn't something they would do by accident, like forgetting to use using. Is there any way to for a using block to be automatically used in C#? Many thanks. UPDATE: I'd like to explain why I'm not accepting the finalizer answers. Those answers are technically correct in themselves, but they are not C++ style destructors. Here's the bug I found, reduced to the essentials... try { PleaseDisposeMe a = new PleaseDisposeMe(); throw new Exception(); a.Dispose(); } catch (Exception ex) { Log(ex); } // This next call will throw a time-out exception unless the GC // runs a.Dispose in time. PleaseDisposeMe b = new PleaseDisposeMe(); Using FXCop is an excellent suggestion, but if that's my only answer, my question would have to become a plea to the C# people, or use C++. Twenty nested using statements anyone? A: Where I work we use the following guidelines: * *Each IDisposable class must have a finalizer *Whenever using an IDisposable object, it must be used inside a "using" block. The only exception is if the object is a member of another class, in which case the containing class must be IDisposable and must call the member's 'Dispose' method in its own implementation of 'Dispose'. This means 'Dispose' should never be called by the developer except for inside another 'Dispose' method, eliminating the bug described in the question. *The code in each Finalizer must begin with a warning/error log notifying us that the finalizer has been called. This way you have an extremely good chance of spotting such bugs as described above before releasing the code, plus it might be a hint for bugs occuring in your system. To make our lives easier, we also have a SafeDispose method in our infrastructure, which calls the the Dispose method of its argument within a try-catch block (with error logging), just in case (although Dispose methods are not supposed to throw exceptions). See also: Chris Lyon's suggestions regarding IDisposable Edit: @Quarrelsome: One thing you ought to do is call GC.SuppressFinalize inside 'Dispose', so that if the object was disposed, it wouldn't be "re-disposed". It is also usually advisable to hold a flag indicating whether the object has already been disposed or not. The follwoing pattern is usually pretty good: class MyDisposable: IDisposable { public void Dispose() { lock(this) { if (disposed) { return; } disposed = true; } GC.SuppressFinalize(this); // Do actual disposing here ... } private bool disposed = false; } Of course, locking is not always necessary, but if you're not sure if your class would be used in a multi-threaded environment or not, it is advisable to keep it. A: Unfortunately there isn't any way to do this directly in the code. If this is an issue in house, there are various code analysis solutions that could catch these sort of problems. Have you looked into FxCop? 
I think that this will catch these situations and in all cases where IDisposable objects might be left hanging. If it is a component that people are using outside of your organization and you can't require FxCop, then documentation is really your only recourse :). Edit: In the case of finalizers, this doesn't really guarantee when the finalization will happen. So this may be a solution for you but it depends on the situation. A: ~ClassName() { } EDIT (bold): It will get called when the object is moved out of scope and is tidied by the garbage collector; however, this is not deterministic and is not guaranteed to happen at any particular time. This is called a Finalizer. All objects with a finaliser get put on a special finalise queue by the garbage collector where the finalise method is invoked on them (so it's technically a performance hit to declare empty finalisers). The "accepted" dispose pattern as per the Framework Guidelines is as follows with unmanaged resources: public class DisposableFinalisableClass : IDisposable { ~DisposableFinalisableClass() { Dispose(false); } public void Dispose() { Dispose(true); } protected virtual void Dispose(bool disposing) { if (disposing) { // tidy managed resources } // tidy unmanaged resources } } So the above means that if someone calls Dispose the unmanaged resources are tidied. However in the case of someone forgetting to call Dispose or an exception preventing Dispose from being called the unmanaged resources will still be tidied away, only slightly later on when the GC gets its grubby mitts on it (which includes the application closing down or unexpectedly ending). A: @Quarrelsome "It will get called when the object is moved out of scope and is tidied by the garbage collector." This statement is misleading and, as I read it, incorrect: There is absolutely no guarantee when the finalizer will be called. You are absolutely correct that billpg should implement a finalizer; however it will not be called automatically when the object goes out of scope like he wants. Evidence: the first bullet point under "Finalize operations have the following limitations". In fact Microsoft gave a grant to Chris Sells to create an implementation of .NET that used reference counting instead of garbage collection. As it turned out there was a considerable performance hit. A: The best practice is to use a finaliser in your class and always use using blocks. There isn't really a direct equivalent though; finalisers look like C++ destructors, but behave differently. You're supposed to nest using blocks, that's why the C# code layout defaults to putting them on the same line... using (SqlConnection con = new SqlConnection("DB con str")) using (SqlCommand com = new SqlCommand("sql query", con)) { //now code is indented one level //technically we're nested twice } When you're not using using you can just do what it does under the hood anyway: PleaseDisposeMe a = null; try { a = new PleaseDisposeMe(); throw new Exception(); } catch (Exception ex) { Log(ex); } finally { //this always executes, even with the exception if (a != null) a.Dispose(); } With managed code C# is very very good at looking after its own memory, even when stuff is poorly disposed. If you're dealing with unmanaged resources a lot it's not so strong. A: This is no different from a programmer forgetting to use delete in C++, except that at least here the garbage collector will still eventually catch up with it. And you never need to use IDisposable if the only resource you're worried about is memory.
The framework will handle that on its own. IDisposable is only for unmanaged resources like database connections, filestreams, sockets, and the like. A: A better design is to make this class release the expensive resource on its own, before it's disposed. For example, if it's a database connection, only connect when needed and release it immediately, long before the actual class gets disposed.
{ "language": "en", "url": "https://stackoverflow.com/questions/47612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What are the advantages of installing programs in AppData like Google Chrome? I just noticed that Chromium was installed in AppData in both Vista and XP. If Google does that and if other applications does this, than is that becuase there is some form of protection? Should we write installers that does the same thing as Google? A: One advantage nobody mentioned are silent auto-updates. Chrome has an updater process that runs all the time and immediately updates your chrome installation. I think their use-case is non-standard. They need a way to fix vulnerability issues (since it's a browser) as soon as possible. Waiting for admins approving every single update company-wide, is simply not good enough. A: As far as I can tell, the only reason why Chrome installs into the Application Data folder is so that non-admin users can install it. The Chrome installer currently does not allow the user to pick where the application is to be installed. Don't do that – instead, give the user a choice between a per-user (somewhere like App Data) and computer-wide (Program Files) installation. A: Windows 7 and Windows Installer 5.0 provide real per-user installation capabilities now. http://msdn.microsoft.com/en-us/library/dd408068%28VS.85%29.aspx You can sort of fudge it in Vista and XP by using ~/AppData/Local or the equivalent like Chrome does. Microsoft themselves use this for the ClickOnce installers. So at least on Windows 7 and beyond the solution is simple. A: Windows still lacks a convention for per-user installation. * *When an installer asks whether to install for the current user or all users, it really only refers to shortcut placement (Start Menu; Desktop). The actual application files still go in the system-wide %PROGRAMFILES%. *Microsoft's own ClickOnce works around this by creating a completely non-standard %USERPROFILE%\Local Settings\Apps (%USERPROFILE%\AppData\Roaming on Vista / Server 2008) directory, with both program files and configuration data in there. (I'm at a loss why Microsoft couldn't add a per-user Program Files directory in Vista. For example, in OS X, you can create a ~/Applications, and the Finder will give it an appropriate icon. Apps like CrossOver and Adobe AIR automatically use that, defaulting to per-user apps. Thus, no permissions issues.) What you probably should do: if the user is not an admin, install in the user directory; if they do, give them both options. A: Frankly, I have yet to see the first installer that really allows both per-user and per-machine installations. Many installers offer this option in their GUI, but the setting only affects where the shortcuts etc. go -- the binaries always fo to %ProgramFiles%. In fact, it is pretty hard to create Windows Installer packages that allow both kinds of installs, to say the least. With the advent of UAC, I'd say its is impossible: Per user installations must not require elevation, per machine installations have to. But whether an MSI package requires elevation is controlled via a bit in the summary information stream -- there is no way to have user input have impact on that. Whether per-user or per-machine is the better choice greatly deoends on the application. For small packages, however, I tend to prefer per-user installations. Besides being slightly more user-friendly by not requiring an UAC prompt or runas, they also signalize the user that the setup will not do much harm to the computer (assuming he is a non-admin). 
A: The Chrome installer really ought to allow global installation (with elevation) in addition to per-user. I don't want to have to maintain an installation for every user; I want to be able to centrally manage upgrades and so on. Chrome doesn't allow that. That said, the option to install per-user is quite nice, as it means no permissions issues. A: Just so you people know, Google has created an MSI installer for global system installation and management. It's located here: https://www.google.com/intl/en/chrome/business/browser/ A: I do not see anything in %PROGRAMFILES% on Win7. Looks like Chrome must be installed for each user on the machine. Perhaps the true reason for doing this is to inflate the number of Chrome installations severalfold, thus making it the first browser in the world!
{ "language": "en", "url": "https://stackoverflow.com/questions/47639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How do I do full-text searching in Ruby on Rails? I would like to do full-text searching of data in my Ruby on Rails application. What options exist? A: I can recommend Sphinx. Ryan Bates has a great screencast on using the Thinking Sphinx plugin to create a full-text search solution. A: You can use Ferret (which is Lucene written in Ruby). It integrates seamless with Rails using the acts_as_ferret mixin. Take a look at "How to Integrate Ferret With Rails". A alternative is Sphinx. A: There are several options available and each have different strengths and weaknesses. If you would like to add full-text searching, it would be prudent to investigate each a little bit and try them out to see how well it works for you in your environment. MySQL has built-in support for full-text searching. It has online support meaning that when new records are added to the database, they are automatically indexed and will be available in the search results. The documentation has more details. acts_as_tsearch offers a wrapper for similar built-in functionality for recent versions of PostgreSQL For other databases you will have to use other software. Lucene is a popular search provider written in Java. You can use Lucene through its search server Solr with Rails using acts_as_solr. If you don't want to use Java, there is a port of Lucene to Ruby called Ferret. Support for Rails is added using the acts_as_ferret plugin. Xapian is another good option and is supported in Rails using the acts_as_xapian plugin. Finally, my preferred choice is Sphinx using the Ultrasphinx plugin. It is extremely fast and has many options on how to index and search your databases, but is no longer being actively maintained. Another plugin for Sphinx is Thinking Sphinx which has a lot of positive feedback. It is a little easier to get started using Thinking Sphinx than Ultrasphinx. I would suggest investigating both plugins to determine which fits better with your project. A: Two main options, depending on what you're after. 1) Full Text Indexing and MATCH() AGAINST(). If you're just looking to do a fast search against a few text columns in your table, you can simply use a full text index of those columns and use MATCH() AGAINST() in your queries. * *Create the full text index in a migration file: add_index :table, :column, type: :fulltext *Query using that index: where( "MATCH( column ) AGAINST( ? )", term ) 2) ElasticSearch and Searchkick If you're looking for a full blown search indexing solution that allows you to search for any column in any of your records while still being lightning quick, take a look at ElasticSearch and Searchkick. ElasticSearch is the indexing and search engine. Searchkick is the integration library with Rails that makes it very easy to index your records and search them. Searchkick's README does a fantastic job at explaining how to get up and running and to fine tune your setup, but here is a little snippet: * *Install and start ElasticSearch. brew install elasticsearch brew services start elasticsearch *Add searchkick gem to your bundle: bundle add searchkick --strict The --strict option just tells Bundler to use an exact version in your Gemfile, which I highly recommend. *Add searchkick to a model you want to index: class MyModel < ApplicationRecord searchkick end *Index your records. MyModel.reindex *Search your index. matching_records = MyModel.search( "term" ) A: I've been compiling a list of the various Ruby on Rails search options in this other question. 
I'm not sure how, or whether, to combine our questions. A: It depends on what database you are using. I would recommend using Solr as it offers up a lot of nice options. The downside is you have to run a separate process for it. I have used Ferret as well, but found it to be less stable in terms of multi-threaded access to the index. I haven't tried Sphinx because it only works with MySQL and Postgres. A: Just a note for future reference: Ultrasphinx is no longer being maintained. Thinking Sphinx is its replacement. Although it currently lacks several features that Ultrasphinx had, like excerpting, it makes up for it in other features. A: I would recommend acts_as_ferret as I am using it for the Scrumpad project at work. The indexing can be done as a separate process, which ensures that we can still use our application while re-indexing. This can reduce the downtime of the website. Also the searching is much faster. You can search through multiple models at a time and have your results sorted by the fields you prefer.
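As a taste of the Sphinx route recommended a couple of times above, the classic Thinking Sphinx setup is only a few lines; the model and column names are placeholders, and the DSL differs a bit between Thinking Sphinx versions:

    # app/models/article.rb
    class Article < ActiveRecord::Base
      define_index do
        indexes title
        indexes body
        has author_id, created_at   # attributes you can filter and sort on
      end
    end

    # after `rake thinking_sphinx:index` and `rake thinking_sphinx:start`:
    Article.search "full text query", :with => { :author_id => 42 }, :page => 1, :per_page => 20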
{ "language": "en", "url": "https://stackoverflow.com/questions/47656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Standards Document I am writing a coding standards document for a team of about 15 developers with a project load of between 10 and 15 projects a year. Amongst other sections (which I may post here as I get to them) I am writing a section on code formatting. So to start with, I think it is wise that, for whatever reason, we establish some basic, consistent code formatting/naming standards. I've looked at roughly 10 projects written over the last 3 years from this team and I'm, obviously, finding a pretty wide range of styles. Contractors come in and out and at times, and sometimes even double the team size. I am looking for a few suggestions for code formatting and naming standards that have really paid off ... but that can also really be justified. I think consistency and shared-patterns go a long way to making the code more maintainable ... but, are there other things I ought to consider when defining said standards? * *How do you lineup parenthesis? Do you follow the same parenthesis guidelines when dealing with classes, methods, try catch blocks, switch statements, if else blocks, etc. *Do you line up fields on a column? Do you notate/prefix private variables with an underscore? Do you follow any naming conventions to make it easier to find particulars in a file? How do you order the members of your class? What about suggestions for namespaces, packaging or source code folder/organization standards? I tend to start with something like: <com|org|...>.<company>.<app>.<layer>.<function>.ClassName I'm curious to see if there are other, more accepted, practices than what I am accustomed to -- before I venture off dictating these standards. Links to standards already published online would be great too -- even though I've done a bit of that already. A: First find a automated code-formatter that works with your language. Reason: Whatever the document says, people will inevitably break the rules. It's much easier to run code through a formatter than to nit-pick in a code review. If you're using a language with an existing standard (e.g. Java, C#), it's easiest to use it, or at least start with it as a first draft. Sun put a lot of thought into their formatting rules; you might as well take advantage of it. In any case, remember that much research has shown that varying things like brace position and whitespace use has no measurable effect on productivity or understandability or prevalence of bugs. Just having any standard is the key. A: Coming from the automotive industry, here's a few style standards used for concrete reasons: Always used braces in control structures, and place them on separate lines. This eliminates problems with people adding code and including it or not including it mistakenly inside a control structure. if(...) { } All switches/selects have a default case. The default case logs an error if it's not a valid path. For the same reason as above, any if...elseif... control structures MUST end with a default else that also logs an error if it's not a valid path. A single if statement does not require this. In the occasional case where a loop or control structure is intentionally empty, a semicolon is always placed within to indicate that this is intentional. while(stillwaiting()) { ; } Naming standards have very different styles for typedefs, defined constants, module global variables, etc. Variable names include type. You can look at the name and have a good idea of what module it pertains to, its scope, and type. This makes it easy to detect errors related to types, etc. 
There are others, but these are the ones off the top of my head. -Adam A: I'm going to second Jason's suggestion. I just completed a standards document for a team of 10-12 that works mostly in Perl. The document says to use "perltidy-like indentation for complex data structures." We also provided everyone with example perltidy settings that would clean up their code to meet this standard. It was very clear and very much industry-standard for the language, so we had great buy-in on it from the team. When setting out to write this document, I asked around for some examples of great code in our repository and googled a bit to find other standards documents written by smarter architects than I, to use as a template. It was tough being concise and pragmatic without crossing into micro-manager territory, but it was very much worth it; having any standard is indeed key. Hope it works out! A: It obviously varies depending on languages and technologies. By the look of your example namespace I am going to guess Java, in which case http://java.sun.com/docs/codeconv/ is a really good place to start. You might also want to look at something like Maven's standard directory structure, which will make all your projects look similar.
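To make the defensive-default rules from the automotive answer above concrete, here is a small C# sketch (purely illustrative -- the method, thresholds, and logging call are assumptions, not part of any cited standard): every branch gets braces on their own lines, and every if/else-if chain ends in an else that flags the unexpected path rather than failing silently.

using System;

public static class SensorRules
{
    public static string ClassifyTemperature(int celsius)
    {
        string category;

        if (celsius < 0)
        {
            category = "freezing";
        }
        else if (celsius < 40)
        {
            category = "normal";
        }
        else if (celsius <= 100)
        {
            category = "hot";
        }
        else
        {
            // Not a valid path for this sensor -- log it instead of continuing silently.
            category = "unknown";
            Console.Error.WriteLine("Unexpected temperature reading: " + celsius);
        }

        return category;
    }
}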
{ "language": "en", "url": "https://stackoverflow.com/questions/47658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Tomcat vs Weblogic JNDI Lookup The Weblogic servers we are using have been configured to allow JNDI datasource names like "appds". For development (localhost), we might be running Tomcat and when declared in the <context> section of server.xml, Tomcat will hang JNDI datasources on "java:comp/env/jdbc/*" in the JNDI tree. Problem: in Weblogic, the JNDI lookup is "appds" whilst in Tomcat, it seems that that I must provide the formal "java:comp/env/jdbc/appds". I'm afraid the Tomcat version is an implicit standard but unfortunately, I can't change Weblogic's config ... so that means we end up with two different spring config files (we're using spring 2.5) to facilitate the different environments. Is there an elegant way to address this. Can I look JNDI names up directly in Tomcat? Can Spring take a name and look in both places? Google searches or suggestions would be great. A: I've managed the trick with Tomcat and WebLogic using Spring. Here is a description of how it worked for me. A: The following config works in Tomcat and Weblogic for me. In Spring: <bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean"> <!-- This will prepend 'java:comp/env/' for Tomcat, but still fall back to the short name for Weblogic --> <property name="resourceRef" value="true" /> <property name="jndiName" value="jdbc/AgriShare" /> </bean> In Weblogic Admin Console create a JDBC resource named jdbc/AgriShare. Under 'Targets', MAKE SURE YOU TARGET THE DATASOURCE TO THE SERVER YOU ARE DEPLOYING YOUR APP TO!. This particular point cost me some time just now... A: How to use a single JNDI name in your web app I've struggled with this for a few months myself. The best solution is to make your application portable so you have the same JNDI name in both Tomcat and Weblogic. In order to do that, you change your web.xml and spring-beans.xml to point to a single jndi name, and provide a mapping to each vendor specific jndi name. I've placed each file below. You need: * *A <resource-ref /> entry in web.xml for your app to use a single name *A file WEB-INF/weblogic.xml to map your jndi name to the resource managed by WebLogic *A file META-INF/context.xml to map your jndi name to the resource managed by Tomcat * *This can be either in the Tomcat installation or in your app. As a general rule, prefer to have your jndi names in your app like jdbc/MyDataSource and jms/ConnFactory and avoid prefixing them with java:comp/env/. Also, data sources and connection factories are best managed by the container and used with JNDI. It's a common mistake to instantiate database connection pools in your application. 
spring <?xml version="1.0" encoding="UTF-8" ?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:jee="http://www.springframework.org/schema/jee" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-3.0.xsd"> <jee:jndi-lookup jndi-name="jdbc/appds" id="dataSource" /> </beans> web.xml <resource-ref> <description>My data source</description> <res-ref-name>jdbc/appds</res-ref-name> <res-type>javax.sql.DataSource</res-type> <res-auth>Container</res-auth> </resource-ref> weblogic.xml <?xml version="1.0" encoding="UTF-8" ?> <weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://xmlns.oracle.com/weblogic/weblogic-web-app http://http://www.oracle.com/technology/weblogic/weblogic-web-app/1.1/weblogic-web-app.xsd"> <resource-description> <jndi-name>appds</jndi-name> <res-ref-name>jdbc/appds</res-ref-name> </resource-description> </weblogic-web-app> META-INF/context.xml (for Tomcat) <Context> <ResourceLink global="jdbc/appds" name="jdbc/appds" type="javax.sql.DataSource"/> </Context> A: JndiLocatorSupport has a property resourceRef. When setting this true, "java:comp/env/" prefix will be prepended automatically. So I believe it would be correct to differentiate this parameter when moving from Tomcat to Weblogic. A: How about an evironment variable? Set developers machines with the tomcat name and production with the Weblogic name. You can even set your code to use a default one (WebLogic) in case the variable don't exist. A: How are you referencing the resource in spring? This is what we have for tomcat: context: <Resource name="jms/ConnectionFactory" auth="Container" type="org.apache.activemq.ActiveMQConnectionFactory" description=" JMS Connection Factory" factory="org.apache.activemq.jndi.JNDIReferenceFactory" brokerURL="tcp://localhost:61615" brokerName="StandaloneAc tiveMQBroker"/> spring: <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util" xmlns:aop="http://www.springframework.org/schema/aop" xmlns:jee="http://www.springframework.org/schema/jee" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util-2.0.xsd http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-2.0.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-2.0.xsd"> <jee:jndi-lookup jndi-name="jms/ConnectionFactory" id="connectionFactory" resource-ref="true" expected-type="javax.jms.ConnectionFactory" lookup-on-startup="false"/> The jee namespace comes from: http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-2.0.xsd A: Setting up DataSource in the application itself is not that crazy :) I would say that is even mandatory if application is meant to be deployed on a grid. River, GigaSpaces, or similar. Note: I do not say connection settings have to be hardcoded inside of WAR, they need to be supplied at deployment time/runtime. 
This simplifies management of cloud instances since there is only one place to configure. Configuring resources at the container makes sense only if multiple applications are deployed there and they can use a shared resource. Again, in cloud-type deployments there is only one application per servlet container instance. A: My application also had a similar problem and this is how I solved it: 1) WEB-INF/classes/application.properties contains the entry: ds.jndi=java:comp/env/jdbc/tcds 2) On the WLS machine, I have an entry in the /etc/sysenv file: ds.jndi=wlsds 3) I configured Spring to look up the JNDI name via the property ${ds.jndi}, using a PropertyPlaceholderConfigurer bean with classpath:application.properties and file:/etc/sysenv as locations. I also set ignoreResourceNotFound to true so that developers need not have /etc/sysenv on their machines. 4) I run an integration test using Cargo+Jetty and I could not properly set up a JNDI environment there. So I have a fallback BasicDataSource configured too, using the defaultObject property of JndiObjectFactoryBean.
{ "language": "en", "url": "https://stackoverflow.com/questions/47676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Reconnecting JMS listener to JBossMQ We have a Java listener that reads text messages off of a queue in JBossMQ. If we have to reboot JBoss, the listener will not reconnect and start reading messages again. We just get messages in the listener's log file every 2 minutes saying it can't connect. Is there something we're not setting in our code or in JBossMQ? I'm new to JMS so any help will be greatly appreciated. Thanks. A: I'd highly recommend you use the Spring abstractions for JMS such as the MessageListenerContainer to deal with reconnection, transactions and pooling for you. You just need to supply a MessageListener and configure the MessageListenerContainer with the ConnectionFactory and the container does the rest. A: If you're purely a listener and do no other JMS calls other than connection setup, then the "onException() handler" answer is correct. If you do any JMS calls in your code, just using onException() callback isn't sufficient. Problems are relayed from the JMS provider to the app either via an exception on a JMS method call or through the onException() callback. Not both. So if you call any JMS methods from your code, you'll also want to invoke that reconnection logic if you get any exceptions on those calls. A: You should implement in your client code javax.jms.ExceptionListener. You will need a method called onException. When the client's connection is lost, you should get a JMSException, and this method will be called automatically. The only thing you have to look out for is if you are intentionally disconnecting from JBossMQ-- that will also throw an exception. Some code might look like this: public void onException (JMSException jsme) { if (!closeRequested) { this.disconnect(); this.establishConnection(connectionProps, queueName, uname, pword, clientID, messageSelector); } else { //Client requested close so do not try to reconnect } } In your "establishConnection" code, you would then implement a while(!initialized) construct that contains a try/catch inside of it. Until you are sure you have connected and subscribed properly, stay inside the while loop catching all JMS/Naming/etc. exceptions. We've used this method for years with JBossMQ and it works great. We have never had a problem with our JMS clients not reconnecting after bouncing JBossMQ or losing our network connection. A: Piece of advice from personal experience. Upgrade to JBoss Messaging. I've seen it in production for 4 months without problems. It has fully transparent failover - amongst many other features. Also, if you do go with Spring, be very careful with the JmsTemplate.
{ "language": "en", "url": "https://stackoverflow.com/questions/47683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How can I reformat XAML nicely in VS 2008? Visual Studio 2008's XAML editor (SP1) cannot reformat the XML into a consistent style. Which tools can I use to get a nicely formatted XAML file? Studio integration preferred. A: While browsing through the options, I found that I had to set "Position each attribute on a separate line" and "Position first attribute on same line as start tag" under "Tools > Options ... > Text-Editor > XAML > Formatting > Spacing" and reset the Keyboard mappings under "Tools > Options ... > Environment > Keyboard" to "Visual C# 2005". Now the XAML editor reformats the XAML to my taste when pressing Ctrl+E, D. A: Have you tried CTRL + K , D?! A: Here's a link that is specific to VS2008 XAML formatting but the good news is you can do it directly inside VS. Link A: Karl just released v2 of his XAML Power toys and it can reformat your xaml from VS2008! Check out the video about XAML Power Toys Accessories http://karlshifflett.wordpress.com/2008/09/16/xaml-power-toys-v2-release-finally-code-name-hawaii/ A: Or try xaml styler hosted at http://xamlstyler.codeplex.com/ for visual studio 2010. If you ever used Kaxaml's Xaml Scrubber and you like it, then you could think of this extension is the "Xaml Scrubber" for Visual Studio. Check http://xamlstyler.codeplex.com/ for feature highlights. A: The only tool I found is Kaxaml, which does nice formatting ("XAML Scrubber" entry in the left menu), but being a stand-alone editor doesn't quite make the cut. A: http://www.dimebrain.com/2008/05/automating-read.html is a nice plugin for formatting your xaml so the attributes line up underneath each other. A: I just did a post on this. This is a very versatile way to format XAML. http://blogs.msdn.com/b/brunoterkaly/archive/2013/01/09/how-to-format-xaml-easily-and-effectively-windows-8-wpf-silverlight.aspx
{ "language": "en", "url": "https://stackoverflow.com/questions/47685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I become "test infected" with TDD? I keep reading about people who are "test infected", meaning that they don't just "get" TDD but also can't live without it. They've "had the makeover" as it were. The question is, how do I get like that? A: Learn about TDD to start, and then begin integrating it into your workflow. If you use the methodologies enough, you'll find that they become second nature and you'll start framing all of your development tasks within that framework. Also, start using the J-Unit (or X-Unit) framework for your language of choice. A: Part of the point of being "test infected" is that you've used TDD enough and seen the successes enough that you don't want to code without it. Once you've gone through a cycle of writing tests first, then coding and refactoring and seeing your bug counts go down and your code get better as a result, not only does it become second nature like Zxaos said, you have a hard time going back to Code First. This is being test infected. A: You've already read about TDD; reading more isn't going to excite you. Instead, you need a genuine personal success story. Here's how. Grab some code from a core module, code that doesn't depend on external systems or too many other subroutines. Doesn't matter how complex or simple the routine is. Then start writing unit tests against it. (I'm assuming you have an xUnit or similar for your language.) Be really obnoxious with the tests -- test every boundary case, test max-int and min-int, test null's, test strings and lists with millions of elements, test strings with Korean and control characters and right-to-left Arabic and quotes and backslashes and periods and other things that tend to break things if not escaped. What you'll find is.... bugs! At first you might think these bugs aren't important -- you haven't run into these problems yet, your code probably would never do this, etc. etc.. But my experience is if you keep pushing forward you'll be amazed at the number of little problems. Eventually it becomes hard to believe that none of these bugs will ever cause a problem. Plus you get a great feeling of accomplishment with something is done really, really well. We know code is never perfect and rarely free of bugs, so it's nice when we've exhausted so many tests that we really do feel confident. Confidence is a nice feeling. Finally, I think the last event that will trigger the love will happen weeks or months later. Maybe you're fixing a bug or adding a feature or refactoring some code, and something you do will break a unit test. "Huh?" you'll say, not understanding why the new change was even relevant to the broken test. Then you'll find it, and find enlightenment. Because you really didn't know that you were breaking code, and the tests saved you. Hallelujah! A: One word, practice! There is some overhead with doing TDD and the way to overcome it is to practice and make sure you are using tools to help the process. You need to learn the tools like the back of your hand. Once you learn the tools to go along with the process you are learning, then it will click and you will get fluent with writing tests first to flush the code out. Then you will be "test infected". I answered a question similar to this a while back. You may want to check it out also. I mention some tools and explain learning TDD. Out of these tools, Resharper and picking a good mocking framework are critical for doing TDD. I can't stress learning these tools to go along with the testing framework you are using enough.
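As a concrete illustration of the "be obnoxious with boundary cases" advice above, a first test fixture might look something like this C#/NUnit sketch -- the Summarize method and its behaviour are invented for the example, and the framework choice is just one option:

using System;
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class SummarizerBoundaryTests
{
    // Assumed method under test: joins items with commas, rejects null input.
    private static string Summarize(IList<string> items)
    {
        if (items == null) throw new ArgumentNullException("items");
        return string.Join(",", items);
    }

    [Test]
    public void Null_input_throws()
    {
        Assert.Throws<ArgumentNullException>(() => Summarize(null));
    }

    [Test]
    public void Empty_list_gives_empty_string()
    {
        Assert.AreEqual(string.Empty, Summarize(new List<string>()));
    }

    [Test]
    public void Handles_non_ascii_and_escape_characters()
    {
        // Korean text plus a lone backslash -- the kind of input that breaks naive escaping.
        Assert.AreEqual("한글,\\", Summarize(new List<string> { "한글", "\\" }));
    }

    [Test]
    public void Handles_a_very_large_list()
    {
        var items = new List<string>(new string[100000]);   // 100,000 null entries
        Assert.DoesNotThrow(() => Summarize(items));
    }
}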
{ "language": "en", "url": "https://stackoverflow.com/questions/47692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Is there a way to attach a debugger to a multi-threaded Python process? I'm trying to debug a deadlock in a multi-threaded Python application after it has locked up. Is there a way to attach a debugger to inspect the state of the process? Edit: I'm attempting this on Linux, but it would be great if there were a cross-platform solution. It's Python after all :) A: You can attach a debugger to a multi-threaded Python process, but you need to do it at the C level. To make sense of what's going on, you need the Python interpreter to be compiled with symbols. If you don't have one, you need to download source from python.org and build it yourself: ./configure --prefix=/usr/local/pydbg make OPT=-g sudo make install sudo ln -s /usr/local/pydbg/bin/python /usr/local/bin/dbgpy Make sure your workload is running on that version of the interpreter. You can then attach to it with GDB at any time. The Python folks have included a sample ".gdbinit" in their Misc directory, which has some useful macros. However it's broken for multi-threaded debugging (!). You need to replace lines like this while $pc < Py_Main || $pc > Py_GetArgcArgv with the following: while ($pc < Py_Main || $pc > Py_GetArgcArgv) && ($pc < t_bootstrap || $pc > thread_PyThread_start_new_thread) Otherwise commands like pystack won't terminate on threads other than the main thread. With this stuff in place, you can do stuff like gdb> attach <PID> gdb> info threads gdb> thread <N> gdb> bt gdb> pystack gdb> detach and see what's going on. Kind of. You can parse what the objects are with the "pyo" macro. Chris has some examples on his blog. Good luck. (Shoutout for Dan's blog for some key information for me, notably the threading fix!) A: My experience debugging multi-threaded programs in PyDev (Eclipse on Windows XP) is, threads created using thread.start_new_thread could not be hooked, but thread created using threading.Thread could be hooked. Hope the information is helpful. A: If you mean the pydb, there is no way to do it. There was some effort in that direction: see the svn commit, but it was abandoned. Supposedly winpdb supports it. A: Use Winpdb. It is a platform independent graphical GPL Python debugger with support for remote debugging over a network, multiple threads, namespace modification, embedded debugging, encrypted communication and is up to 20 times faster than pdb. Features: * *GPL license. Winpdb is Free Software. *Compatible with CPython 2.3 through 2.6 and Python 3000 *Compatible with wxPython 2.6 through 2.8 *Platform independent, and tested on Ubuntu Gutsy and Windows XP. *User Interfaces: rpdb2 is console based, while winpdb requires wxPython 2.6 or later. (source: winpdb.org) A: Yeah, gdb is good for lower level debugging. You can change threads with the thread command. e.g (gdb) thr 2 [Switching to thread 2 (process 6159 thread 0x3f1b)] (gdb) backtrace .... You could also check out Python specific debuggers like Winpdb, or pydb. Both platform independent. A: PyCharm IDE allows attaching to a running Python process since version 4.0. Here is described how to do that. A: python3 provides gdb extensions. Using them, gdb can attach to a running program, select a thread and print its python backtrace. On Debian (since at least Buster) the extensions are part of the python3.x-dbg package (ex. python3.10-dbg installs /usr/share/gdb/auto-load/usr/bin/python3.10-gdb.py) and gdb auto-loads them. 
Example with a simple threaded python script: #!/usr/bin/env python3 import signal import threading def a(): while True: pass def b(): while True: signal.pause() threading.Thread(target=a).start() threading.Thread(target=b).start() Running gdb: user@vsid:~$ ps -C python3 -L PID LWP TTY TIME CMD 1215 1215 pts/0 00:00:00 python3 1215 1216 pts/0 00:00:19 python3 1215 1217 pts/0 00:00:00 python3 user@vsid:~$ gdb -p 1215 GNU gdb (Debian 10.1-2+b1) 10.1.90.20210103-git Copyright (C) 2021 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> [...] (gdb) info auto-load python-scripts Loaded Script Yes /usr/share/gdb/auto-load/usr/bin/python3.10-gdb.py (gdb) info threads Id Target Id Frame * 1 Thread 0x7f2f034b4740 (LWP 1215) "python3" 0x00007f2f036a60fa in __futex_abstimed_wait_common64 (futex_word=futex_word@entry=0x7f2ef4000b60, expected=expected@entry=0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=<optimized out>, cancel=cancel@entry=true) at ../sysdeps/nptl/futex-internal.c:74 2 Thread 0x7f2f02ea7640 (LWP 1216) "python3" 0x000000000051b858 in _PyEval_EvalFrameDefault (tstate=<optimized out>, f=<optimized out>, throwflag=<optimized out>) at ../Python/ceval.c:3850 3 Thread 0x7f2f026a6640 (LWP 1217) "python3" 0x00007f2f036a3932 in __libc_pause () at ../sysdeps/unix/sysv/linux/pause.c:29 (gdb) thread 2 (gdb) py-bt Traceback (most recent call first): File "/root/./threaded.py", line 7, in a while True: File "/usr/lib/python3.10/threading.py", line 946, in run self._target(*self._args, **self._kwargs) File "/usr/lib/python3.10/threading.py", line 1009, in _bootstrap_inner self.run() File "/usr/lib/python3.10/threading.py", line 966, in _bootstrap self._bootstrap_inner() (gdb) We can confirm that thread 1216 which used the most cpu time according to ps is indeed the thread running function a() that is busy-looping. A: What platform are you attempting this on? Most debuggers allow you to attach to a running process by using the process id. You can either output the process id via logging or using something like Task Manager. Once that is achieved it will be possible to inspect individual threads and their call stacks. EDIT: I don't have any experience with GNU Debugger (GDB), which is cross platform, however I found this link and it may start you on the right path. It explains how to add debug symbols (handy for reading stack traces) and how to instruct gdb to attach to a running python process. A: pdbinject allows you to inject pdb into an already running python process. The pdbinject executable only works under python2, but can inject into python3 just fine too. A: This can be used as a dead simple "remote" debugger: import sys import socket import pdb def remote_trace(): server = socket.socket() server.bind(('0.0.0.0', 12345)) server.listen() client, _= server.accept() stream = client.makefile('rw') sys.stdin = sys.stdout = sys.stderr = stream pdb.set_trace() remote_trace() # Execute in the shell: `telnet 127.0.0.1 12345` On Windows it's easier to use Netcat instead of Telnet (which will also work on linux).
{ "language": "en", "url": "https://stackoverflow.com/questions/47701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: Multiple threads and performance on a single CPU Is there any performance benefit to using multiple threads on a computer with a single CPU that does not have hyperthreading? A: You can consider using multithreading on a single CPU * *If you use network resources *If you do intensive IO operations *If you pull data from a database *If you use other resources with possible delays *If you want your app to react quickly When you should not use multithreading on a single CPU * *CPU-intensive operations which run at almost 100% CPU usage *If you are not sure how to use threads and synchronization *If your application cannot be divided into several parallel processes A: In terms of speed of computation, No. In fact things will slow down due to the overhead of managing the threads. In terms of responsiveness, yes. You can for example have one thread wait on an IO operation and have another run a GUI at the same time. A: Yes, there is a benefit to using multiple threads (or processes) on a single CPU - if one thread is busy waiting for something, others can continue doing useful work. However this can be offset by the overhead of task switching. You'll have to benchmark and/or profile your application on production-grade hardware to find out. A: It depends on your application. If it spends all its time using the CPU, then multithreading will just slow things down - though you may be able to use it to be more responsive to the user and thus give the impression of better performance. However, if your code is limited by other things, for example using the file system, the network, or any other resource, then multithreading can help, since it allows your application to behave asynchronously. So while one thread is waiting for a file to load from disk, another can be querying a remote webserver and another redrawing the GUI, while another is doing various calculations. Working with multiple threads can also simplify your business logic, since you don't have to pay so much attention to how various independent tasks need to interleave. If the operating system's scheduling logic is better than yours, then you may indeed see improved performance. A: Regardless of the number of CPUs available, if you require preemptive multitasking and/or applications with asynchronous components (i.e. pretty much anything that combines a responsive GUI with a non-trivial amount of computation or continuous I/O processing), multithreading performs much better than the alternative, which is to use multiple processes for each application. This is because threads in the same process can exchange data much more efficiently than can multiple processes, because they share the same memory context. See this Wikipedia article on computer multitasking for a fairly concise discussion of these issues. A: Absolutely! If you do any kind of I/O, there is great advantage to having a multithreaded system. While one thread waits for an I/O operation (which is relatively slow), another thread can do useful work.
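To see the I/O-bound case the answers describe, here is a small illustrative C# sketch that fakes two "downloads" with Thread.Sleep. Run sequentially they take roughly two seconds; overlapped on two threads, roughly one, even on a single core. The numbers and the FakeDownload method are stand-ins, not a real benchmark:

using System;
using System.Diagnostics;
using System.Threading;

class IoBoundDemo
{
    // Pretend I/O: the CPU is idle while we "wait" on the network or disk.
    static void FakeDownload() { Thread.Sleep(1000); }

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        FakeDownload();
        FakeDownload();
        Console.WriteLine("Sequential: {0} ms", sw.ElapsedMilliseconds);

        sw.Restart();
        var t1 = new Thread(FakeDownload);
        var t2 = new Thread(FakeDownload);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        Console.WriteLine("Two threads: {0} ms", sw.ElapsedMilliseconds);
    }
}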
{ "language": "en", "url": "https://stackoverflow.com/questions/47703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: How to start coding the "Dining Philosophers" simulation? I'm not a beginner at C# but I really need to increase my understanding, so I've picked a classic deadlock problem to code to help teach myself some of the more advanced concepts of C#. The Dining Philosophers Problem seems like a good one, but I need a little help to get started. I know I need to approach the "diners" as objects, but to simulate the random delays between eating, should I look to threading with each diner in a separate thread? Do I need some kind of "master" to monitor all the actions? Any general design concept advice is welcome, but I'd like to do the grunt programming as an exercise. Thanks! A: I think the best approach to simulate it would be a Fork class with a method like use() that holds the fork (bool available = false) and a release() that releases it. Then a Philosopher class with getFork(Fork) and releaseFork(Fork) methods that handle holding and releasing a Fork object (it seems to me a timer would be good in a useFork() method so you can really perceive the deadlock). And lastly a DiningTable (or any other name) class that creates the instances and does the logging. If you plan to use threads, this is where you should start a thread for each Philosopher competing for the forks. As a suggestion, you could implement a Plate class holding a quantity of spaghetti that the Philosopher.useFork() method lowers over time. That way you can see which Philosopher finishes first. I will leave the implementation to you, of course, since your objective is to learn C# ... in my experience, you learn better by doing something concrete like these classes ;) Besides, you can find lots of implementations on Google if you want to cheat ... I invite you to share the code afterwards. It's a great study reference. Hope this helps.
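For orientation only, here is a minimal C# skeleton along the lines the answer describes. The class and member names are just one possible choice, and it deliberately uses the naive "left fork, then right fork" order, so it can deadlock -- choosing and implementing a strategy to avoid that is the actual exercise:

using System;
using System.Threading;

public class Fork
{
    private readonly object _sync = new object();
    public void PickUp()  { Monitor.Enter(_sync); }   // blocks until the fork is free
    public void PutDown() { Monitor.Exit(_sync); }
}

public class Philosopher
{
    private readonly string _name;
    private readonly Fork _left;
    private readonly Fork _right;
    private readonly Random _random = new Random();

    public Philosopher(string name, Fork left, Fork right)
    {
        _name = name;
        _left = left;
        _right = right;
    }

    public void Dine()
    {
        while (true)
        {
            Thread.Sleep(_random.Next(100, 500));   // thinking
            _left.PickUp();                          // naive order: left, then right -> can deadlock
            _right.PickUp();
            Console.WriteLine(_name + " is eating");
            Thread.Sleep(_random.Next(100, 500));   // eating
            _right.PutDown();
            _left.PutDown();
        }
    }
}

public static class DiningTable
{
    public static void Main()
    {
        const int seats = 5;
        var forks = new Fork[seats];
        for (int i = 0; i < seats; i++) forks[i] = new Fork();

        for (int i = 0; i < seats; i++)
        {
            var philosopher = new Philosopher("Philosopher " + i, forks[i], forks[(i + 1) % seats]);
            new Thread(philosopher.Dine).Start();
        }
    }
}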
{ "language": "en", "url": "https://stackoverflow.com/questions/47707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How does Google make those awesome PDF reports in Analytics and when you print a Google Doc etc? When you print from Google Docs (using the "print" link, not File/Print) you end up printing a nicely formatted PDF file instead of relying on the print engine of the browser. The same is true for some of the reports in Google Analytics . . . the printed reports as PDFs are beautiful. How do they do that? I can't imagine they use something like Adobe Acrobat to facilitate it but maybe they do. I've seen some expensive HTML to PDF converters online from time to time but have never tried them. Any thoughts? A: iTextSharp and iText are open-source and free PDF generation libraries for .NET and Java respectively. I've used them to generate report PDFs before and was quite happy with the results. http://itextsharp.sourceforge.net/ http://www.lowagie.com/iText/ A: Great free alternative to PrinceXML: wkhtmltopdf . There are plenty of wrapper libraries for various languages - but I've only used Ruby ones. However the product itself is on par with PrinceXML IMHO. A: If you are specifically looking at how Google does it: if you look at the PDF Properties page, they use Prince 6.0 (see princexml.com). There are lots of other PDF generators out there. I've had great success with PDFlib for tricky jobs. A: I have had success with pd4ml. It has a tag library, so you can turn any existing HTML into PDF by <pd4ml:transform> <!-- Your HTML is here --> <c:import url="/page.html" /> </pd4ml:transform> A: Rendering a PDF is a hard, complex problem. However, generating them is not. Simply make up some entities, and generate. It's about the same problem domain as generating HTML for a webpage vs. displaying (rendering) it. A: Well, I doubt it's as easy as generating HTML . . . I mean, first of all, PDF is not a human-readable format and it's not plain text (like SVG). In fact, I would compare an SVG file to a PDF file in that with both you have precise control over the layout on a printed page. But SVG is different in that it's XML (and also in that it's not supported completely in the browser . . . still looking into SVG too). Come to think of it, SVG will probably be my next question. I know Google doesn't use .NET and I doubt they use Java so there must be some other libraries they use for generating the PDF files. More importantly, how do they create the PDFs without having to rewrite everything as a PDF instead of as HTML? I mean, there has to be some shared code between when they generate the HTML view as opposed to the PDF view. Come to think of it, maybe the PDF view and the HTML view are completely separate and they just have two views and hence why the MVC development style seems to be the way to go.
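Since iTextSharp is mentioned above, here is roughly what generating a simple report PDF with it looks like -- a from-memory sketch of the classic iText/iTextSharp API, so exact member names and the report content are illustrative and may differ between versions:

using System.IO;
using iTextSharp.text;
using iTextSharp.text.pdf;

class ReportExample
{
    static void Main()
    {
        var document = new Document(PageSize.A4, 36, 36, 36, 36);   // page size and margins
        using (var stream = new FileStream("report.pdf", FileMode.Create))
        {
            PdfWriter.GetInstance(document, stream);
            document.Open();

            document.Add(new Paragraph("Monthly Report"));

            var table = new PdfPTable(2);        // two columns
            table.AddCell("Visits");
            table.AddCell("12,345");
            document.Add(table);

            document.Close();
        }
    }
}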
{ "language": "en", "url": "https://stackoverflow.com/questions/47709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do you determine how far to normalize a database? When creating a database structure, what are good guidelines to follow or good ways to determine how far a database should be normalized? Should you create an un-normalized database and split it apart as the project progresses? Should you create it fully normalized and combine tables as needed for performance? A: Normalization means eliminating redundant data. In other words, an un-normalized or de-normalized database is a database where the same information will be repeated in multiple different places. This means you have to write more complex update statements to ensure you update the same data everywhere; otherwise you get inconsistent data, which in turn means the output of queries is unreliable. This is a pretty huge problem, so I would say denormalization hurts, not the other way around. In some cases you may deliberately decide to denormalize specific parts of a database, if you judge that the benefit outweighs the extra work in updating data and the risk of data corruption. For example with data warehouses, where data is aggregated for performance reasons, and data is often not updated after the initial entry, which reduces the risk of inconsistencies. But in general be wary of denormalizing for performance. For example the performance benefit of a denormalized join can typically be achieved by using a materialized view (also called an indexed view), which will be as fast as querying a denormalized table, but still protects the consistency of the data. A: Jeff has a pretty good overview of his philosophy on his blog: Maybe normalization isn't normal. The main thing is: don't overdo normalization. But I think an even bigger point to take away is that it probably doesn't matter too much. Unless you're running the next Google, you probably won't notice much of a difference until your application grows. A: Database normalization, I feel, is an art form. You don't want to over-normalize your database because you will have too many tables and it will cause your queries of even simple objects to take longer than they should. A good rule of thumb I follow is to normalize the same information repeated over and over again. For example if you are creating a contact management application it would make sense to have Address (Street, City, State, Zip, etc.) as its own table. However if you have only 2 types of contacts, Business or personal, do you need a contact type table if you know you are only going to have 2? For me, no. I would start by first figuring out the datatypes you need. Use a modeling program, like Visio, to help. You don't want to start with a non-normalized database because you will eventually normalize. Start by putting objects in their logical groupings; as you see data repeated, take that data into a new table. I would keep up with that process until you feel you have the database designed. Let testing tell you if you need to combine tables. A well-written query can cover any over-normalization. A: I believe starting with an un-normalized database and moving toward normalized as you progress is usually easiest to get started. To the question of how far to normalize, my philosophy is to normalize until it starts to hurt. That may sound a little flippant, but it generally is a good way to gauge how far to take it. A: Having a normalized database will give you the most flexibility and the easiest maintenance. I always start with a normalized database and then un-normalize only when there is a real-life problem that needs addressing.
I view this similarly to code performance, i.e. write maintainable, flexible code and make compromises for performance when you know that there is a performance problem. A: The original poster never described in what situation the database will be used. If it's going to be any type of data warehousing project where at some point you will need cubes (OLAP) processing data for some front-end, it would be wiser to start off with a star schema (fact tables + dimensions) rather than looking into normalization. The Kimball books will be of great help in this case. A: You want to start designing a normalized database up to 3rd normal form. As you develop the business logic layer you may decide you have to denormalize a bit, but never, never go below the 3rd form. Always keep 1st and 2nd form compliant. You want to denormalize for simplicity of code, not for performance. Use indexes and stored procedures for that :) The reason not to "normalize as you go" is that you would have to modify the code you have already written almost every time you modify the database design. There are a couple of good articles: http://www.agiledata.org/essays/dataNormalization.html A: @GrizzlyGuru A wise man once told me "normalize till it hurts, denormalize till it works". It hasn't failed me yet :) I disagree about starting with it in un-normalized form however; in my experience it's been easier to adapt your application to deal with a less normalized database than a more-normalized one. It could also lead to situations where it's working "well enough" so you never get around to normalizing it (until it's too late!) A: I agree that it is typically better to start out with a normalized DB and then denormalize to solve very specific problems, but I'd probably start at Boyce-Codd Normal Form instead of 3rd Normal Form. A: The truth is that "it depends." It depends on a lot of factors including: * *Code (Hand-coded or Tool driven (like ETL packages)) *Primary Application (Transaction Processing, Data Warehousing, Reporting) *Type of Database (MySQL, DB/2, Oracle, Netezza, etc.) *Database Architecture (Tabular, Columnar) *DBA Quality (proactive, reactive, inactive) *Expected Data Quality (do you want to enforce data quality at the application level or the database level?) A: I agree that you should normalise as much as possible and only denormalise if absolutely necessary for performance. And with materialised views or caching schemes this is often not necessary. The thing to bear in mind is that by normalising your model you are giving the database more information on how to constrain your data, so that you can remove the risk of update anomalies that can occur in incompletely normalised models. If you denormalise then you either need to live with the fact that you may get update anomalies, or you need to implement the constraint validation yourself in your application code. This takes away a lot of the benefit of using a DBMS, which lets you define these constraints declaratively. So assuming the same quality of code, denormalising may not actually give you better performance. Another thing to mention is that hardware is cheap these days, so throwing extra processing power at the problem is often more cost effective than accepting the potential costs of cleaning up corrupted data. A: Often if you normalize as far as your other software will let you, you'll be done. For example, when using Object-Relational mapping technology, you'll have a rich set of semantics for various many-to-one and many-to-many relationships.
Under the hood that'll provide join tables with effectively 2 primary key columns. While relatively rare, true normalization often gives you relations with 3 or more primary key columns. In cases like this, I prefer to stick with the O/R and roll my own code to avoid the various DB anomalies. A: Just try to use common sense. Also, some say - and I have to agree with them - that if you find yourself joining 6 (the magic number) tables together in most of your queries - not including reporting-related ones - then you might consider denormalizing a bit.
{ "language": "en", "url": "https://stackoverflow.com/questions/47711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Are mocks better than stubs? A while ago I read the Mocks Aren't Stubs article by Martin Fowler and I must admit I'm a bit scared of external dependencies with regards to added complexity so I would like to ask: What is the best method to use when unit testing? Is it better to always use a mock framework to automatically mock the dependencies of the method being tested, or would you prefer to use simpler mechanisms like for instance test stubs? A: I generally prefer to use mocks because of Expectations. When you call a method on a stub that returns a value, it typically just gives you back a value. But when you call a method on a mock, not only does it return a value, it also enforces the expectation that you set up that the method was even called in the first place. In other words, if you set up an expectation and then don't call that method, an exception gets thrown. When you set an expectation, you are essentially saying "If this method doesn't get called, something went wrong." And the opposite is true, if you call a method on a mock and did not set an expectation, it will throw an exception, essentially saying "Hey, what are you doing calling this method when you didn't expect it." Sometimes you don't want expectations on every method you're calling, so some mocking frameworks will allow "partial" mocks that are like mock/stub hybrids, in that only the expectations you set are enforced, and every other method call is treated more like a stub in that it just returns a value. One valid place to use stubs I can think of off the top, though, is when you are introducing testing into legacy code. Sometimes it's just easier to make a stub by subclassing the class you are testing than refactoring everything to make mocking easy or even possible. And to this... Avoid using mocks always because they make tests brittle. Your tests now have intricate knowledge of the methods called by the implementation, if the mocked interface changes... your tests break. So use your best judgment..< ...I say if my interface changes, my tests had better break. Because the whole point of unit tests is that they accurately test my code as it exists right now. A: It just depends on what type of testing you are doing. If you are doing behavior based testing you might want a dynamic mock so you can verify that some interaction with your dependancy occurs. But if you are doing state based testing you might want a stub so you verify values/etc For example, in the below test you notice that I stub out the view so I can verify a property value is set (state based testing). I then create a dynamic mock of the service class so I can make sure a specific method gets called during the test (interaction / behavior based testing). 
<TestMethod()> _ Public Sub Should_Populate_Products_List_OnViewLoad_When_PostBack_Is_False() mMockery = New MockRepository() mView = DirectCast(mMockery.Stub(Of IProductView)(), IProductView) mProductService = DirectCast(mMockery.DynamicMock(Of IProductService)(), IProductService) mPresenter = New ProductPresenter(mView, mProductService) Dim ProductList As New List(Of Product)() ProductList.Add(New Product()) Using mMockery.Record() SetupResult.For(mView.PageIsPostBack).Return(False) Expect.Call(mProductService.GetProducts()).Return(ProductList).Repeat.Once() End Using Using mMockery.Playback() mPresenter.OnViewLoad() End Using 'Verify that we hit the service dependency during the method when postback is false Assert.AreEqual(1, mView.Products.Count) mMockery.VerifyAll() End Sub A: It's best to use a combination, and you'll have to use your own judgement. Here's the guidelines I use: * *If making a call to external code is part of your code's expected (outward-facing) behavior, this should be tested. Use a mock. *If the call is really an implementation detail which the outside world doesn't care about, prefer stubs. However: *If you're worried that later implementations of the tested code might accidentally go around your stubs, and you want to notice if that happens, use a mock. You're coupling your test to your code, but it's for the sake of noticing that your stub is no longer sufficient and your test needs re-working. The second kind of mock is a sort of necessary evil. Really what's going on here is that whether you use a stub or a mock, in some cases you have to couple to your code more than you'd like. When that happens, it's better to use a mock than a stub only because you'll know when that coupling breaks and your code is no longer written the way your test thought it would be. It's probably best to leave a comment in your test when you do this so that whoever breaks it knows that their code isn't wrong, the test is. And again, this is a code smell and a last resort. If you find you need to do this often, try rethinking the way you write your tests. A: Never mind Statist vs. Interaction. Think about the Roles and Relationships. If an object collaborates with a neighbour to get its job done, then that relationship (as expressed in an interface) is a candidate for testing using mocks. If an object is a simple value object with a bit of behaviour, then test it directly. I can't see the point of writing mocks (or even stubs) by hand. That's how we all started and refactored away from that. For a longer discussion, consider taking a look at http://www.mockobjects.com/book A: As the mantra goes 'Go with the simplest thing that can possibly work.' * *If fake classes can get the job done, go with them. *If you need an interface with multiple methods to be mocked, go with a mock framework. Avoid using mocks always because they make tests brittle. Your tests now have intricate knowledge of the methods called by the implementation, if the mocked interface or your implementation changes... your tests break. This is bad coz you'll spend additional time getting your tests to run instead of just getting your SUT to run. Tests should not be inappropriately intimate with the implementation. So use your best judgment.. I prefer mocks when it'll help save me writing-updating a fake class with n>>3 methods. Update Epilogue/Deliberation: (Thanks to Toran Billups for example of a mockist test. See below) Hi Doug, Well I think we've transcended into another holy war - Classic TDDers vs Mockist TDDers. 
I think I belong to the former. * *If I am on test #101 Test_ExportProductList and I find I need to add a new param to IProductService.GetProducts(), I do that to get this test green. I use a refactoring tool to update all other references. Now I find all the mockist tests calling this member blow up. Then I have to go back and update all these tests - a waste of time. Why did ShouldPopulateProductsListOnViewLoadWhenPostBackIsFalse fail? Was it because the code is broken? Rather, the tests are broken. I favor "one test failure = 1 place to fix". Mocking frequently goes against that. Would stubs be better? If I had a fake_class.GetProducts().. sure. One place to change instead of shotgun surgery over multiple Expect calls. In the end it's a matter of style.. if you had a common utility method MockHelper.SetupExpectForGetProducts() - that'd also suffice.. but you'll see that this is uncommon. *If you place a white strip on the test name, the test is hard to read. Lots of plumbing code for the mock framework hides the actual test being performed. *It requires you to learn this particular flavor of a mocking framework. A: Read Luke Kanies' discussion of exactly this question in this blog post. He references a post from Jay Fields which even suggests that using [an equivalent to Ruby's/Mocha's] stub_everything is preferable to make the tests more robust. To quote Fields' final words: "Mocha makes it as easy to define a mock as it is to define a stub, but that doesn't mean you should always prefer mocks. In fact, I generally prefer stubs and use mocks when necessary."
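To make the stub-versus-mock distinction above concrete without committing to any particular framework, here is a hand-rolled C# sketch; the IMailSender interface and its members are invented purely for illustration. The stub only supplies canned answers so a test can assert on resulting state, while the mock also records the interaction so a test can assert that the call actually happened:

using System;

public interface IMailSender
{
    bool Send(string to, string body);
}

// A stub just returns canned values so the code under test can run;
// the test then asserts on the resulting *state* of the system under test.
public class StubMailSender : IMailSender
{
    public bool Send(string to, string body) { return true; }
}

// A mock also records how it was called so the test can assert on the *interaction*.
public class MockMailSender : IMailSender
{
    public int SendCallCount { get; private set; }
    public string LastRecipient { get; private set; }

    public bool Send(string to, string body)
    {
        SendCallCount++;
        LastRecipient = to;
        return true;
    }

    public void VerifySentOnceTo(string expected)
    {
        if (SendCallCount != 1 || LastRecipient != expected)
            throw new Exception("Expected exactly one Send to " + expected);
    }
}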
{ "language": "en", "url": "https://stackoverflow.com/questions/47749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Remove duplicates from a List in C# Anyone have a quick method for de-duplicating a generic List in C#? A: Simply initialize a HashSet with a List of the same type: var noDupes = new HashSet<T>(withDupes); Or, if you want a List returned: var noDupsList = new HashSet<T>(withDupes).ToList(); A: If you're using .Net 3+, you can use Linq. List<T> withDupes = LoadSomeData(); List<T> noDupes = withDupes.Distinct().ToList(); A: Use Linq's Union method. Note: This solution requires no knowledge of Linq, aside from that it exists. Code Begin by adding the following to the top of your class file: using System.Linq; Now, you can use the following to remove duplicates from an object called, obj1: obj1 = obj1.Union(obj1).ToList(); Note: Rename obj1 to the name of your object. How it works * *The Union command lists one of each entry of two source objects. Since obj1 is both source objects, this reduces obj1 to one of each entry. *The ToList() returns a new List. This is necessary, because Linq commands like Union returns the result as an IEnumerable result instead of modifying the original List or returning a new List. A: As a helper method (without Linq): public static List<T> Distinct<T>(this List<T> list) { return (new HashSet<T>(list)).ToList(); } A: Here's an extension method for removing adjacent duplicates in-situ. Call Sort() first and pass in the same IComparer. This should be more efficient than Lasse V. Karlsen's version which calls RemoveAt repeatedly (resulting in multiple block memory moves). public static void RemoveAdjacentDuplicates<T>(this List<T> List, IComparer<T> Comparer) { int NumUnique = 0; for (int i = 0; i < List.Count; i++) if ((i == 0) || (Comparer.Compare(List[NumUnique - 1], List[i]) != 0)) List[NumUnique++] = List[i]; List.RemoveRange(NumUnique, List.Count - NumUnique); } A: Installing the MoreLINQ package via Nuget, you can easily distinct object list by a property IEnumerable<Catalogue> distinctCatalogues = catalogues.DistinctBy(c => c.CatalogueCode); A: If you don't care about the order you can just shove the items into a HashSet, if you do want to maintain the order you can do something like this: var unique = new List<T>(); var hs = new HashSet<T>(); foreach (T t in list) if (hs.Add(t)) unique.Add(t); Or the Linq way: var hs = new HashSet<T>(); list.All( x => hs.Add(x) ); Edit: The HashSet method is O(N) time and O(N) space while sorting and then making unique (as suggested by @lassevk and others) is O(N*lgN) time and O(1) space so it's not so clear to me (as it was at first glance) that the sorting way is inferior A: If you have tow classes Product and Customer and we want to remove duplicate items from their list public class Product { public int Id { get; set; } public string ProductName { get; set; } } public class Customer { public int Id { get; set; } public string CustomerName { get; set; } } You must define a generic class in the form below public class ItemEqualityComparer<T> : IEqualityComparer<T> where T : class { private readonly PropertyInfo _propertyInfo; public ItemEqualityComparer(string keyItem) { _propertyInfo = typeof(T).GetProperty(keyItem, BindingFlags.GetProperty | BindingFlags.Instance | BindingFlags.Public); } public bool Equals(T x, T y) { var xValue = _propertyInfo?.GetValue(x, null); var yValue = _propertyInfo?.GetValue(y, null); return xValue != null && yValue != null && xValue.Equals(yValue); } public int GetHashCode(T obj) { var propertyValue = _propertyInfo.GetValue(obj, null); return propertyValue == null ? 
0 : propertyValue.GetHashCode(); } } then, You can remove duplicate items in your list. var products = new List<Product> { new Product{ProductName = "product 1" ,Id = 1,}, new Product{ProductName = "product 2" ,Id = 2,}, new Product{ProductName = "product 2" ,Id = 4,}, new Product{ProductName = "product 2" ,Id = 4,}, }; var productList = products.Distinct(new ItemEqualityComparer<Product>(nameof(Product.Id))).ToList(); var customers = new List<Customer> { new Customer{CustomerName = "Customer 1" ,Id = 5,}, new Customer{CustomerName = "Customer 2" ,Id = 5,}, new Customer{CustomerName = "Customer 2" ,Id = 5,}, new Customer{CustomerName = "Customer 2" ,Id = 5,}, }; var customerList = customers.Distinct(new ItemEqualityComparer<Customer>(nameof(Customer.Id))).ToList(); this code remove duplicate items by Id if you want remove duplicate items by other property, you can change nameof(YourClass.DuplicateProperty) same nameof(Customer.CustomerName) then remove duplicate items by CustomerName Property. A: Sort it, then check two and two next to each others, as the duplicates will clump together. Something like this: list.Sort(); Int32 index = list.Count - 1; while (index > 0) { if (list[index] == list[index - 1]) { if (index < list.Count - 1) (list[index], list[list.Count - 1]) = (list[list.Count - 1], list[index]); list.RemoveAt(list.Count - 1); index--; } else index--; } Notes: * *Comparison is done from back to front, to avoid having to resort list after each removal *This example now uses C# Value Tuples to do the swapping, substitute with appropriate code if you can't use that *The end-result is no longer sorted A: I like to use this command: List<Store> myStoreList = Service.GetStoreListbyProvince(provinceId) .GroupBy(s => s.City) .Select(grp => grp.FirstOrDefault()) .OrderBy(s => s.City) .ToList(); I have these fields in my list: Id, StoreName, City, PostalCode I wanted to show list of cities in a dropdown which has duplicate values. solution: Group by city then pick the first one for the list. A: Might be easier to simply make sure that duplicates are not added to the list. if(items.IndexOf(new_item) < 0) items.add(new_item) A: It worked for me. simply use List<Type> liIDs = liIDs.Distinct().ToList<Type>(); Replace "Type" with your desired type e.g. int. A: You can use Union obj2 = obj1.Union(obj1).ToList(); A: Perhaps you should consider using a HashSet. From the MSDN link: using System; using System.Collections.Generic; class Program { static void Main() { HashSet<int> evenNumbers = new HashSet<int>(); HashSet<int> oddNumbers = new HashSet<int>(); for (int i = 0; i < 5; i++) { // Populate numbers with just even numbers. evenNumbers.Add(i * 2); // Populate oddNumbers with just odd numbers. oddNumbers.Add((i * 2) + 1); } Console.Write("evenNumbers contains {0} elements: ", evenNumbers.Count); DisplaySet(evenNumbers); Console.Write("oddNumbers contains {0} elements: ", oddNumbers.Count); DisplaySet(oddNumbers); // Create a new HashSet populated with even numbers. 
HashSet<int> numbers = new HashSet<int>(evenNumbers); Console.WriteLine("numbers UnionWith oddNumbers..."); numbers.UnionWith(oddNumbers); Console.Write("numbers contains {0} elements: ", numbers.Count); DisplaySet(numbers); } private static void DisplaySet(HashSet<int> set) { Console.Write("{"); foreach (int i in set) { Console.Write(" {0}", i); } Console.WriteLine(" }"); } } /* This example produces output similar to the following: * evenNumbers contains 5 elements: { 0 2 4 6 8 } * oddNumbers contains 5 elements: { 1 3 5 7 9 } * numbers UnionWith oddNumbers... * numbers contains 10 elements: { 0 2 4 6 8 1 3 5 7 9 } */ A: As kronoz said in .Net 3.5 you can use Distinct(). In .Net 2 you could mimic it: public IEnumerable<T> DedupCollection<T> (IEnumerable<T> input) { var passedValues = new HashSet<T>(); // Relatively simple dupe check alg used as example foreach(T item in input) if(passedValues.Add(item)) // True if item is new yield return item; } This could be used to dedupe any collection and will return the values in the original order. It's normally much quicker to filter a collection (as both Distinct() and this sample does) than it would be to remove items from it. A: How about: var noDupes = list.Distinct().ToList(); In .net 3.5? A: Another way in .Net 2.0 static void Main(string[] args) { List<string> alpha = new List<string>(); for(char a = 'a'; a <= 'd'; a++) { alpha.Add(a.ToString()); alpha.Add(a.ToString()); } Console.WriteLine("Data :"); alpha.ForEach(delegate(string t) { Console.WriteLine(t); }); alpha.ForEach(delegate (string v) { if (alpha.FindAll(delegate(string t) { return t == v; }).Count > 1) alpha.Remove(v); }); Console.WriteLine("Unique Result :"); alpha.ForEach(delegate(string t) { Console.WriteLine(t);}); Console.ReadKey(); } A: There are many ways to solve - the duplicates issue in the List, below is one of them: List<Container> containerList = LoadContainer();//Assume it has duplicates List<Container> filteredList = new List<Container>(); foreach (var container in containerList) { Container duplicateContainer = containerList.Find(delegate(Container checkContainer) { return (checkContainer.UniqueId == container.UniqueId); }); //Assume 'UniqueId' is the property of the Container class on which u r making a search if(!containerList.Contains(duplicateContainer) //Add object when not found in the new class object { filteredList.Add(container); } } Cheers Ravi Ganesan A: Here's a simple solution that doesn't require any hard-to-read LINQ or any prior sorting of the list. private static void CheckForDuplicateItems(List<string> items) { if (items == null || items.Count == 0) return; for (int outerIndex = 0; outerIndex < items.Count; outerIndex++) { for (int innerIndex = 0; innerIndex < items.Count; innerIndex++) { if (innerIndex == outerIndex) continue; if (items[outerIndex].Equals(items[innerIndex])) { // Duplicate Found } } } } A: David J.'s answer is a good method, no need for extra objects, sorting, etc. It can be improved on however: for (int innerIndex = items.Count - 1; innerIndex > outerIndex ; innerIndex--) So the outer loop goes top bottom for the entire list, but the inner loop goes bottom "until the outer loop position is reached". The outer loop makes sure the entire list is processed, the inner loop finds the actual duplicates, those can only happen in the part that the outer loop hasn't processed yet. Or if you don't want to do bottom up for the inner loop you could have the inner loop start at outerIndex + 1. 
A: A simple intuitive implementation: public static List<PointF> RemoveDuplicates(List<PointF> listPoints) { List<PointF> result = new List<PointF>(); for (int i = 0; i < listPoints.Count; i++) { if (!result.Contains(listPoints[i])) result.Add(listPoints[i]); } return result; } A: All answers copy lists, or create a new list, or use slow functions, or are just painfully slow. To my understanding, this is the fastest and cheapest method I know (also, backed by a very experienced programmer specialized on real-time physics optimization). // Duplicates will be noticed after a sort O(nLogn) list.Sort(); // Store the current and last items. Current item declaration is not really needed, and probably optimized by the compiler, but in case it's not... int lastItem = -1; int currItem = -1; int size = list.Count; // Store the index pointing to the last item we want to keep in the list int last = size - 1; // Travel the items from last to first O(n) for (int i = last; i >= 0; --i) { currItem = list[i]; // If this item was the same as the previous one, we don't want it if (currItem == lastItem) { // Overwrite last in current place. It is a swap but we don't need the last list[i] = list[last]; // Reduce the last index, we don't want that one anymore last--; } // A new item, we store it and continue else lastItem = currItem; } // We now have an unsorted list with the duplicates at the end. // Remove the last items just once list.RemoveRange(last + 1, size - last - 1); // Sort again O(n logn) list.Sort(); Final cost is: nlogn + n + nlogn = n + 2nlogn = O(nlogn) which is pretty nice. Note about RemoveRange: Since we cannot set the count of the list and avoid using the Remove funcions, I don't know exactly the speed of this operation but I guess it is the fastest way. A: Using HashSet this can be done easily. List<int> listWithDuplicates = new List<int> { 1, 2, 1, 2, 3, 4, 5 }; HashSet<int> hashWithoutDuplicates = new HashSet<int> ( listWithDuplicates ); List<int> listWithoutDuplicates = hashWithoutDuplicates.ToList(); A: Using HashSet: list = new HashSet<T>(list).ToList(); A: An extension method might be a decent way to go... something like this: public static List<T> Deduplicate<T>(this List<T> listToDeduplicate) { return listToDeduplicate.Distinct().ToList(); } And then call like this, for example: List<int> myFilteredList = unfilteredList.Deduplicate(); A: In Java (I assume C# is more or less identical): list = new ArrayList<T>(new HashSet<T>(list)) If you really wanted to mutate the original list: List<T> noDupes = new ArrayList<T>(new HashSet<T>(list)); list.clear(); list.addAll(noDupes); To preserve order, simply replace HashSet with LinkedHashSet. A: This takes distinct (the elements without duplicating elements) and convert it into a list again: List<type> myNoneDuplicateValue = listValueWithDuplicate.Distinct().ToList(); A: public static void RemoveDuplicates<T>(IList<T> list ) { if (list == null) { return; } int i = 1; while(i<list.Count) { int j = 0; bool remove = false; while (j < i && !remove) { if (list[i].Equals(list[j])) { remove = true; } j++; } if (remove) { list.RemoveAt(i); } else { i++; } } } A: If you need to compare complex objects, you will need to pass a Comparer object inside the Distinct() method. private void GetDistinctItemList(List<MyListItem> _listWithDuplicates) { //It might be a good idea to create MyListItemComparer //elsewhere and cache it for performance. 
List<MyListItem> _listWithoutDuplicates = _listWithDuplicates.Distinct(new MyListItemComparer()).ToList(); //Choose the line below instead, if you have a situation where there is a chance to change the list while Distinct() is running. //ToArray() is used to solve "Collection was modified; enumeration operation may not execute" error. //List<MyListItem> _listWithoutDuplicates = _listWithDuplicates.ToArray().Distinct(new MyListItemComparer()).ToList(); return _listWithoutDuplicates; } Assuming you have 2 other classes like: public class MyListItemComparer : IEqualityComparer<MyListItem> { public bool Equals(MyListItem x, MyListItem y) { return x != null && y != null && x.A == y.A && x.B.Equals(y.B); && x.C.ToString().Equals(y.C.ToString()); } public int GetHashCode(MyListItem codeh) { return codeh.GetHashCode(); } } And: public class MyListItem { public int A { get; } public string B { get; } public MyEnum C { get; } public MyListItem(int a, string b, MyEnum c) { A = a; B = b; C = c; } } A: I think the simplest way is: Create a new list and add unique item. Example: class MyList{ int id; string date; string email; } List<MyList> ml = new Mylist(); ml.Add(new MyList(){ id = 1; date = "2020/09/06"; email = "zarezadeh@gmailcom" }); ml.Add(new MyList(){ id = 2; date = "2020/09/01"; email = "zarezadeh@gmailcom" }); List<MyList> New_ml = new Mylist(); foreach (var item in ml) { if (New_ml.Where(w => w.email == item.email).SingleOrDefault() == null) { New_ml.Add(new MyList() { id = item.id, date = item.date, email = item.email }); } } A: As per remove duplicate, We have to apply below logic so It will remove duplicate in fast ways. public class Program { public static void Main(string[] arges) { List<string> cities = new List<string>() { "Chennai", "Kolkata", "Mumbai", "Mumbai","Chennai", "Delhi", "Delhi", "Delhi", "Chennai", "Kolkata", "Mumbai", "Chennai" }; cities = RemoveDuplicate(cities); foreach (var city in cities) { Console.WriteLine(city); } } public static List<string> RemoveDuplicate(List<string> cities) { if (cities.Count < 2) { return cities; } int size = cities.Count; for (int i = 0; i < size; i++) { for (int j = i+1; j < size; j++) { if (cities[i] == cities[j]) { cities.RemoveAt(j); size--; j--; } } } return cities; } } A: I have my own way. I am 2 looping same list for compare list items. And then remove second one. for(int i1 = 0; i1 < lastValues.Count; i1++) { for(int i2 = 0; i2 < lastValues.Count; i2++) { if(lastValues[i1].UserId == lastValues[i2].UserId) { lastValues.RemoveAt(i2); } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/47752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "615" }
Q: How-to: Ranking Search Results I have a webapp development problem that I've developed one solution for, but am trying to find other ideas that might get around some performance issues I'm seeing. problem statement: * *a user enters several keywords/tokens *the application searches for matches to the tokens *need one result for each token * *ie, if an entry has 3 tokens, i need the entry id 3 times *rank the results * *assign X points for token match *sort the entry ids based on points *if point values are the same, use date to sort results What I want to be able to do, but have not figured out, is to send 1 query that returns something akin to the results of an in(), but returns a duplicate entry id for each token matches for each entry id checked. Is there a better way to do this than what I'm doing, of using multiple, individual queries running one query per token? If so, what's the easiest way to implement those? edit I've already tokenized the entries, so, for example, "see spot run" has an entry id of 1, and three tokens, 'see', 'spot', 'run', and those are in a separate token table, with entry ids relevant to them so the table might look like this: 'see', 1 'spot', 1 'run', 1 'run', 2 'spot', 3 A: you could achive this in one query using 'UNION ALL' in MySQL. Just loop through the tokens in PHP creating a UNION ALL for each token: e.g if the tokens are 'x', 'y' and 'z' your query may look something like this SELECT * FROM `entries` WHERE token like "%x%" union all SELECT * FROM `entries` WHERE token like "%y%" union all SELECT * FROM `entries` WHERE token like "%z%" ORDER BY score ect... The order clause should operate on the entire result set as one, which is what you need. In terms of performance it won't be all that fast (I'm guessing), however with databases the main overhead in terms of speed is often sending the query to the database engine from PHP and receiving the results. With this technique this only happens once instead of once per token, so performance will increase, I just don't know if it'll be enough. A: I know this isn't strictly an answer to the question you're asking but if your table is thousands rather than millions of rows, then a FULLTEXT solution might be the best way to go here. In MySQL when you use MATCH on your indexed column, each keyword you supply will be given a relevance score (calculated roughly by the number of times each keyword was mentioned) that will be more accurate than your method and certainly more effecient for multiple keywords. See here: http://dev.mysql.com/doc/refman/5.0/en/fulltext-search.html A: If you're using the UNION ALL pattern you may also want to include the following parts to your query: SELECT COUNT(*) AS C ... GROUP BY ID ORDER BY c DESC While this is a really trivial example it does get you the frequency of the matches for each result and this could be a pseudo rank to start with. A: You'll probably get much better performance if you used a data structure designed for search tasks rather than a database. For example, you might try looking at building an inverted index. Rather than writing it youself, however, you might also want to look into something like Lucene which does most of the work for you.
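To make the inverted-index suggestion above concrete, here is a minimal in-memory sketch in Python. The (token, entry_id) rows mirror the token table shown in the question; the per-match weight and the entry dates are assumptions added purely for illustration, not part of the original setup.

from collections import defaultdict

# Assumed data: (token, entry_id) pairs like the token table in the question,
# plus a date per entry used only to break ties between equal scores.
token_rows = [("see", 1), ("spot", 1), ("run", 1), ("run", 2), ("spot", 3)]
entry_dates = {1: "2008-09-01", 2: "2008-09-03", 3: "2008-09-02"}
POINTS_PER_MATCH = 10  # assumed weight for a single token match

# Build the inverted index: token -> set of entry ids containing it.
index = defaultdict(set)
for token, entry_id in token_rows:
    index[token].add(entry_id)

def rank(query_tokens):
    # Score each entry: POINTS_PER_MATCH per query token it contains.
    scores = defaultdict(int)
    for token in query_tokens:
        for entry_id in index.get(token, ()):
            scores[entry_id] += POINTS_PER_MATCH
    # Newest first among equal scores: sort by date descending,
    # then stable-sort by points descending.
    by_date = sorted(scores.items(), key=lambda kv: entry_dates.get(kv[0], ""), reverse=True)
    return sorted(by_date, key=lambda kv: kv[1], reverse=True)

print(rank(["spot", "run"]))   # [(1, 20), (2, 10), (3, 10)]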
{ "language": "en", "url": "https://stackoverflow.com/questions/47762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Is there a way to parse a dependency tree from a build script output? I have an inherited project that uses a build script (not make) to build and link the project with various libraries. When it performs a build I would like to parse the build output to determine what and where the actual static libraries being linked into the final executable are and where are they coming from. The script is compiling and linking with GNU tools. A: You might try using the nm tool. Given the right options, it will look at a binary (archive or linked image) and tell you what objects were linked into it. Actually, here's a one-liner I use at work: #!/bin/sh nm -Ag $* | sed 's/^.*\/\(.*\.a\):/\1/' | sort -k 3 | grep -v ' U ' to find the culprits for undefined symbols. Just chop off the last grep expression and it should pretty much give you what you want. A: Static libraries, that makes life more difficult in this regard. In case of dynamic libraries you could just have used ldd on the resulting executable and be done with it. The best bet would be some kind of configuration file. Alternatively you could try to look for -l arguments to gcc/ld. Those are used to specify libraries. You could write a script for extracting it from the output, though I suspect that you will have to do it manually because by the time you know what the script should look for you probably already know the answer. A: It is probably possible to do something useful using e.g. Perl, but you would have to provide more details. On the other hand, it could be easier to simply analyze the script...
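Since the answers suggest scripting the extraction of -l arguments, here is a rough Python sketch that scans captured build output for the usual GNU link-line pieces (-L search directories, -l library names, and explicit .a paths) and reports which archive file each -l flag resolves to. The output format of any given build script is unknown, so the regular expressions and the fallback search directories are assumptions; treat this as a starting point rather than a finished tool.

import os
import re
import sys

LIB_DIR_RE = re.compile(r"(?:^|\s)-L(\S+)")   # -Ldir
LIB_NAME_RE = re.compile(r"(?:^|\s)-l(\S+)")  # -lfoo
ARCHIVE_RE = re.compile(r"\S+\.a\b")          # archives given as explicit paths

def scan(build_output, extra_dirs=("/usr/lib", "/usr/local/lib")):
    libdirs, libs, archives = [], [], []
    for line in build_output.splitlines():
        libdirs += LIB_DIR_RE.findall(line)
        libs += LIB_NAME_RE.findall(line)
        archives += ARCHIVE_RE.findall(line)
    # Resolve -lfoo to libfoo.a by searching the -L directories, then a few defaults.
    resolved = {}
    for name in libs:
        for d in list(libdirs) + list(extra_dirs):
            candidate = os.path.join(d, "lib%s.a" % name)
            if os.path.exists(candidate):
                resolved[name] = candidate
                break
        else:
            resolved[name] = "(not found as a static archive)"
    return resolved, archives

if __name__ == "__main__":
    resolved, archives = scan(sys.stdin.read())
    for name, path in sorted(resolved.items()):
        print("-l%s -> %s" % (name, path))
    for path in archives:
        print("archive: %s" % path)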
{ "language": "en", "url": "https://stackoverflow.com/questions/47780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Google App Engine: Is it possible to do a Gql LIKE query? Simple one really. In SQL, if I want to search a text field for a couple of characters, I can do: SELECT blah FROM blah WHERE blah LIKE '%text%' The documentation for App Engine makes no mention of how to achieve this, but surely it's a common enough problem? A: You need to use search service to perform full text search queries similar to SQL LIKE. Gaelyk provides domain specific language to perform more user friendly search queries. For example following snippet will find first ten books sorted from the latest ones with title containing fern and the genre exactly matching thriller: def documents = search.search { select all from books sort desc by published, SearchApiLimits.MINIMUM_DATE_VALUE where title =~ 'fern' and genre = 'thriller' limit 10 } Like is written as Groovy's match operator =~. It supports functions such as distance(geopoint(lat, lon), location) as well. A: BigTable, which is the database back end for App Engine, will scale to millions of records. Due to this, App Engine will not allow you to do any query that will result in a table scan, as performance would be dreadful for a well populated table. In other words, every query must use an index. This is why you can only do =, > and < queries. (In fact you can also do != but the API does this using a a combination of > and < queries.) This is also why the development environment monitors all the queries you do and automatically adds any missing indexes to your index.yaml file. There is no way to index for a LIKE query so it's simply not available. Have a watch of this Google IO session for a much better and more detailed explanation of this. A: i'm facing the same problem, but i found something on google app engine pages: Tip: Query filters do not have an explicit way to match just part of a string value, but you can fake a prefix match using inequality filters: db.GqlQuery("SELECT * FROM MyModel WHERE prop >= :1 AND prop < :2", "abc", u"abc" + u"\ufffd") This matches every MyModel entity with a string property prop that begins with the characters abc. The unicode string u"\ufffd" represents the largest possible Unicode character. When the property values are sorted in an index, the values that fall in this range are all of the values that begin with the given prefix. http://code.google.com/appengine/docs/python/datastore/queriesandindexes.html maybe this could do the trick ;) A: App engine launched a general-purpose full text search service in version 1.7.0 that supports the datastore. Details in the announcement. More information on how to use this: https://cloud.google.com/appengine/training/fts_intro/lesson2 A: Have a look at Objectify here , it is like a Datastore access API. There is a FAQ with this question specifically, here is the answer How do I do a like query (LIKE "foo%") You can do something like a startWith, or endWith if you reverse the order when stored and searched. You do a range query with the starting value you want, and a value just above the one you want. String start = "foo"; ... = ofy.query(MyEntity.class).filter("field >=", start).filter("field <", start + "\uFFFD"); A: Altough App Engine does not support LIKE queries, have a look at the properties ListProperty and StringListProperty. When an equality test is done on these properties, the test will actually be applied on all list members, e.g., list_property = value tests if the value appears anywhere in the list. 
Sometimes this feature might be used as a workaround to the lack of LIKE queries. For instance, it makes it possible to do simple text search, as described on this post. A: Just follow here: http://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/ext/search/init.py#354 It works! class Article(search.SearchableModel): text = db.TextProperty() ... article = Article(text=...) article.save() To search the full text index, use the SearchableModel.all() method to get an instance of SearchableModel.Query, which subclasses db.Query. Use its search() method to provide a search query, in addition to any other filters or sort orders, e.g.: query = article.all().search('a search query').filter(...).order(...) A: I tested this with the GAE Datastore low-level Java API and it works perfectly for me: Query q = new Query(Directorio.class.getSimpleName()); Filter filterNombreGreater = new FilterPredicate("nombre", FilterOperator.GREATER_THAN_OR_EQUAL, query); Filter filterNombreLess = new FilterPredicate("nombre", FilterOperator.LESS_THAN, query+"\uFFFD"); Filter filterNombre = CompositeFilterOperator.and(filterNombreGreater, filterNombreLess); q.setFilter(filterNombre); A: In general, even though this is an old post, a way to produce a 'LIKE' or 'ILIKE' is to gather all results from a '>=' query, then loop over the results in Python (or Java) for elements containing what you're looking for. Let's say you want to filter users given q='luigi': users = [] qry = self.user_model.query(ndb.OR(self.user_model.name >= q.lower(),self.user_model.email >= q.lower(),self.user_model.username >= q.lower())) for _qry in qry: if q.lower() in _qry.name.lower() or q.lower() in _qry.email.lower() or q.lower() in _qry.username.lower(): users.append(_qry) A: It is not possible to do a LIKE search on the App Engine datastore; however, creating an ArrayList would do the trick if you need to search for a word in a string. @Index public ArrayList<String> searchName; and then search the index using Objectify: List<Profiles> list1 = ofy().load().type(Profiles.class).filter("searchName =",search).list(); and this will give you a list with all the items that contain the word you searched for. A: If the LIKE '%text%' always compares to a word or a few (think permutations) and your data changes slowly (slowly means that it's not prohibitively expensive - both price-wise and performance-wise - to create and update indexes) then a Relation Index Entity (RIE) may be the answer. Yes, you will have to build an additional datastore entity and populate it appropriately. Yes, there are some constraints that you will have to work around (one is the 5000 limit on the length of a list property in the GAE datastore). But the resulting searches are lightning fast. For details see my RIE with Java and Objectify and RIE with Python posts. A: "Like" is often used as a poor man's substitute for text search. For text search, it is possible to use Whoosh-AppEngine.
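As a concrete illustration of the ListProperty/StringListProperty workaround described above, here is a short sketch using the old google.appengine.ext.db API. The model name and the idea of copying lower-cased words into a keyword list at write time are assumptions made for the example; they are not something App Engine prescribes.

from google.appengine.ext import db

class Entry(db.Model):
    text = db.TextProperty()
    # Lower-cased words copied out of `text` when the entity is written.
    keywords = db.StringListProperty()

def save_entry(text):
    entry = Entry(text=text, keywords=[w.lower() for w in text.split()])
    entry.put()
    return entry

def search_word(word):
    # An equality filter on a list property matches if *any* list element equals
    # the value, so this behaves like "contains this whole word", not a substring LIKE.
    return Entry.all().filter("keywords =", word.lower()).fetch(20)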
{ "language": "en", "url": "https://stackoverflow.com/questions/47786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "125" }
Q: Generator expressions vs. list comprehensions When should you use generator expressions and when should you use list comprehensions in Python? # Generator expression (x*2 for x in range(256)) # List comprehension [x*2 for x in range(256)] A: The important point is that the list comprehension creates a new list. The generator creates a an iterable object that will "filter" the source material on-the-fly as you consume the bits. Imagine you have a 2TB log file called "hugefile.txt", and you want the content and length for all the lines that start with the word "ENTRY". So you try starting out by writing a list comprehension: logfile = open("hugefile.txt","r") entry_lines = [(line,len(line)) for line in logfile if line.startswith("ENTRY")] This slurps up the whole file, processes each line, and stores the matching lines in your array. This array could therefore contain up to 2TB of content. That's a lot of RAM, and probably not practical for your purposes. So instead we can use a generator to apply a "filter" to our content. No data is actually read until we start iterating over the result. logfile = open("hugefile.txt","r") entry_lines = ((line,len(line)) for line in logfile if line.startswith("ENTRY")) Not even a single line has been read from our file yet. In fact, say we want to filter our result even further: long_entries = ((line,length) for (line,length) in entry_lines if length > 80) Still nothing has been read, but we've specified now two generators that will act on our data as we wish. Lets write out our filtered lines to another file: outfile = open("filtered.txt","a") for entry,length in long_entries: outfile.write(entry) Now we read the input file. As our for loop continues to request additional lines, the long_entries generator demands lines from the entry_lines generator, returning only those whose length is greater than 80 characters. And in turn, the entry_lines generator requests lines (filtered as indicated) from the logfile iterator, which in turn reads the file. So instead of "pushing" data to your output function in the form of a fully-populated list, you're giving the output function a way to "pull" data only when its needed. This is in our case much more efficient, but not quite as flexible. Generators are one way, one pass; the data from the log file we've read gets immediately discarded, so we can't go back to a previous line. On the other hand, we don't have to worry about keeping data around once we're done with it. A: The benefit of a generator expression is that it uses less memory since it doesn't build the whole list at once. Generator expressions are best used when the list is an intermediary, such as summing the results, or creating a dict out of the results. For example: sum(x*2 for x in xrange(256)) dict( (k, some_func(k)) for k in some_list_of_keys ) The advantage there is that the list isn't completely generated, and thus little memory is used (and should also be faster) You should, though, use list comprehensions when the desired final product is a list. You are not going to save any memeory using generator expressions, since you want the generated list. You also get the benefit of being able to use any of the list functions like sorted or reversed. For example: reversed( [x*2 for x in xrange(256)] ) A: Sometimes you can get away with the tee function from itertools, it returns multiple iterators for the same generator that can be used independently. A: I'm using the Hadoop Mincemeat module. 
I think this is a great example to take a note of: import mincemeat def mapfn(k,v): for w in v: yield 'sum',w #yield 'count',1 def reducefn(k,v): r1=sum(v) r2=len(v) print r2 m=r1/r2 std=0 for i in range(r2): std+=pow(abs(v[i]-m),2) res=pow((std/r2),0.5) return r1,r2,res Here the generator gets numbers out of a text file (as big as 15GB) and applies simple math on those numbers using Hadoop's map-reduce. If I had not used the yield function, but instead a list comprehension, it would have taken a much longer time calculating the sums and average (not to mention the space complexity). Hadoop is a great example for using all the advantages of Generators. A: Some notes for built-in Python functions: Use a generator expression if you need to exploit the short-circuiting behaviour of any or all. These functions are designed to stop iterating when the answer is known, but a list comprehension must evaluate every element before the function can be called. For example, if we have from time import sleep def long_calculation(value): sleep(1) # for simulation purposes return value == 1 then any([long_calculation(x) for x in range(10)]) takes about ten seconds, as long_calculation will be called for every x. any(long_calculation(x) for x in range(10)) takes only about two seconds, since long_calculation will only be called with 0 and 1 inputs. When any and all iterate over the list comprehension, they will still stop checking elements for truthiness once an answer is known (as soon as any finds a true result, or all finds a false one); however, this is usually trivial compared to the actual work done by the comprehension. Generator expressions are of course more memory efficient, when it's possible to use them. List comprehensions will be slightly faster with the non-short-circuiting min, max and sum (timings for max shown here): $ python -m timeit "max(_ for _ in range(1))" 500000 loops, best of 5: 476 nsec per loop $ python -m timeit "max([_ for _ in range(1)])" 500000 loops, best of 5: 425 nsec per loop $ python -m timeit "max(_ for _ in range(100))" 50000 loops, best of 5: 4.42 usec per loop $ python -m timeit "max([_ for _ in range(100)])" 100000 loops, best of 5: 3.79 usec per loop $ python -m timeit "max(_ for _ in range(10000))" 500 loops, best of 5: 468 usec per loop $ python -m timeit "max([_ for _ in range(10000)])" 500 loops, best of 5: 442 usec per loop A: John's answer is good (that list comprehensions are better when you want to iterate over something multiple times). However, it's also worth noting that you should use a list if you want to use any of the list methods. For example, the following code won't work: def gen(): return (something for something in get_some_stuff()) print gen()[:2] # generators don't support indexing or slicing print [5,6] + gen() # generators can't be added to lists Basically, use a generator expression if all you're doing is iterating once. If you want to store and use the generated results, then you're probably better off with a list comprehension. Since performance is the most common reason to choose one over the other, my advice is to not worry about it and just pick one; if you find that your program is running too slowly, then and only then should you go back and worry about tuning your code. A: Iterating over the generator expression or the list comprehension will do the same thing. 
However, the list comprehension will create the entire list in memory first while the generator expression will create the items on the fly, so you are able to use it for very large (and also infinite!) sequences. A: When creating a generator from a mutable object (like a list) be aware that the generator will get evaluated on the state of the list at time of using the generator, not at time of the creation of the generator: >>> mylist = ["a", "b", "c"] >>> gen = (elem + "1" for elem in mylist) >>> mylist.clear() >>> for x in gen: print (x) # nothing If there is any chance of your list getting modified (or a mutable object inside that list) but you need the state at creation of the generator you need to use a list comprehension instead. A: List comprehensions are eager but generators are lazy. In list comprehensions all objects are created right away, it takes longer to create and return the list. In generator expressions, object creation is delayed until request by next(). Upon next() generator object is created and returned immediately. Iteration is faster in list comprehensions because objects are already created. If you iterate all the elements in list comprehension and generator expression, time performance is about the same. Even though generator expression return generator object right away, it does not create all the elements. Everytime you iterate over a new element, it will create and return it. But if you do not iterate through all the elements generator are more efficient. Let's say you need to create a list comprehensions that contains millions of items but you are using only 10 of them. You still have to create millions of items. You are just wasting time for making millions of calculations to create millions of items to use only 10. Or if you are making millions of api requests but end up using only 10 of them. Since generator expressions are lazy, it does not make all the calculations or api calls unless it is requested. In this case using generator expressions will be more efficient. In list comprehensions entire collection is loaded to the memory. But generator expressions, once it returns a value to you upon your next() call, it is done with it and it does not need to store it in the memory any more. Only a single item is loaded to the memory. If you are iterating over a huge file in disk, if file is too big you might get memory issue. In this case using generator expression is more efficient. A: Python 3.7: List comprehensions are faster. Generators are more memory efficient. As all others have said, if you're looking to scale infinite data, you'll need a generator eventually. For relatively static small and medium-sized jobs where speed is necessary, a list comprehension is best. A: Use list comprehensions when the result needs to be iterated over multiple times, or where speed is paramount. Use generator expressions where the range is large or infinite. See Generator expressions and list comprehensions for more info. A: There is something that I think most of the answers have missed. List comprehension basically creates a list and adds it to the stack. In cases where the list object is extremely large, your script process would be killed. A generator would be more preferred in this case as its values are not stored in memory but rather stored as a stateful function. 
Also, creation speed: list comprehensions are slower to build than generator expressions. In short: use a list comprehension when the resulting object is not excessively large; otherwise use a generator expression. A: For functional programming, we want to use as little indexing as possible. For this reason, if we want to continue using the elements after we take the first slice of elements, islice() is a better choice since the iterator state is saved. from itertools import islice def slice_and_continue(sequence): seq_i = iter(sequence) #create an iterator over the list seq_slice = islice(seq_i,3) #take the first 3 elements and print them for x in seq_slice: print(x), for x in seq_i: print(x**2), #square the rest of the numbers slice_and_continue([1,2,3,4,5]) output: 1 2 3 16 25
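One of the answers above mentions itertools.tee without showing it; here is a minimal sketch of what it buys you when you need to run through the same generator expression more than once. Keep in mind that tee buffers the items one iterator has seen and the other has not, so it only saves memory when the two iterators are consumed roughly in step.

from itertools import tee

squares = (x * x for x in range(10))   # a generator expression: single-pass on its own
first_pass, second_pass = tee(squares)

print(sum(first_pass))                 # 285
print(max(second_pass))                # 81 -- still works, because tee buffered the values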
{ "language": "en", "url": "https://stackoverflow.com/questions/47789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "512" }
Q: User Authentication in Pylons + AuthKit I am trying to create a web application using Pylons, and the resources on the web point to the PylonsBook page, which isn't of much help. I want authentication and authorisation: is there any way to set up AuthKit to work easily with Pylons? I tried downloading the SimpleSiteTemplate from the cheeseshop but wasn't able to run the setup-app command. It throws up an error: File "/home/cnu/env/lib/python2.5/site-packages/SQLAlchemy-0.4.7-py2.5.egg/sqlalchemy/schema.py", line 96, in __call__ table = metadata.tables[key] AttributeError: 'module' object has no attribute 'tables' I use Pylons 0.9.7rc1, SQLAlchemy 0.4.7, AuthKit 0.4. A: Ok, another update on the subject. It seems that the cheeseshop template is broken. I've followed the chapter you linked in the post and it seems that AuthKit is working fine. There are some caveats: * *sqlalchemy has to be version 0.5 *authkit has to be the dev version from svn (easy_install authkit==dev) I managed to get it working fine. A: I gave up on AuthKit and rolled my own: http://tonylandis.com/openid-db-authentication-in-pylons-is-easy-with-rpx/ A: I don't think AuthKit is actively maintained anymore. It does use the Paste (http://pythonpaste.org) libs though for things like HTTP Basic/Digest authentication. I would probably go ahead and take a look at the source for some inspiration and then use the Paste tools if you want to use HTTP authentication. There is also OpenID, which is very easy to set up. The python-openid libs have an excellent example that is easy to translate to WSGI for wrapping a Pylons app. You can look at an example: http://ionrock.org/hg/brightcontent-main/file/d87b7dcc606c/brightcontent/plugins/openidauth.py A: This actually got me interested: check out this mailing on the pylons list. So AuthKit is being developed, and I will follow the book and get back on the results.
{ "language": "en", "url": "https://stackoverflow.com/questions/47801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Most elegant way to force a TEXTAREA element to line-wrap, *regardless* of whitespace Html Textarea elements only wrap when they reach a space or tab character. This is fine, until the user types a looooooooooooooooooooooong enough word. I'm looking for a way to strictly enforce line breaks (eg.: even if it results in "loooooooooooo \n ooooooooooong"). The best I've found is to add a zero-width unicode space after every letter, but this breaks copy and paste operations. Anyone know of a better way? Note: I'm referring to the "textarea" element here (i.e.: the one that behaves similarly to a text input) - not just a plain old block of text. A: * *quirksmode.org has an overview of various methods. *There's a related SO question: "In HTML, how to word-break on a dash?" *In browsers that support it, word-wrap: break-word might give the desired effect as well. A: Breaking long words at textarea width size: 1) for modern browsers: textarea { word-break: break-all; } 2) for IE8 compatibility add: textarea { -ms-word-break: break-all; } https://msdn.microsoft.com/en-us/library/ms531184%28v=vs.85%29.aspx 3) add IE11 compatibility hack: Internet Explorer 11 word wrap is not working @media all and (-ms-high-contrast:none) { *::-ms-backdrop, textarea { white-space: pre; } } This code it's working fine on: -IE 11, Chrome 51, Firefox 46 (Windows 7); -IE 8, Chrome 49, Firefox 18 (Windows Xp); -Edge 12.10240 , Opera 30 (Windows 10); A: There's the non-standard element wbr that is supported by at least Firefox, http://developer.mozilla.org/En/HTML/Element Internet Explorer, http://msdn.microsoft.com/en-us/library/ms535917(VS.85).aspx and Opera. A: The CSS settings word-wrap:break-word and text-wrap:unrestricted appear to be CSS 3 features. Good luck finding a way to do this on current implementations. A: I tested the <wbr>, &#8203; and &shy; techniques. All three worked well in IE 7, Firefox 3 and Chrome. The only one that did not break the copy/paste was the <wbr> tag. A: According to my tests, only Firefox has the described behavior among current browsers. So I guess your best bet is to wait for the imminent release of Firefox 3.1 to solve your problem :) A: The most elegant way is to use wrap="soft" for wrapping entire words or wrap="hard" for wrapping by character or wrap="off" for not wrapping at all though the last one wrap="off" is often not needed as automatically the browser uses automatically as if it was wrap="off". EXAMPLE: <textarea name="tbox" cols="24" rows="4" wrap="soft"></textarea>
{ "language": "en", "url": "https://stackoverflow.com/questions/47817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: How do you remove all the options of a select box and then add one option and select it with jQuery? Using core jQuery, how do you remove all the options of a select box, then add one option and select it? My select box is the following. <Select id="mySelect" size="9"> </Select> EDIT: The following code was helpful with chaining. However, (in Internet Explorer) .val('whatever') did not select the option that was added. (I did use the same 'value' in both .append and .val.) $('#mySelect').find('option').remove().end() .append('<option value="whatever">text</option>').val('whatever'); EDIT: Trying to get it to mimic this code, I use the following code whenever the page/form is reset. This select box is populated by a set of radio buttons. .focus() was closer, but the option did not appear selected like it does with .selected= "true". Nothing is wrong with my existing code - I am just trying to learn jQuery. var mySelect = document.getElementById('mySelect'); mySelect.options.length = 0; mySelect.options[0] = new Option ("Foo (only choice)", "Foo"); mySelect.options[0].selected="true"; EDIT: selected answer was close to what I needed. This worked for me: $('#mySelect').children().remove().end() .append('<option selected value="whatever">text</option>') ; But both answers led me to my final solution.. A: $("#id option").remove(); $("#id").append('<option value="testValue" >TestText</option>'); The first line of code will remove all the options of a select box as no option find criteria has been mentioned. The second line of code will add the Option with the specified value("testValue") and Text("TestText"). A: I had a bug in IE7 (works fine in IE6) where using the above jQuery methods would clear the select in the DOM but not on screen. Using the IE Developer Toolbar I could confirm that the select had been cleared and had the new items, but visually the select still showed the old items - even though you could not select them. The fix was to use standard DOM methods/properites (as the poster original had) to clear rather than jQuery - still using jQuery to add options. $('#mySelect')[0].options.length = 0; A: How about just changing the html to new data. $('#mySelect').html('<option value="whatever">text</option>'); Another example: $('#mySelect').html(' <option value="1" selected>text1</option> <option value="2">text2</option> <option value="3" disabled>text3</option> '); A: Building on mauretto's answer, this is a little easier to read and understand: $('#mySelect').find('option').not(':first').remove(); To remove all the options except one with a specific value, you can use this: $('#mySelect').find('option').not('[value=123]').remove(); This would be better if the option to be added was already there. A: * *First clear all exisiting option execpt the first one(--Select--) *Append new option values using loop one by one $('#ddlCustomer').find('option:not(:first)').remove(); for (var i = 0; i < oResult.length; i++) { $("#ddlCustomer").append(new Option(oResult[i].CustomerName, oResult[i].CustomerID + '/' + oResult[i].ID)); } A: Another way: $('#select').empty().append($('<option>').text('---------').attr('value','')); Under this link, there are good practices https://api.jquery.com/select/ A: $('#mySelect') .empty() .append('<option selected="selected" value="whatever">text</option>') ; A: Not sure exactly what you mean by "add one and select it", since it will be selected by default anyway. But, if you were to add more than one, it would make more sense. 
How about something like: $('select').children().remove(); $('select').append('<option id="foo">foo</option>'); $('#foo').focus(); Response to "EDIT": Can you clarify what you mean by "This select box is populated by a set of radio buttons"? A <select> element cannot (legally) contain <input type="radio"> elements. A: This will replace your existing mySelect with a new mySelect. $('#mySelect').replaceWith('<Select id="mySelect" size="9"> <option value="whatever" selected="selected" >text</option> </Select>'); A: Uses the jquery prop() to clear the selected option $('#mySelect option:selected').prop('selected', false); A: You can do simply by replacing html $('#mySelect') .html('<option value="whatever" selected>text</option>') .trigger('change'); A: I saw this code in Select2 - Clearing Selections $('#mySelect').val(null).trigger('change'); This code works well with jQuery even without Select2 A: Cleaner give me Like it let data= [] let inp = $('#mySelect') inp.empty() data.forEach(el=> inp.append( new Option(el.Nombre, el.Id) )) A: * *save the option values to be appended in an object *clear existing options in the select tag *iterate the list object and append the contents to the intended select tag var listToAppend = {'':'Select Vehicle','mc': 'Motor Cyle', 'tr': 'Tricycle'}; $('#selectID').empty(); $.each(listToAppend, function(val, text) { $('#selectID').append( new Option(text,val) ); }); <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> A: $('#mySelect') .empty() .append('<option value="whatever">text</option>') .find('option:first') .attr("selected","selected") ; A: $("#control").html("<option selected=\"selected\">The Option...</option>"); A: Just one line to remove all options from the select tag and after you can add any options then make second line to add options. $('.ddlsl').empty(); $('.ddlsl').append(new Option('Select all', 'all')); One more short way but didn't tried $('.ddlsl').empty().append(new Option('Select all', 'all')); A: I used vanilla javascript let select = document.getElementById("mySelect"); select.innerHTML = ""; A: $('#mySelect') .find('option') .remove() .end() .append('<option value="whatever">text</option>') .val('whatever') ; A: why not just use plain javascript? document.getElementById("selectID").options.length = 0; A: Thanks to the answers I received, I was able to create something like the following, which suits my needs. My question was somewhat ambiguous. Thanks for following up. My final problem was solved by including "selected" in the option that I wanted selected. $(function() { $('#mySelect').children().remove().end().append('<option selected value="One">One option</option>') ; // clear the select box, then add one option which is selected $("input[name='myRadio']").filter( "[value='1']" ).attr( "checked", "checked" ); // select radio button with value 1 // Bind click event to each radio button. 
$("input[name='myRadio']").bind("click", function() { switch(this.value) { case "1": $('#mySelect').find('option').remove().end().append('<option selected value="One">One option</option>') ; break ; case "2": $('#mySelect').find('option').remove() ; var items = ["Item1", "Item2", "Item3"] ; // Set locally for demo var options = '' ; for (var i = 0; i < items.length; i++) { if (i==0) { options += '<option selected value="' + items[i] + '">' + items[i] + '</option>'; } else { options += '<option value="' + items[i] + '">' + items[i] + '</option>'; } } $('#mySelect').html(options); // Populate select box with array break ; } // Switch end } // Bind function end ); // bind end }); // Event listener end <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <label>One<input name="myRadio" type="radio" value="1" /></label> <label>Two<input name="myRadio" type="radio" value="2" /></label> <select id="mySelect" size="9"></select> A: If your goal is to remove all the options from the select except the first one (typically the 'Please pick an item' option) you could use: $('#mySelect').find('option:not(:first)').remove(); A: I've found on the net something like below. With a thousands of options like in my situation this is a lot faster than .empty() or .find().remove() from jQuery. var ClearOptionsFast = function(id) { var selectObj = document.getElementById(id); var selectParentNode = selectObj.parentNode; var newSelectObj = selectObj.cloneNode(false); // Make a shallow copy selectParentNode.replaceChild(newSelectObj, selectObj); return newSelectObj; } More info here. A: Hope it will work $('#myselect').find('option').remove() .append($('<option></option>').val('value1').html('option1')); A: var select = $('#mySelect'); select.find('option').remove().end() .append($('<option/>').val('').text('Select')); var data = [{"id":1,"title":"Option one"}, {"id":2,"title":"Option two"}]; for(var i in data) { var d = data[i]; var option = $('<option/>').val(d.id).text(d.title); select.append(option); } select.val(''); A: Try mySelect.innerHTML = `<option selected value="whatever">text</option>` function setOne() { console.log({mySelect}); mySelect.innerHTML = `<option selected value="whatever">text</option>`; } <button onclick="setOne()" >set one</button> <Select id="mySelect" size="9"> <option value="1">old1</option> <option value="2">old2</option> <option value="3">old3</option> </Select> A: The shortest answer: $('#mySelect option').remove().append('<option selected value="whatever">text</option>'); A: Try $('#mySelect') .html('<option value="whatever">text</option>') .find('option:first') .attr("selected","selected"); OR $('#mySelect').html('<option value="4">Value 4</option> <option value="5">Value 5</option> <option value="6">Value 6</option> <option value="7">Value 7</option> <option value="8">Value 8</option>') .find('option:first') .prop("selected",true);
{ "language": "en", "url": "https://stackoverflow.com/questions/47824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1220" }
Q: HTML Compression and SEO? At work, we have a dedicated SEO Analyst who's job is to pour over lots of data (KeyNote/Compete etc) and generate up fancy reports for the executives so they can see how we are doing against our competitors in organic search ranking. He also leads initiatives to improve the SEO rankings on our sites by optimizing things as best we can. We also have a longstanding mission to decrease our page load time, which right now is pretty shoddy on some pages. The SEO guy mentioned that semantic, valid HTML gets more points by crawlers than jumbled messy HTML. I've been working on a real time HTML compressor that will decrease our page sizes my a pretty good chunk. Will compressing the HTML hurt us in site rankings? A: I would suggest using compression at the transport layer, and eliminating whitespace from the HTML, but not sacrificing the semantics of your markup in the interest of speed. In fact, the better you "compress" your markup, the less effective the transport layer compression will be. Or, to put it a better way, let the gzip transfer-coding slim your HTML for you, and pour your energy into writing clean markup that renders quickly once it hits the browser. A: Compressing HTML should not hurt you. When you say HTML compressor I assume you mean a tool that removed whitespace etc from your pages to make them smaller, right? This doesn't impact how a crawler will see your html as it likely strips the same things from the HTML when it grabs the page from your site. The 'semantic' structure of the HTML exists whether compressed or not. You might also want to look at: * *Compressing pages with an GZIP compression in the web server *Reducing size of images, CSS, javascript etc *Considering how the browser's layout engine loads your pages. By jumbled HTML, this SEO person probably means the use of tables for layout and re-purposing of built in HTML elements (eg. <p class="headerOne">Header 1</p>). This increases the ratio of HTML tags to page content, or keyword density in SEO terms. It has bigger problems though: * *Longer page load times due to increased content to download, why not use the H1 tag? *It's difficult for screenreaders to understand and affects site accessibility. *Browsers may take longer to render the content depending on how they parse and layout pages with styles. A: I once retooled a messy tables-for-layout to xhtml 1.0 transitional and the size went from 100kb to 40kb. The images loaded went from 200kb to just 50kb. The reason I got such a large savings was because the site had all the JS embedded in every page. I also retooled all the JS so it was correct for both IE6 and FF2. The images were also compiled down to an image-map. All the techniques were well documented on A List Apart and easy to implement. A: Use gzip compression to compress the HTML in the transport stage, then just make sure that code validates and that you are using logical tags for everything. A: The SEO guy mentioned that semantic, valid HTML gets more points by crawlers than jumbled messy HTML. If a SEO guy ever tries to provide a fact about SEO then tell him to provide a source, because to the best of my knowledge that is simply untrue. If the content is there it will be crawled. It is a common urban-myth amongst SEO analysts that just isn't true. However, the use of header tags is recommended. <H1> tags for the page title and <H2> for main headings, then lower down for lower headings. 
I've been working on a real time HTML compressor that will decrease our page sizes my a pretty good chunk. Will compressing the HTML hurt us in site rankings? If it can be read on the client side without problem then it is perfectly fine. If you want to look up any of this I recommend anything referencing Matt Cutt's or from the following post. FAQ: Search Engine Optimisation A: Using compression does not hurt your page ranking. Matt Cutts talks about this in his article on Crawl Caching Proxy Your page load time can also be greatly improved by resizing your images. While you can use the height and width attributes in the img tag, this does not change the size of the images that is downloaded to the browser. Resizing the images before putting them on your pages can reduce the load time by 50% or more, depending on the number and type of images that you're using. Other things that can improve your page load time are: * *Use web standards/CSS for layout instead of tables *If you copy/paste content from MS Word, strip out the extra tags that Word generates *Put CSS and javascript in external files, rather then embedded in the page. Helps when users visit more than one page on your site because browsers typically cache these files This Web Page Analyzer will give you a speed reports that shows how long different elements of your page take to download. A: First you check on the code. The code is validate w3c standards like HTML & CSS
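Several answers above recommend enabling gzip at the transport layer rather than aggressively minifying the markup. If you want to verify that a server actually serves compressed responses, a quick check along these lines may help; the URL is a placeholder, and the script only inspects the Content-Encoding header of a single response.

import urllib.request

def check_gzip(url):
    # Ask for a gzipped response and see whether the server honours it.
    request = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(request) as response:
        encoding = response.headers.get("Content-Encoding", "")
        length = response.headers.get("Content-Length", "unknown")
        print(url)
        print("  Content-Encoding:", encoding or "(none)")
        print("  Content-Length:", length)
        return "gzip" in encoding

if __name__ == "__main__":
    check_gzip("http://www.example.com/")  # placeholder URL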
{ "language": "en", "url": "https://stackoverflow.com/questions/47827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: In C#, what is the best way to test if a dataset is empty? I know you can look at the row.count or tables.count, but are there other ways to tell if a dataset is empty? A: What's wrong with (aDataSet.Tables.Count == 0) ? A: I have created a small static util class just for that purpose Below code should read like an English sentence. public static bool DataSetIsEmpty(DataSet ds) { return !DataTableExists(ds) && !DataRowExists(ds.Tables[0].Rows); } public static bool DataTableExists(DataSet ds) { return ds.Tables != null && ds.Tables.Count > 0; } public static bool DataRowExists(DataRowCollection rows) { return rows != null && rows.Count > 0; } I would just put something like below code and be done with it. Writing a readable code does count. if (DataAccessUtil.DataSetIsEmpty(ds)) { return null; } A: I think this is a place where you could use an extension method in C# 3 to improve legibility. Using kronoz's idea... public static bool IsNotEmpty ( this dataset ) { return dataSet != null && ( from DataTable t in dataSet.Tables where t.Rows.AsQueryable().Any() select t).AsQueryable().Any(); } //then the check would be DataSet ds = /* get data */; ds.IsNotEmpty(); Due to the fact that extension methods are always expanded by the compiler this will even work if the dataset being checked is null. At compile time this is changed: ds.IsNotEmpty(); //becomes DataSetExtensions.IsNotEmpty( ds ); A: I would suggest something like:- bool nonEmptyDataSet = dataSet != null && (from DataTable t in dataSet.Tables where t.Rows.Count > 0 select t).Any(); Edits: I have significantly cleaned up the code after due consideration, I think this is much cleaner. Many thanks to Keith for the inspiration regarding the use of .Any(). In line with Keith's suggestion, here is an extension method version of this approach:- public static class ExtensionMethods { public static bool IsEmpty(this DataSet dataSet) { return dataSet == null || !(from DataTable t in dataSet.Tables where t.Rows.Count > 0 select t).Any(); } } Note, as Keith rightly corrected me on in the comments of his post, this method will work even when the data set is null. A: To be clear, you would first need to look at all the DataTables, and then look at the count of Rows for each DataTable. A: #region Extension methods public static class ExtensionMethods { public static bool IsEmpty(this DataSet dataSet) { return dataSet == null || dataSet.Tables.Count == 0 || !dataSet.Tables.Cast<DataTable>().Any(i => i.Rows.Count > 0); } } #endregion
{ "language": "en", "url": "https://stackoverflow.com/questions/47833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Getting the base element from a jQuery object I'm struggling to find the right terminology here, but if you have jQuery object... $('#MyObject') ...is it possible to extract the base element? Meaning, the equivalent of this: document.getElementById('MyObject') A: $('#MyObject').get(0); I think that's what you want. I think you can also reference it like a regular array with: $('#MyObject')[0]; But I'm not sure if that will always work. Stick with the first syntax. A: A jQuery object is a set of elements. In your case, a set of one element. This differs from certain other libraries, which wrap single elements and provide alternate syntax for selectors that return multiple matches. Aaron W and VolkerK already explained how to access the first (index 0) element in the set. A: Yes, use .get(index). According to the documentation: The .get() method grants access to the DOM nodes underlying each jQuery object. A: I tested Aaron's statements on all the browsers I have available on my box: $('#MyObject').get(0); vs $('#MyObject')[0]; As far as I can tell, it is only a matter of personal preference. Functionally, both these statements are equivalent for both existing and non-existing elements. I tested the following browsers: Chrome 27.0, FF 21.0, IE10, IE9, IE8, IE7, IE6. In the speed tests that I ran, it was not always possible to tell which variation was faster; the outcome was not always consistent, even on the same browser. For the speed tests, I only tested existing elements. My test results are here.
{ "language": "en", "url": "https://stackoverflow.com/questions/47837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "69" }
Q: Justification for Reflection in C# I have wondered about the appropriateness of reflection in C# code. For example I have written a function which iterates through the properties of a given source object and creates a new instance of a specified type, then copies the values of properties with the same name from one to the other. I created this to copy data from one auto-generated LINQ object to another in order to get around the lack of inheritance from multiple tables in LINQ. However, I can't help but think code like this is really 'cheating', i.e. rather than using using the provided language constructs to achieve a given end it allows you to circumvent them. To what degree is this sort of code acceptable? What are the risks? What are legitimate uses of this approach? A: Sometimes using reflection can be a bit of a hack, but a lot of the time it's simply the most fantastic code tool. Look at the .Net property grid - anyone who's used Visual Studio will be familiar with it. You can point it at any object and it it will produce a simple property editor. That uses reflection, in fact most of VS's toolbox does. Look at unit tests - they're loaded by reflection (at least in NUnit and MSTest). Reflection allows dynamic-style behaviour from static languages. The one thing it really needs is duck typing - the C# compiler already supports this: you can foreach anything that looks like IEnumerable, whether it implements the interface or not. You can use the C#3 collection syntax on any class that has a method called Add. Use reflection wherever you need dynamic-style behaviour - for instance you have a collection of objects and you want to check the same property on each. The risks are similar for dynamic types - compile time exceptions become run time ones. You code is not as 'safe' and you have to react accordingly. The .Net reflection code is very quick, but not as fast as the explicit call would have been. A: I agree, it gives me the it works but it feels like a hack feeling. I try to avoid reflection whenever possible. I have been burned many times after refactoring code which had reflection in it. Code compiles fine, tests even run, but under special circumstances (which the tests didn't cover) the program blows up run-time because of my refactoring in one of the objects the reflection code poked into. Example 1: Reflection in OR mapper, you change the name or the type of the property in your object model: Blows up run-time. Example 2: You are in a SOA shop. Web Services are complete decoupled (or so you think). They have their own set of generated proxy classes, but in the mapping you decide to save some time and you do this: ExternalColor c = (ExternalColor)Enum.Parse(typeof(ExternalColor), internalColor.ToString()); Under the covers this is also reflection but done by the .net framework itself. Now what happens if you decide to rename InternalColor.Grey to InternalColor.Gray? Everything looks ok, it builds fine, and even runs fine.. until the day some stupid user decides to use the color Gray... at which point the mapper will blow up. A: Reflection is a wonderful tool that I could not live without. It can make programming much easier and faster. For instance, I use reflection in my ORM layer to be able to assign properties with column values from tables. If it wasn't for reflection I have had to create a copy class for each table/class mapping. As for the external color exception above. The problem is not Enum.Parse, but that the coder didnt not catch the proper exception. 
Since a string is parsed, the coder should always assume that the string can contain an incorrect value. The same problem applies to all advanced programming in .Net. "With great power, comes great responsibility". Using reflection gives you much power. But make sure that you know how to use it properly. There are dozens of examples on the web. A: It may be just me, but the way I'd get into this is by creating a code generator - using reflection at runtime is a bit costly and untyped. Creating classes that would get generated according to your latest code and copy everything in a strongly typed manner would mean that you will catch these errors at build-time. For instance, a generated class may look like this: static class AtoBCopier { public static B Copy(A item) { return new B() { Prop1 = item.Prop1, Prop2 = item.Prop2 }; } } If either class doesn't have the properties or their types change, the code doesn't compile. Plus, there's a huge improvement in times. A: I recently used reflection in C# for finding implementations of a specific interface. I had written a simple batch-style interpreter that looked up "actions" for each step of the computation based on the class name. Reflecting the current namespace then pops up the right implementation of my IStep inteface that can be Execute()ed. This way, adding new "actions" is as easy as creating a new derived class - no need to add it to a registry, or even worse: forgetting to add it to a registry... A: Reflection makes it very easy to implement plugin architectures where plugin DLLs are automatically loaded at runtime (not explicitly linked at compile time). These can be scanned for classes that implement/extend relevant interfaces/classes. Reflection can then be used to instantiate instances of these on demand.
{ "language": "en", "url": "https://stackoverflow.com/questions/47838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Why is creating a new process more expensive on Windows than Linux? I've heard that creating a new process on a Windows box is more expensive than on Linux. Is this true? Can somebody explain the technical reasons for why it's more expensive and provide any historical reasons for the design decisions behind those reasons?

A: Uh, there seems to be a lot of "it's better this way" sort of justification going on. I think people could benefit from reading "Showstopper", the book about the development of Windows NT. The whole reason the services run as DLLs in one process on Windows NT was that they were too slow as separate processes. If you got down and dirty you'd find that the library loading strategy is the problem. On Unices (in general) the code segments of shared libraries (DLLs) are actually shared. Windows NT loads a copy of the DLL per process, because it manipulates the library code segment (and executable code segment) after loading, to tell the code where its data is. This results in code segments in libraries that are not reusable. So, NT process creation is actually pretty expensive. And on the down side, it means DLLs give no appreciable saving in memory, just a chance of inter-app dependency problems. Sometimes it pays in engineering to step back and say, "now, if we were going to design this to really suck, what would it look like?" I worked with an embedded system that was quite temperamental once upon a time, and one day looked at it and realized it was a cavity magnetron, with the electronics in the microwave cavity. We made it much more stable (and less like a microwave) after that.

A: mweerden: NT has been designed for multi-user from day one, so this is not really a reason. However, you are right that process creation plays a less important role on NT than on Unix, as NT, in contrast to Unix, favors multithreading over multiprocessing. Rob, it is true that fork is relatively cheap when COW is used, but as a matter of fact, fork is mostly followed by an exec. And an exec has to load all images as well. Discussing the performance of fork therefore is only part of the truth. When discussing the speed of process creation, it is probably a good idea to distinguish between NT and Windows/Win32. As far as NT (i.e. the kernel itself) goes, I do not think process creation (NtCreateProcess) and thread creation (NtCreateThread) are significantly slower than on the average Unix. There might be a little bit more going on, but I do not see the primary reason for the performance difference here. If you look at Win32, however, you'll notice that it adds quite a bit of overhead to process creation. For one, it requires CSRSS to be notified about process creation, which involves LPC. It requires at least kernel32 to be loaded additionally, and it has to perform a number of additional bookkeeping tasks before the process is considered to be a full-fledged Win32 process. And let's not forget about all the additional overhead imposed by parsing manifests, checking if the image requires a compatibility shim, checking whether software restriction policies apply, yada yada. That said, I see the overall slowdown in the sum of all those little things that have to be done in addition to the raw creation of a process, VA space, and initial thread. But as said in the beginning - due to the favoring of multithreading over multiprocessing, the only software that is seriously affected by this additional expense is poorly ported Unix software.
Although this situation changes when software like Chrome and IE8 suddenly rediscover the benefits of multiprocessing and begin to frequently start up and tear down processes...

A: Adding to what JP said: most of the overhead belongs to Win32 startup for the process. The Windows NT kernel actually does support COW fork. SFU (Microsoft's UNIX environment for Windows) uses it. However, Win32 does not support fork. SFU processes are not Win32 processes. SFU is orthogonal to Win32: they are both environment subsystems built on the same kernel. In addition to the out-of-process LPC calls to CSRSS, in XP and later there is an out-of-process call to the application compatibility engine to find the program in the application compatibility database. This step causes enough overhead that Microsoft provides a group policy option to disable the compatibility engine on WS2003 for performance reasons. The Win32 runtime libraries (kernel32.dll, etc.) also do a lot of registry reads and initialization on startup that don't apply to UNIX, SFU or native processes. Native processes (with no environment subsystem) are very fast to create. SFU does a lot less than Win32 for process creation, so its processes are also fast to create. UPDATE FOR 2019: add LXSS - Windows Subsystem for Linux. Replacing SFU for Windows 10 is the LXSS environment subsystem. It is 100% kernel mode and does not require any of the IPC that Win32 continues to have. Syscalls for these processes are directed directly to lxss.sys/lxcore.sys, so the fork() or other process-creating call only costs 1 system call for the creator, total. [A data area called the instance] keeps track of all LX processes, threads, and runtime state. LXSS processes are based on native processes, not Win32 processes. All the Win32-specific stuff like the compatibility engine isn't engaged at all.

A: As there seems to be some justification of MS-Windows in some of the answers, e.g.

* "NT kernel and Win32 are not the same thing. If you program to the NT kernel then it is not so bad" - True, but unless you are writing a POSIX subsystem, then who cares? You will be writing to Win32.
* "It is not fair to compare fork with ProcessCreate, as they do different things, and Windows does not have fork" - True, so I will compare like with like. However I will also compare fork, because it has many, many use cases, such as process isolation (e.g. each tab of a web browser runs in a different process).

Now let us look at the facts: what is the difference in performance? Data summarised from http://www.bitsnbites.eu/benchmarking-os-primitives/. Because bias is inevitable, when summarising, I did it in favour of MS-Windows. Hardware for most tests: i7, 8 core, 3.2 GHz, except the Raspberry-Pi, which runs Gnu/Linux. Notes: on Linux, fork is faster than MS-Windows' preferred method, CreateThread. Numbers for process-creation-type operations (because it is hard to see the value for Linux in the chart), in order of speed, fastest to slowest (numbers are time, small is better):

* Linux CreateThread 12
* Mac CreateThread 15
* Linux Fork 19
* Windows CreateThread 25
* Linux CreateProcess (fork+exec) 45
* Mac Fork 105
* Mac CreateProcess (fork+exec) 453
* Raspberry-Pi CreateProcess (fork+exec) 501
* Windows CreateProcess 787
* Windows CreateProcess with virus scanner 2850
* Windows Fork (simulated with CreateProcess + fixup) greater than 2850

Numbers for other measurements:

* Creating a file:
  * Linux 13
  * Mac 113
  * Windows 225
  * Raspberry-Pi (with slow SD card) 241
  * Windows with Defender and virus scanner etc. 12950
* Allocating memory:
  * Linux 79
  * Windows 93
  * Mac 152

A: Unix has a 'fork' system call which 'splits' the current process into two, and gives you a second process that is identical to the first (modulo the return from the fork call). Since the address space of the new process is already up and running, this should be cheaper than calling 'CreateProcess' in Windows and having it load the exe image, associated DLLs, etc. In the fork case the OS can use 'copy-on-write' semantics for the memory pages associated with both processes to ensure that each one gets its own copy of the pages it subsequently modifies.

A: In addition to the answer of Rob Walker: Nowadays you have things like the Native POSIX Thread Library - if you want. But for a long time the only way to "delegate" work in the Unix world was to use fork() (and it's still preferred in many, many circumstances), e.g. some kind of socket server:

socket_accept()
fork()
if (child)
    handleRequest()
else
    goOnBeingParent()

Therefore the implementation of fork had to be fast, and lots of optimizations have been implemented over time. Microsoft endorsed CreateThread or even fibers, instead of creating new processes and the usage of interprocess communication. I think it's not "fair" to compare CreateProcess to fork since they are not interchangeable. It's probably more appropriate to compare fork/exec to CreateProcess.

A: The key to this matter is the historical usage of both systems, I think. Windows (and DOS before that) have originally been single-user systems for personal computers. As such, these systems typically don't have to create a lot of processes all the time; (very) simply put, a process is only created when this one lonely user requests it (and we humans don't operate very fast, relatively speaking). Unix-based systems have originally been multi-user systems and servers. Especially for the latter it is not uncommon to have processes (e.g. mail or http daemons) that split off processes to handle specific jobs (e.g. taking care of one incoming connection). An important factor in doing this is the cheap fork method (which, as mentioned by Rob Walker (47865), initially uses the same memory for the newly created process), which is very useful as the new process immediately has all the information it needs. It is clear that at least historically the need for Unix-based systems to have fast process creation is far greater than for Windows systems. I think this is still the case because Unix-based systems are still very process oriented, while Windows, due to its history, has probably been more thread oriented (threads being useful to make responsive applications). Disclaimer: I'm by no means an expert on this matter, so forgive me if I got it wrong.

A: The short answer is "software layers and components". The Windows SW architecture has a couple of additional layers and components that don't exist on Unix or are simplified and handled inside the kernel on Unix. On Unix, fork and exec are direct calls to the kernel. On Windows, the kernel API is not used directly; there is Win32 and certain other components on top of it, so process creation must go through extra layers and then the new process must start up or connect to those layers and components.
For quite some time researchers and corporations have attempted to break up Unix in a vaguely similar way, usually basing their experiments on the Mach kernel; a well-known example is OS X. Every time they try, though, it gets so slow they end up at least partially merging the pieces back into the kernel, either permanently or for production shipments.

A: All that, plus there's the fact that on the Windows machine most probably antivirus software will kick in during CreateProcess... That's usually the biggest slowdown.

A: It's also worth noting that the security model in Windows is vastly more complicated than in Unix-based OSs, which adds a lot of overhead during process creation. Yet another reason why multithreading is preferred to multiprocessing in Windows.
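To make the two code paths the answers compare concrete, here is a rough C sketch of each (illustrative only: the example commands, the single-file #ifdef layout, and the minimal error handling are assumptions, not taken from the thread):

/* Sketch: spawn a child process and wait for it. */
#ifdef _WIN32
#include <windows.h>

int spawn_child(void)
{
    /* One CreateProcess call maps the image, loads DLLs, and triggers the
       Win32-side work the answers describe (CSRSS notification, compatibility checks). */
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };
    char cmd[] = "cmd.exe /c dir";          /* the buffer may be modified by the call */
    if (!CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
        return -1;
    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}
#else
#include <unistd.h>
#include <sys/wait.h>

int spawn_child(void)
{
    /* fork() duplicates the caller using copy-on-write pages;
       exec*() then replaces the child's image. */
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        execl("/bin/ls", "ls", "-l", (char *)0);
        _exit(127);                          /* reached only if exec fails */
    }
    waitpid(pid, NULL, 0);
    return 0;
}
#endif

The POSIX path splits the cost between the cheap copy-on-write fork and the exec that follows, which is the fork/exec versus CreateProcess comparison several of the answers suggest making.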
{ "language": "en", "url": "https://stackoverflow.com/questions/47845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "115" }
Q: Refer to/select a drive based only on its label? (i.e., not the drive letter) I'm trying to refer to a drive whose letter may change. I'd like to refer to it by its label (e.g., MyLabel (V:)) within a batch file. It can be referred to by V:\ . I'd like to refer to it by MyLabel. (This was posted on Experts Exchange for a month with no answer. Let's see how fast SO answers it.)

A: The previous answers seem either overly complicated or not particularly suited to a batch file. This simple one-liner should place the desired drive letter in the variable myDrive. Obviously change "My Label" to your actual label.

for /f %%D in ('wmic volume get DriveLetter^, Label ^| find "My Label"') do set myDrive=%%D

If run from the command line (not in a batch file), then %%D must be changed to %D in both places. Once the variable is set, you can refer to the drive using %myDrive%. For example:

dir %myDrive%\someFolder

A: You can use the WMI query language for that. Take a look at http://msdn.microsoft.com/en-us/library/aa394592(VS.85).aspx for examples. The information you are looking for is available e.g. through the property VolumeName of the Win32_LogicalDisk class, http://msdn.microsoft.com/en-us/library/aa394173(VS.85).aspx

SELECT * FROM Win32_LogicalDisk WHERE VolumeName="MyLabel"

A: This VBScript file (run with cscript) will give you the drive letter from a drive label:

Option Explicit
Dim num, args, objWMIService, objItem, colItems
set args = WScript.Arguments
num = args.Count
if num <> 1 then
    WScript.Echo "Usage: CScript DriveFromLabel.vbs <label>"
    WScript.Quit 1
end if
Set objWMIService = GetObject("winmgmts:\\.\root\cimv2")
Set colItems = objWMIService.ExecQuery("Select * from Win32_LogicalDisk")
For Each objItem in colItems
    If strcomp(objItem.VolumeName, args.Item(0), 1) = 0 Then
        Wscript.Echo objItem.Name
    End If
Next
WScript.Quit 0

Run it as: cscript /nologo DriveFromLabel.vbs label

A: Here is a simple batch script getdrive.cmd to find a drive letter from a volume label. Just call "getdrive MyLabel" or getdrive "My Label".

@echo off
setlocal

:: Initial variables
set TMPFILE=%~dp0getdrive.tmp
set driveletters=abcdefghijklmnopqrstuvwxyz
set MatchLabel_res=

for /L %%g in (2,1,25) do call :MatchLabel %%g %*
if not "%MatchLabel_res%"=="" echo %MatchLabel_res%
goto :END

:: Function to match a label with a drive letter.
::
:: The first parameter is an integer from 1..26 that needs to be
:: converted into a letter. It is easier looping on a number
:: than looping on letters.
::
:: The second parameter is the volume name passed on to the script.
:MatchLabel
:: result already found, just do nothing
:: (necessary because there is no break for for loops)
if not "%MatchLabel_res%"=="" goto :eof

:: get the proper drive letter
call set dl=%%driveletters:~%1,1%%

:: strip off the " in the volume name to be able to add them again further down
set volname=%2
set volname=%volname:"=%

:: get the volume information on that disk
vol %dl%: > "%TMPFILE%" 2>&1

:: Drive/Volume does not exist, just quit
if not "%ERRORLEVEL%"=="0" goto :eof

set found=0
for /F "usebackq tokens=3 delims=:" %%g in (`find /C /I "%volname%" "%TMPFILE%"`) do set found=%%g

:: trick to strip any whitespace
set /A found=%found% + 0
if not "%found%"=="0" set MatchLabel_res=%dl%:
goto :eof

:END
if exist "%TMPFILE%" del "%TMPFILE%"
endlocal
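Building on the wmic one-liner above, a full script would typically also handle the case where the label is not found. A minimal sketch (the label, folder, and error handling are placeholders, not part of the original answers):

@echo off
rem Resolve the drive letter for the volume labelled "My Label".
set "myDrive="
for /f %%D in ('wmic volume get DriveLetter^, Label ^| find "My Label"') do set "myDrive=%%D"
if not defined myDrive (
    echo Volume "My Label" is not mounted.
    exit /b 1
)
dir "%myDrive%\someFolder"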
{ "language": "en", "url": "https://stackoverflow.com/questions/47849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How do you create a virtual network interface on Windows? On Linux, it's possible to create a tun interface using a tun driver which provides a "network interface pseudo-device" that can be treated as a regular network interface. Is there a way to do this programmatically on Windows? Is there a way to do this without writing my own driver?

A: @Tim Depending on the licensing you might be able to use the TUN/TAP driver that is part of OpenVPN, see here for details.

A: You can do this on XP with the Microsoft Loopback Adapter, which is a driver for a virtual network card. On newer Windows versions: Installing the Microsoft Loopback Adapter in Windows 8 and Windows Server 2012

A: In the Singularity project, Microsoft Research communicates with the Singularity VM through a "loopback" adapter. Maybe that'd help? Running it is easy so it may be something fun to do anyway. :) http://research.microsoft.com/os/Singularity/
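If you need the loopback-adapter route in a script rather than through Device Manager, it can usually be installed from the command line with the devcon utility (shipped with the Windows Driver Kit); the INF path and hardware ID below are the usual defaults and should be verified on your system:

devcon.exe install %windir%\inf\netloop.inf *msloop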
{ "language": "en", "url": "https://stackoverflow.com/questions/47854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: Upgrade database from SQL Server 2000 to 2005 -- and rebuild full-text indexes? I'm loading a SQL Server 2000 database into my new SQL Server 2005 instance. As expected, the full-text catalogs don't come with it. How can I rebuild them? Right-clicking my full-text catalogs and hitting "rebuild indexes" just hangs for hours and hours without doing anything, so it doesn't appear to be that simple...

A: Try it using SQL:

* CREATE FULLTEXT CATALOG
* ALTER FULLTEXT CATALOG

Here's an example from Microsoft.

--Change to accent insensitive
USE AdventureWorks;
GO
ALTER FULLTEXT CATALOG ftCatalog REBUILD WITH ACCENT_SENSITIVITY=OFF;
GO
-- Check accent sensitivity
SELECT FULLTEXTCATALOGPROPERTY('ftCatalog', 'accentsensitivity');
GO
--Returned 0, which means the catalog is not accent sensitive.

A: Thanks, that helped because it showed what was wrong: my file paths were different. Here's how I fixed it:

1) Load the database from the SQL 2000 backup.

2) Set the compatibility mode to SQL 2005:

USE mydb
GO
EXEC sp_dbcmptlevel N'mydb', 90  -- compatibility level 90 = SQL Server 2005
GO

3) Get the full-text catalog file names:

SELECT name FROM sys.master_files mf
WHERE type = 4
AND EXISTS(
    SELECT * FROM sys.databases db
    WHERE db.database_id = mf.database_id AND name = 'mydb')

4) Then for each name (I did this in a little script):

ALTER DATABASE mydb MODIFY FILE(
    NAME = {full text catalog name},
    FILENAME = 'N:\ew\path\to\wherever')

5) Then collect all the "readable" names of the catalogs:

SELECT name FROM sys.sysfulltextcatalogs

6) Finally, now you can rebuild each one:

ALTER FULLTEXT CATALOG {full text catalog name} REBUILD
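If there are many catalogs, a small helper query can generate the rebuild statements in one pass (a sketch only; review the generated statements before running them). sys.fulltext_catalogs is the catalog-view counterpart of the sys.sysfulltextcatalogs compatibility view used above.

SELECT 'ALTER FULLTEXT CATALOG [' + name + '] REBUILD;'
FROM sys.fulltext_catalogs;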
{ "language": "en", "url": "https://stackoverflow.com/questions/47862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }