Q: Handling HttpRequestValidationException gracefully and ASP.net AJAX compatible? ValidateRequest is a great ASP.net feature, but the Yellow Screen of Death is not so nice. I found a way to handle the HttpRequestValidationException gracefully here, but that does not work properly with ASP.net AJAX. Basically, I have an UpdatePanel with a TextBox and a Button, and when the user types HTML into the TextBox, a JavaScript popup with an error message saying not to modify the Response appears. So I wonder what the best way is to handle HttpRequestValidationException gracefully? For "normal" requests I would like to just display an error message, but when it's an AJAX request I'd like to throw the request away and return something to indicate an error, so that my frontend page can react to it.

A: Found it and blogged about it. Basically, the EndRequestHandler and args.set_errorHandled are our friends here.

    <script type="text/javascript" language="javascript">
        var prm = Sys.WebForms.PageRequestManager.getInstance();
        prm.add_endRequest(EndRequestHandler);

        function EndRequestHandler(sender, args) {
            if (args.get_error() != undefined) {
                var errorMessage;
                if (args.get_response().get_statusCode() == '200') {
                    errorMessage = args.get_error().message;
                } else {
                    // Error occurred somewhere other than the server page.
                    errorMessage = 'An unspecified error occurred. ';
                }
                args.set_errorHandled(true);
                $get('<%= this.newsletterLabel.ClientID %>').innerHTML = errorMessage;
            }
        }
    </script>

A: That's what I would like to avoid if possible, but this seems to be much more complicated than expected. Normally, everyone advises using the AsyncPostBackError event of the ScriptManager, but this does not work if called from Global.asax. Unfortunately, as the HttpRequestValidationException is emitted by the runtime, it never enters my code and I cannot do much within Application_Error. So yes, it indeed needs to be done in JavaScript; I just hope there is a way to add a "hook" like the BeginRequestHandler function so that I don't have to "hack" Microsoft code. If I find a solution before someone else, I'll put it up here :-)

A: Hmmm, it seems you would need to find some sort of JavaScript to check for HTML input, or a client-side validator.
{ "language": "en", "url": "https://stackoverflow.com/questions/47864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Programmatically determine how many comments a blog post has What is the most efficient way to determine how many comments a particular blog post has? We want to store the data for a new web app. We have a list of permalink URLs as well as the RSS feeds.

A: If the blog is controlled by you, a "SELECT COUNT(commentid) FROM comments WHERE postID = 2" will possibly be the best thing. If you only have the URL but it's still your blog/db, you need to create a subquery: "WHERE postID = (SELECT whatever FROM posts WHERE permalink = url)", or whatever your way is to join the comments to the posts from a URL. If it's a remote blog, you have the problem that each blog has different HTML. Essentially, you're going to need to build a parser that parses the HTML and looks for repeating elements like "div class=comment". But that will be mostly manual labour for each different blog. Some blogs may have better ways, like a comment count somewhere in the HTML or some interface, but I'm not aware of any standardized way. EDIT: If you have a comment RSS feed, you may have luck using a mechanism that counts XML nodes, like XPath's count().

A: If I understand correctly, you want a heuristic to estimate the number of comments in an HTML page which is known to be a blog post, yes? Very often, a specific blog will have some features which make it easy to work out. If you look at mine over at http://kstruct.com/ you'll see that all the pages with comments say 'X Responses', so if you were able to do some work on a per-blog basis, it's probably not really difficult. If you needed something generic, I guess there are a few common features of comments that you might be able to detect. For one, any links in them are quite likely to have rel="nofollow" attributes, so seeing that within a block might imply that it's a comment. The main interesting thing to look for would be changes in the structure of posts from the same site. For example, there's also a very good chance that each comment will have its own anchor so people can link directly to it, so you could look at the differing numbers of <a name="XXX"> tags in a given page on the same site to get an idea of the relative numbers of comments. As Michael Stum pointed out, if the pages have a comment RSS feed, your life is made a lot easier because you can get the comment data in a structured format. All in all, though, I think it's going to be quite a challenging problem to solve in general.

A: Blogs almost always have an RSS feed for comments. If you have that, then you can determine the exact number of comments, since the feeds follow a standard 99% of the time. Even if the blog is your own, if you are already generating an RSS feed, then don't bother making a call to your DB. You already did that to generate the feed, so it makes sense to just traverse the XML nodes. That way you don't have additional overhead (depending on how often you want this information).
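As an illustration of the XPath count() idea mentioned in the answers above, here is a minimal sketch in C using libxml2. Two assumptions: the feed is standard RSS 2.0 (one <item> element per comment) and has been saved to a hypothetical local file comments.xml.

    #include <stdio.h>
    #include <libxml/parser.h>
    #include <libxml/xpath.h>

    int main(void) {
        /* "comments.xml" is a hypothetical local copy of a comment feed. */
        xmlDocPtr doc = xmlReadFile("comments.xml", NULL, 0);
        if (doc == NULL) return 1;
        xmlXPathContextPtr ctx = xmlXPathNewContext(doc);
        /* In RSS 2.0, each comment is one <item> under /rss/channel. */
        xmlXPathObjectPtr res =
            xmlXPathEvalExpression(BAD_CAST "count(/rss/channel/item)", ctx);
        if (res != NULL && res->type == XPATH_NUMBER)
            printf("comment count: %.0f\n", res->floatval);
        xmlXPathFreeObject(res);
        xmlXPathFreeContext(ctx);
        xmlFreeDoc(doc);
        xmlCleanupParser();
        return 0;
    }

Compile with something like cc count.c $(xml2-config --cflags --libs).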
{ "language": "en", "url": "https://stackoverflow.com/questions/47869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: What is a magic number, and why is it bad? What is a magic number? Why should it be avoided? Are there cases where it's appropriate?

A: I assume this is a response to my answer to your earlier question. In programming, a magic number is an embedded numerical constant that appears without explanation. If it appears in two distinct locations, it can lead to circumstances where one instance is changed and not another. For both these reasons, it's important to isolate and define numerical constants outside the places where they're used.

A: A magic number is a direct usage of a number in the code. For example, if you have (in Java):

    public class Foo {
        public void setPassword(String password) {
            // don't do this
            if (password.length() > 7) {
                throw new InvalidArgumentException("password");
            }
        }
    }

This should be refactored to:

    public class Foo {
        public static final int MAX_PASSWORD_SIZE = 7;

        public void setPassword(String password) {
            if (password.length() > MAX_PASSWORD_SIZE) {
                throw new InvalidArgumentException("password");
            }
        }
    }

It improves the readability of the code, and it's easier to maintain. Imagine the case where I set the size of the password field in the GUI. If I use a magic number, whenever the max size changes, I have to change it in two code locations. If I forget one, this will lead to inconsistencies. The JDK is full of examples, as in the Integer, Character and Math classes. PS: Static analysis tools like FindBugs and PMD detect the use of magic numbers in your code and suggest the refactoring.

A: I've always used the term "magic number" differently, as an obscure value stored within a data structure which can be verified as a quick validity check. For example, gzip files contain 0x1f8b08 as their first three bytes, Java class files start with 0xcafebabe, etc. You often see magic numbers embedded in file formats, because files can be sent around rather promiscuously and lose any metadata about how they were created. However, magic numbers are also sometimes used for in-memory data structures, like ioctl() calls. A quick check of the magic number before processing the file or data structure allows one to signal errors early, rather than schlep all the way through potentially lengthy processing in order to announce that the input was complete balderdash.

A: Have you taken a look at the Wikipedia entry for magic number? It goes into a bit of detail about all of the ways the term is used. Here's a quote about magic numbers as a bad programming practice:

    The term magic number also refers to the bad programming practice of using numbers directly in source code without explanation. In most cases this makes programs harder to read, understand, and maintain. Although most guides make an exception for the numbers zero and one, it is a good idea to define all other numbers in code as named constants.

A: Magic Number vs. Symbolic Constant: when to replace?

Magic: unknown semantic. Symbolic constant: provides both the correct semantic and the correct context for use. Semantic: the meaning or purpose of a thing.

"Create a constant, name it after the meaning, and replace the number with it." -- Martin Fowler

First, magic numbers are not just numbers. Any basic value can be "magic". Basic values are manifest entities such as integers, reals, doubles, floats, dates, strings, booleans, characters, and so on. The issue is not the data type, but the "magic" aspect of the value as it appears in our code text. What do we mean by "magic"?
To be precise: by "magic", we intend to point to the semantics (meaning or purpose) of the value in the context of our code; that it is unknown, unknowable, unclear, or confusing. This is the notion of "magic". A basic value is not magic when its semantic meaning or purpose-of-being-there is quickly and easily known, clear, and understood (not confusing) from the surrounding context without special helper words (e.g. a symbolic constant). Therefore, we identify magic numbers by measuring the ability of a code reader to know, be clear about, and understand the meaning and purpose of a basic value from its surrounding context. The less known, less clear, and more confused the reader is, the more "magic" the basic value is.

Basics

We have two scenarios for our magic basic values. Only the second is of primary importance for programmers and code:

* A lone basic value (e.g. a number) whose meaning is unknown, unknowable, unclear or confusing.
* A basic value (e.g. a number) in context, but whose meaning remains unknown, unknowable, unclear or confusing.

An overarching dependency of "magic" is how the lone basic value (e.g. a number) has no commonly known semantic (like Pi), but has a locally known semantic (e.g. your program), which is not entirely clear from context or could be abused in good or bad context(s). The semantics of most programming languages will not allow us to use lone basic values, except (perhaps) as data (i.e. tables of data). When we encounter "magic numbers", we generally do so in a context. Therefore, the answer to "Do I replace this magic number with a symbolic constant?" is: "How quickly can you assess and understand the semantic meaning of the number (its purpose for being there) in its context?"

Kind of Magic, but not quite

With this thought in mind, we can quickly see how a number like Pi (3.14159) is not a "magic number" when placed in proper context (e.g. 2 x 3.14159 x radius, or 2*Pi*r). Here, the number 3.14159 is mentally recognized as Pi without the symbolic constant identifier. Still, we generally replace 3.14159 with a symbolic constant identifier like Pi because of the length and complexity of the number. The aspects of length and complexity of Pi (coupled with a need for accuracy) usually mean the symbolic identifier or constant is less prone to error. Recognition of "Pi" as a name is simply a convenient bonus, but is not the primary reason for having the constant.

Meanwhile: Back at the Ranch

Laying aside common constants like Pi, let's focus primarily on numbers with special meanings, but whose meanings are constrained to the universe of our software system. Such a number might be "2" (as a basic integer value). If I use the number 2 by itself, my first question might be: What does "2" mean? The meaning of "2" by itself is unknown and unknowable without context, leaving its use unclear and confusing. Even though having just "2" in our software will not happen because of language semantics, we do want to see that "2" by itself carries no special semantics or obvious purpose on its own. Let's put our lone "2" in a context of: padding := 2, where the context is a "GUI Container". In this context the meaning of 2 (as pixels or some other graphical unit) offers us a quick guess at its semantics (meaning and purpose). We might stop here and say that 2 is okay in this context and there is nothing else we need to know. However, perhaps in our software universe this is not the whole story. There is more to it, but "padding = 2" as a context cannot reveal it.
Let's further pretend that 2 as pixel padding in our program is of the "default_padding" variety throughout our system. Therefore, writing the instruction padding = 2 is not good enough. The notion of "default" is not revealed. Only when I write padding = default_padding as a context, and then elsewhere default_padding = 2, do I fully realize a better and fuller meaning (semantic and purpose) of 2 in our system. The example above is pretty good because "2" by itself could be anything. Only when we limit the range and domain of understanding to "my program", where 2 is the default_padding in the GUI UX parts of "my program", do we finally make sense of "2" in its proper context. Here "2" is a "magic" number which is factored out to a symbolic constant default_padding within the context of the GUI UX of "my program", in order to make its use as default_padding quickly understood in the greater context of the enclosing code. Thus, any basic value whose meaning (semantic and purpose) cannot be sufficiently and quickly understood is a good candidate for a symbolic constant in place of the basic value (e.g. magic number).

Going Further

Numbers on a scale might have semantics as well. For example, pretend we are making a D&D game, where we have the notion of a monster. Our monster object has a feature called life_force, which is an integer. The numbers have meanings that are not knowable or clear without words to supply meaning. Thus, we begin by arbitrarily saying:

* full_life_force: INTEGER = 10 -- Very alive (and unhurt)
* minimum_life_force: INTEGER = 1 -- Barely alive (very hurt)
* dead: INTEGER = 0 -- Dead
* undead: INTEGER = -1 -- Min undead (almost dead)
* zombie: INTEGER = -10 -- Max undead (very undead)

From the symbolic constants above, we start to get a mental picture of the aliveness, deadness, and "undeadness" (and possible ramifications or consequences) for our monsters in our D&D game. Without these words (symbolic constants), we are left with just the numbers ranging from -10 .. 10. Just the range without the words leaves us in a place of possibly great confusion, and potentially with errors in our game if different parts of the game have dependencies on what that range of numbers means to various operations like attack_elves or seek_magic_healing_potion. Therefore, when searching for and considering replacement of "magic numbers", we want to ask very purpose-filled questions about the numbers within the context of our software, and even how the numbers interact semantically with each other.

Conclusion

Let's review what questions we ought to ask. You might have a magic number if ...

* Can the basic value have a special meaning or purpose in your software's universe?
* Can the special meaning or purpose likely be unknown, unknowable, unclear, or confusing, even in its proper context?
* Can a proper basic value be improperly used with bad consequences in the wrong context?
* Can an improper basic value be properly used with bad consequences in the right context?
* Does the basic value have semantic or purpose relationships with other basic values in specific contexts?
* Can a basic value exist in more than one place in our code with different semantics in each, thereby causing our reader confusion?

Examine stand-alone manifest constant basic values in your code text. Ask each question slowly and thoughtfully about each instance of such a value. Consider the strength of your answer.
Many times, the answer is not black and white, but has shades of misunderstood meaning and purpose, speed of learning, and speed of comprehension. There is also a need to see how it connects to the software machine around it. In the end, the answer to replacement is the measure (in your mind) of the reader's ability to make the connection (e.g. "get it"). The more quickly they understand meaning and purpose, the less "magic" you have. CONCLUSION: Replace basic values with symbolic constants only when the magic is large enough to cause difficult-to-detect bugs arising from confusion.

A: A magic number is a sequence of characters at the start of a file format or protocol exchange. This number serves as a sanity check. Example: open up any GIF file and you will see at the very start: GIF89a. "GIF89a" is the magic number. Other programs can read the first few characters of a file and properly identify GIFs. The danger is that random binary data can contain these same characters, but it is very unlikely. As for protocol exchange, you can use it to quickly identify that the current 'message' being passed to you is corrupted or not valid. Magic numbers are still useful.

A: In programming, a "magic number" is a value that should be given a symbolic name, but was instead slipped into the code as a literal, usually in more than one place. It's bad for the same reason SPOT (Single Point of Truth) is good: if you wanted to change this constant later, you would have to hunt through your code to find every instance. It is also bad because it might not be clear to other programmers what this number represents, hence the "magic". People sometimes take magic number elimination further, by moving these constants into separate files to act as configuration. This is sometimes helpful, but can also create more complexity than it's worth.

A: It is worth noting that sometimes you do want non-configurable "hard-coded" numbers in your code. There are a number of famous ones, including 0x5F3759DF, which is used in the optimized inverse square root algorithm. In the rare cases where I find the need to use such magic numbers, I set them as a const in my code, and document why they are used, how they work, and where they came from.

A: A magic number is a hard-coded value that may change at a later stage, and that can therefore be hard to update. For example, let's say you have a page that displays the last 50 orders in a "Your Orders" overview page. 50 is the magic number here, because it's not set through standard or convention; it's a number that you made up for reasons outlined in the spec. Now, what you do is you have the 50 in different places - your SQL script (SELECT TOP 50 * FROM orders), your website (Your Last 50 Orders), your order logic (for (i = 0; i < 50; i++)), and possibly many other places. Now, what happens when someone decides to change 50 to 25? Or 75? Or 153? You now have to replace the 50 in all those places, and you are very likely to miss one. Find/Replace may not work, because 50 may be used for other things, and blindly replacing 50 with 25 can have some other bad side effects (i.e. your Session.Timeout = 50 call, which also gets set to 25, and users start reporting too frequent timeouts). Also, the code can be hard to understand, i.e. "if a < 50 then bla" - if you encounter that in the middle of a complicated function, other developers who are not familiar with the code may ask themselves "WTF is 50???"
That's why it's best to have such ambiguous and arbitrary numbers in exactly 1 place - "const int NumOrdersToDisplay = 50" - because that makes the code more readable ("if a < NumOrdersToDisplay"), and it also means you only need to change it in 1 well-defined place. Places where magic numbers are appropriate are values defined through a standard, i.e. SmtpClient.DefaultPort = 25, or TCPPacketSize = whatever (not sure if that is standardized). Also, anything only defined within 1 function might be acceptable, but that depends on context.

A: A problem that has not been mentioned with using magic numbers: if you have very many of them, the odds are reasonably good that you have two different purposes that you're using magic numbers for, where the values happen to be the same. And then, sure enough, you need to change the value... for only one purpose.

A: A magic number can also be a number with special, hardcoded semantics. For example, I once saw a system where record IDs > 0 were treated normally, 0 itself was "new record", -1 was "this is the root" and -99 was "this was created in the root". 0 and -99 would cause the WebService to supply a new ID. What's bad about this is that you're reusing a space (that of signed integers for record IDs) for special abilities. Maybe you'll never want to create a record with ID 0, or with a negative ID, but even if not, every person who looks either at the code or at the database might stumble on this and be confused at first. It goes without saying those special values weren't well-documented. Arguably, 22, 7, -12 and 620 count as magic numbers, too. ;-)

A: What about initializing a variable at the top of the class with a default value? For example:

    public class SomeClass {
        private int maxRows = 15000;
        ...
        // Inside another method
        for (int i = 0; i < maxRows; i++) {
            // Do something
        }

        public void setMaxRows(int maxRows) {
            this.maxRows = maxRows;
        }

        public int getMaxRows() {
            return this.maxRows;
        }
    }

In this case, 15000 is a magic number (according to Checkstyle). To me, setting a default value is okay. I don't want to have to do:

    private static final int DEFAULT_MAX_ROWS = 15000;
    private int maxRows = DEFAULT_MAX_ROWS;

Does that make it more difficult to read? I never considered this until I installed Checkstyle.

A: @eed3si9n: I'd even suggest that '1' is a magic number. :-) A principle that's related to magic numbers is that every fact your code deals with should be declared exactly once. If you use magic numbers in your code (such as the password length example that @marcio gave), you can easily end up duplicating that fact, and when your understanding of that fact changes, you've got a maintenance problem.

A: Another advantage of extracting a magic number as a constant is the possibility of clearly documenting the business information.

    public class Foo {
        /**
         * Max age in years to get the child rate for airline tickets
         *
         * The value of the constant is {@value}
         */
        public static final int MAX_AGE_FOR_CHILD_RATE = 2;

        public void computeRate() {
            if (person.getAge() < MAX_AGE_FOR_CHILD_RATE) {
                applyChildRate();
            }
        }
    }

A: What about return values? I find it especially challenging when implementing stored procedures. Imagine the next stored procedure (wrong syntax, I know, just to show an example):

    int procGetIdCompanyByName(string companyName);

It returns the Id of the company if it exists in a particular table. Otherwise, it returns -1. Somehow it's a magic number.
Some of the recommendations I've read so far say that I'd really have to design something like this:

    int procGetIdCompanyByName(string companyName, bool existsCompany);

By the way, what should it return if the company does not exist? OK: it will set existsCompany to false, but it will also return -1. Another option is to make two separate functions:

    bool procCompanyExists(string companyName);
    int procGetIdCompanyByName(string companyName);

So a pre-condition for the second stored procedure is that the company exists. But I'm afraid of concurrency, because in this system a company can be created by another user. The bottom line, by the way, is: what do you think about using that kind of "magic number" that is relatively well known and safe for signalling that something is unsuccessful or that something does not exist?
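Returning to the file-format sense of "magic number" from the gzip/GIF answers above, here is a minimal C sketch of such a sanity check. The gzip signature bytes (0x1f 0x8b, then 0x08 for the deflate method) are as described in that answer; everything else is illustrative.

    #include <stdio.h>

    /* Returns 1 if the file starts with the gzip signature:
       ID bytes 0x1f 0x8b followed by compression method 0x08 (deflate). */
    int looks_like_gzip(const char *path) {
        unsigned char buf[3];
        FILE *f = fopen(path, "rb");
        if (f == NULL) return 0;
        size_t n = fread(buf, 1, sizeof buf, f);
        fclose(f);
        return n == 3 && buf[0] == 0x1f && buf[1] == 0x8b && buf[2] == 0x08;
    }

A check like this lets a program reject garbage input early, exactly as the answer describes.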
{ "language": "en", "url": "https://stackoverflow.com/questions/47882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "597" }
Q: How Does gcc on Solaris Find Its Libraries? I'm trying to install 'quadrupel', a library that relies on ffmpeg, on Solaris x86. I managed to build ffmpeg; its libraries live in /opt/gnu/lib and the includes are in /opt/gnu/include, but when I try to build quadrupel, it can't find the ffmpeg headers. What flags/configuration are required to include those two directories in the proper search paths for libraries and includes? I'm not much of a Makefile hacker.

A: I believe you need to add the following to the Makefile:

    CFLAGS  += -I/opt/gnu/include
    LDFLAGS += -L/opt/gnu/lib -R/opt/gnu/lib

The -I argument tells gcc where to find the include files. The -L flag tells ld where to find the libraries while linking. The -R flag writes /opt/gnu/lib into the library search path in the quadrupel binary, so it can find its libraries when it starts.

A: You can override the path by setting the environment variable LD_LIBRARY_PATH. However, I would suggest changing the system paths as well so you don't have to change the library path for all users. This can be done using crle:

    crle -l -c /var/ld/ld.config -l /usr/lib:/usr/local/lib:/opt/gnu/lib

For the includes, just add -I/opt/gnu/include to your CFLAGS variable.
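Putting the first answer's flags together, a complete Makefile might look roughly like this. This is a sketch only: the source file name quadrupel.c and the ffmpeg library names (-lavformat, -lavcodec) are assumptions, not taken from the question.

    # Recipe lines must be indented with a tab character.
    CC       = gcc
    CFLAGS  += -I/opt/gnu/include
    LDFLAGS += -L/opt/gnu/lib -R/opt/gnu/lib
    LDLIBS   = -lavformat -lavcodec

    quadrupel: quadrupel.o
    	$(CC) $(LDFLAGS) -o quadrupel quadrupel.o $(LDLIBS)

    quadrupel.o: quadrupel.c
    	$(CC) $(CFLAGS) -c quadrupel.c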
{ "language": "en", "url": "https://stackoverflow.com/questions/47883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is it worth learning to use MSBuild? I simply wondered whether people thought it was worth learning to use the MSBuild syntax in order to customise the build process for a .net project, or whether it is really not worth it given the ease with which one can build a project using Visual Studio. I am thinking in terms of nightly builds, etc., but then couldn't I use a scheduled event which uses the command-line build option built into VS? Are there superior tools out there?

A: MSBuild is definitely worth learning for anyone and everyone writing .NET software. The reason a build server for .NET apps no longer requires Visual Studio to be installed (as Andrew Burns mentioned) is because MSBuild is part of the .NET Framework now. Knowing MSBuild will give you significant flexibility in choosing what technologies you use to implement continuous integration. Because I took the time to learn MSBuild, I was able to change the CI system one of our teams was using from CruiseControl.NET to TeamCity without much difficulty. Those CI servers, or something like FinalBuilder (which I'm not familiar with), are better options for performing nightly builds than a scheduled task. Learning how to implement custom MSBuild tasks will give you even more flexibility in implementing custom builds. Jivko Petiov listed a number of tasks that MSBuild makes easier. In the case of database deployment and configuration, I've written scripts that do this in MSBuild, and it makes the development and testing process much easier. If Visual Studio Team System is in your future, applications built using MSBuild will be much easier to move into that environment than those built via alternative means. There are a lot of resources available to help you get started with MSBuild. I'd start with Inside the Microsoft Build Engine. One of the co-authors also has a ton of stuff on the web, including this site, and a project on CodePlex.

A: It sounds like you are a single developer working on your own site. If this is the case, it's not necessary at all, but it is still a good idea to learn as part of your professional experience. Automated building of projects becomes more necessary as the number of developers working on a project increases. It is very easy for two developers to write incompatible code which will break when it is combined (imagine I'm calling a function foo(int x), and you change the signature to foo(int x, int y): when we combine our code bases, the code will break). These types of errors increase in complexity and hassle with the amount of time between integration builds. By setting up nightly builds, or even builds that occur on every check-in, these problems are greatly reduced. This practice is pretty much the industry standard across projects with multiple developers. So now, to answer your question: this is a skill that will span across projects and companies. You should learn it to broaden your knowledge and skills as a developer, and to add an important line to your resume.

A: Well, MSBuild is built in, so if you are doing something simple, then yes, it is recommended. But for something like nightly builds, I would suggest FinalBuilder. See this question on Build/Configuration Management Tools.

A: MSBuild is incredibly simple to use: you can use VS to manage the project and solution files and just pass the SLN to MSBuild.

A: In a scenario such as yours, where you do not already have a build system, MSBuild is absolutely worth it.
Not only can you use it for a variety of pre-build and post-build tasks (see Jivko Petiov's answer), but you can also integrate it nicely into a continuous integration environment (such as CruiseControl). One scenario where it might not be worth it is when you already have an automated/scripted build system in place. For example, I myself haven't taken the time with MSBuild because I've been using NAnt for this task since before MSBuild existed ...

A: @kronoz I would say YES. The neat thing about MSBuild is that if you modify your csproj files to include custom build steps, then those steps will happen from within VS or from MSBuild. Also, if you ever have a build server, you will not need to install the full VS; only the SDK is needed to build your projects.

A: MSBuild is absolutely worth the time to learn. After the initial learning curve (which might actually be very steep), it becomes fairly easy to do the most common build automation steps:

* building assemblies in RELEASE mode
* signing assemblies with a strong name
* running unit tests
* modifying XML files / Web.configs on the fly
* modifying the version number of the assemblies
* validating with FxCop / StyleCop etc.
* automated deployment - creating SQL databases, IIS websites, Windows services etc.

A: Building from the command line with MSBuild is relatively easy to learn. Start by opening a Visual Studio Command Prompt and running msbuild /?. Just read through the help once and then decide later if you want to learn more details. Writing project files is a bit more complicated. Most people don't need to learn it, because you can do most things in Visual Studio. However, it's also quite powerful for certain problems. I have in the past used MSBuild as a scripting language, combined with lots of custom tasks. MSBuild has fantastic logging support plus built-in dependency management. However, it's not an easy language to learn. PowerShell is a much better choice.

A: If you develop in a .NET shop, it is worth learning. I have integrated our build process with Jenkins (originally Hudson). As mentioned above, MSBuild has a steep learning curve; however, once you grasp the fundamentals, you can start customizing the build. My impression so far (which could be naive) is that the bulk of a script consists of:

    <PropertyGroup>
        <PropertyKey>value</PropertyKey>
    </PropertyGroup>
    <ItemGroup>
        <ItemListKey>List values</ItemListKey>
    </ItemGroup>
    <Task Source="" Target="" />

Besides using it for builds, I successfully used MSBuild to create a module that manages configuration files such as web.config and foo.exe.config. It is a hybrid module that consists of a .NET console app, an MSBuild script and a batch file. What this module does is that during a project upgrade, it creates an XML transform template with connection strings, endpoints and appSettings from the old configuration files. After the project has been upgraded, the module transforms the newly deployed configuration files without affecting any new entries. If you have dozens of configuration files this is very effective.

A: "@kronoz I would say YES. ... Also if you ever have a build server you will not need to install full VS, only the SDK to build your projects." ==> This is not entirely true. For example, building a setup project on a build server will require Visual Studio to be installed!
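To give a feel for the syntax sketched in the answer above, here is a minimal self-contained MSBuild project file. The file name hello.proj, the property name and the target name are all made up for illustration; it can be run with msbuild hello.proj /t:SayHello.

    <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <PropertyGroup>
        <Greeting>Hello from MSBuild</Greeting>
      </PropertyGroup>
      <Target Name="SayHello">
        <!-- The built-in Message task prints the property's value. -->
        <Message Text="$(Greeting)" Importance="high" />
      </Target>
    </Project>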
{ "language": "en", "url": "https://stackoverflow.com/questions/47884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: SEO Superstitions: Are <script> tags bad?

A: There are several reasons to avoid inline/internal JavaScript:

* HTML is for structure, not behavior or style. For the same reason you should not put CSS directly in HTML elements, you should not put JS there.
* If your client does not support JS, you just pushed a lot of junk. Wasted bandwidth.
* External JS files are cached. That saves some bandwidth.
* You'll have decentralized JavaScript. That leads to code repetition and all the known problems that come with it.

A: I don't know about the SEO aspect of this (because I can never tell the mumbo jumbo from the real deal). But as Douglas Crockford pointed out in one of his JavaScript webcasts, the browser stops to parse the script at each script element. So, if possible, I'd rather deliver the whole document and enhance the page as late as possible with scripts anyway. Something like:

    <head>
        --stylesheets--
    </head>
    <body>
        Lorem ipsum dolor ...
        ...
        <script src="theFancyStuff.js"></script>
    </body>

A: It's been ages since I've played the reading-Google's-tea-leaves game, but there are a few reasons your SEO expert might be saying this:

* Three or four years back there was a bit of conventional wisdom floating around that the search engine algorithms would give more weight to search terms that happened sooner in the page. If all other things were equal on pages A and B, and page A mentions widgets earlier in the HTML file than page B, page A "wins". It's not that Google's engineers and PhD employees couldn't skip over the script blocks; it's that they found a valuable metric in their presence. Taking that into account, it's easy to see how, unless something "needs" (see #2 below) to be in the head of a document, an SEO-obsessed person would want it out.
* The SEO people who aren't offering a quick fix tend to be proponents of well-crafted, validating/conforming HTML/XHTML structure. Inline JavaScript, particularly the kind web-ignorant software engineers tend to favor, makes these people (I'm one) seethe. The bias against script tags themselves could also stem from some of the work Yahoo and others have done in optimizing Ajax applications (don't make the browser parse JavaScript until it has to). Not necessarily directly related to SEO, but a best practice a white-hat SEO type will have picked up.
* It's also possible you're misunderstanding each other. Content that's generated by JavaScript is considered controversial in the SEO world. It's not that Google can't "see" this content; it's that people are unsure how its presence will rank the page, as a lot of black-hat SEO games revolve around hiding and showing content with JavaScript.

SEO is at best Kremlinology and at worst a field that the black hats won over a long time ago. My free unsolicited advice is to stay out of the SEO game, present your managers with estimates as to how long it will take to implement their SEO-related changes, and leave it at that.

A: I've read in a few places that Google's spiders only index the first 100KB of a page. 20KB of JS at the top of your page would mean 20KB of content later on that Google wouldn't see, etc. Mind you, I have no idea if this fact is still true, but when you combine it with the rest of the superstition/rumors/outright quackery you find in the dark underbelly of SEO forums, it starts to make a strange sort of sense. This is in addition to the fact that inline JS is a Bad Thing with respect to the separation of presentation, content, and behavior, as mentioned in other answers.
A: Your SEO guru is slightly off the mark, but I understand the concern. This has nothing to do with whether or not the practice is proper, or whether or not a certain number of script tags is looked upon poorly by Google, but everything to do with page weight. Google stops caching after (I think) 150KB. The more inline scripts your page contains, the greater the chance important content will not be indexed because those scripts added too much weight.

A: I've spent some time working on search engines (not Google), but have never really done much from an SEO perspective. Anyway, here are some factors which Google could reasonably use to penalise a page, and which are increased by including big blocks of JavaScript inline:

* Overall page size.
* Page download time (a mix of page size and download speed).
* How early in the page the search terms occurred (they might ignore script tags, but that's a lot more processing).

Script tags with lots of inline JavaScript might be interpreted as bad on their own. If users frequently loaded a lot of pages from the site, they'd find it much faster if the script was in a single shared file.

A: I would agree with all of the other comments, but would add that when a page has more than just <p> around the content, you are putting your faith in Google to interpret the mark-up correctly, and that is always a risky thing to do. Content is king, and if Google can't read the content perfectly then it's just another reason for Google to not show you the love.

A: Lots of SEO activities are not recommended by the search engines. You can use <script> tags, but not excessively. Even the Google Analytics snippet is code in a <script> tag.

A: This is an old question, but still pretty relevant! In my experience, script tags are bad if they cause your site to load slowly. Site speed actually does have an impact on your appearance in SERPs, but script tags in and of themselves aren't necessarily bad for SEO.
{ "language": "en", "url": "https://stackoverflow.com/questions/47886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Can UDP data be delivered corrupted? Is it possible for UDP data to come to you corrupted? I know it is possible for it to be lost.

A: Possible? Absolutely. Undetected? Unlikely, since UDP employs a checksum that would require multiple-bit errors to appear valid. If an error is detected, the system will likely drop the packet - such are the risks of using UDP.

A: UDP packets can also be delivered out of order, so if you are devising a protocol on top of UDP you have to take that into account as well.

A: A common form of "corruption" that affects unsuspecting programmers is datagram truncation. See "Unix Network Programming" by Stevens for more information (page 539 in the 2nd ed.). You might check the MSG_TRUNC flag...

A: UDP packets use a 16-bit checksum. It is not impossible for UDP packets to have corruption, but it's pretty unlikely. In any case, it is not more susceptible to corruption than TCP.

A: First of all, the "IP checksum" referenced above is only an IP header checksum. It does not protect the payload. See RFC 791. Secondly, UDP allows transport with NO checksum, which means that the 16-bit checksum is set to 0 (i.e., none). See RFC 768. (An all-zero transmitted checksum value means that the transmitter generated no checksum.) Thirdly, as others have mentioned, UDP has a 16-bit checksum, which is not the best way to detect a multi-bit error, but is not bad. It is certainly possible for an undetected error to sneak in, but very unlikely.

A: Short answer: YES. Detailed answer: About 7 years ago (maybe 2011?) we found that UDP datagrams were unintentionally changed when exchanged between a computer in China and another one in Korea. Of course, the checksum in the UDP packet header was also recalculated to match the changed payload. There was no malware on the two computers. We found that the unintentional change only occurs when these conditions match:

* The first several bytes of a datagram are similar to those of the previous datagram
* It only occurs when UDP datagrams go from one nation to another

I don't know the cause exactly, but I roughly guess it is China's Golden Shield. So we added a datagram-garbling algorithm to our software ProudNet and the problem went away. It is not difficult to implement. Just encode or obfuscate the first several bytes of your datagram.
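Several answers above refer to the 16-bit UDP checksum from RFC 768. As a sketch, the underlying one's-complement sum looks roughly like this (simplified: the real UDP checksum also covers a pseudo-header containing the source and destination IP addresses):

    #include <stddef.h>
    #include <stdint.h>

    /* One's-complement sum of 16-bit words, RFC 1071 style. */
    uint16_t inet_checksum(const void *data, size_t len) {
        const uint8_t *p = data;
        uint32_t sum = 0;
        while (len > 1) {                     /* sum 16-bit words */
            sum += ((uint32_t)p[0] << 8) | p[1];
            p += 2;
            len -= 2;
        }
        if (len == 1)                         /* pad an odd trailing byte */
            sum += (uint32_t)p[0] << 8;
        while (sum >> 16)                     /* fold carries back into the low 16 bits */
            sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)~sum;
    }

A single flipped bit changes the sum and is caught; as the answers note, only fairly unlucky multi-bit errors can cancel out and slip through.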
{ "language": "en", "url": "https://stackoverflow.com/questions/47901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: UDP vs TCP, how much faster is it? For general protocol message exchange which can tolerate some packet loss, how much more efficient is UDP than TCP?

A: UDP is faster than TCP, and the simple reason is the absence of the acknowledgment packet (ACK): UDP permits a continuous packet stream, whereas TCP acknowledges sets of packets, calculated using the TCP window size and round-trip time (RTT). For more information, I recommend the simple but very comprehensible Skullbox explanation (TCP vs. UDP).

A: UDP is slightly quicker in my experience, but not by much. The choice shouldn't be made on performance but on the message content and compression techniques. If it's a protocol with message exchange, I'd suggest that the very slight performance hit you take with TCP is more than worth it. You're given a connection between two endpoints that will give you everything you need. Don't try to manufacture your own reliable two-way protocol on top of UDP unless you're really, really confident in what you're undertaking.

A: There has been some work done to allow the programmer to have the benefits of both worlds.

SCTP: It is an independent transport-layer protocol, but it can be used as a library providing an additional layer over UDP. The basic unit of communication is a message (mapped to one or more UDP packets). There is congestion control built in. The protocol has knobs and twiddles to switch on

* in-order delivery of messages
* automatic retransmission of lost messages, with user-defined parameters

if any of this is needed for your particular application. One issue with this is that the connection establishment is a complicated (and therefore slow) process.

Other similar stuff:

* https://en.wikipedia.org/wiki/Reliable_User_Datagram_Protocol

One more similar proprietary experimental thing:

* https://en.wikipedia.org/wiki/QUIC

This also tries to improve on the three-way handshake of TCP and change the congestion control to better deal with fast lines.

Update 2022: QUIC and HTTP/3. QUIC (mentioned above) has been standardized through RFCs and has even become the basis of HTTP/3 since the original answer was written. There are various libraries such as lucas-clemente/quic-go, microsoft/msquic, google/quiche, or mozilla/neqo (web browsers need to implement this). These libraries expose reliable TCP-like streams to the programmer on top of the UDP transport. RFC 9221 (An Unreliable Datagram Extension to QUIC) adds support for working with individual unreliable data packets.

A: Keep in mind that TCP usually keeps multiple messages on the wire. If you want to implement this in UDP, you'll have quite a lot of work if you want to do it reliably. Your solution is either going to be less reliable, less fast, or an incredible amount of work. There are valid applications of UDP, but if you're asking this question, yours probably is not.

A: If you need to quickly blast a message across the net between two IPs that haven't even talked yet, then a UDP datagram is going to arrive at least 3 times faster, usually 5 times faster.

A: It is meaningless to talk about TCP or UDP without taking the network conditions into account. If the network between the two points has very high quality, UDP is absolutely faster than TCP, but in some other cases, such as a GPRS network, TCP may be faster and more reliable than UDP.

A: "with loss tolerant" - do you mean "with loss tolerance"? Basically, UDP is not "loss tolerant".
You can send 100 packets to someone, and they might only get 95 of those packets, and some might be in the wrong order. For things like video streaming and multiplayer gaming, where it is better to miss a packet than to delay all the other packets behind it, this is the obvious choice. For most other things, though, a missing or 'rearranged' packet is critical. You'd have to write some extra code to run on top of UDP to retry if things got missed, and to enforce correct order. This would add a small bit of overhead in certain places. Thankfully, some very, very smart people have done this, and they called it TCP. Think of it this way: if a packet goes missing, would you rather just get the next packet as quickly as possible and continue (use UDP), or do you actually need that missing data (use TCP)? The overhead won't matter unless you're in a really edge-case scenario.

A: People say that the major thing TCP gives you is reliability. But that's not really true. The most important thing TCP gives you is congestion control: you can run 100 TCP connections across a DSL link, all going at max speed, and all 100 connections will be productive, because they all "sense" the available bandwidth. Try that with 100 different UDP applications, all pushing packets as fast as they can go, and see how well things work out for you. On a larger scale, this TCP behavior is what keeps the Internet from locking up into "congestion collapse". Things that tend to push applications towards UDP:

* Group delivery semantics: it's possible to do reliable delivery to a group of people much more efficiently than TCP's point-to-point acknowledgement.
* Out-of-order delivery: in lots of applications, as long as you get all the data, you don't care what order it arrives in; you can reduce app-level latency by accepting an out-of-order block.
* Unfriendliness: on a LAN party, you may not care if your web browser functions nicely as long as you're blitting updates to the network as fast as you possibly can.

But even if you care about performance, you probably don't want to go with UDP:

* You're on the hook for reliability now, and a lot of the things you might do to implement reliability can end up being slower than what TCP already does.
* Now you're network-unfriendly, which can cause problems in shared environments.
* Most importantly, firewalls will block you.

You can potentially overcome some TCP performance and latency issues by "trunking" multiple TCP connections together; iSCSI does this to get around congestion control on local area networks, but you can also do it to create a low-latency "urgent" message channel (TCP's "URGENT" behavior is totally broken).

A: When speaking of "what is faster" - there are at least two very different aspects: throughput and latency. If speaking about throughput - TCP's flow control (as mentioned in other answers) is extremely important, and doing anything comparable over UDP, while certainly possible, would be a Big Headache(tm). As a result - using UDP when you need throughput rarely qualifies as a good idea (unless you want to get an unfair advantage over TCP). However, if speaking about latencies - the whole thing is completely different. While in the absence of packet loss TCP and UDP behave extremely similarly (any differences, if any, being marginal) - after the packet is lost, the whole pattern changes drastically.
After any packet loss, TCP will wait for retransmit for at least 200ms (1sec per paragraph 2.4 of RFC6298, but practical modern implementations tend to reduce it to 200ms). Moreover, with TCP, even those packets which did reach destination host - will not be delivered to your app until the missing packet is received (i.e., the whole communication is delayed by ~200ms) - BTW, this effect, known as Head-of-Line Blocking, is inherent to all reliable ordered streams, whether TCP or reliable+ordered UDP. To make things even worse - if the retransmitted packet is also lost, then we'll be speaking about delay of ~600ms (due to so-called exponential backoff, 1st retransmit is 200ms, and second one is 200*2=400ms). If our channel has 1% packet loss (which is not bad by today's standards), and we have a game with 20 updates per second - such 600ms delays will occur on average every 8 minutes. And as 600ms is more than enough to get you killed in a fast-paced game - well, it is pretty bad for gameplay. These effects are exactly why gamedevs often prefer UDP over TCP. However, when using UDP to reduce latencies - it is important to realize that merely "using UDP" is not sufficient to get substantial latency improvement, it is all about HOW you're using UDP. In particular, while RUDP libraries usually avoid that "exponential backoff" and use shorter retransmit times - if they are used as a "reliable ordered" stream, they still have to suffer from Head-of-Line Blocking (so in case of a double packet loss, instead of that 600ms we'll get about 1.5*2*RTT - or for a pretty good 80ms RTT, it is a ~250ms delay, which is an improvement, but it is still possible to do better). On the other hand, if using techniques discussed in http://gafferongames.com/networked-physics/snapshot-compression/ and/or http://ithare.com/udp-from-mog-perspective/#low-latency-compression , it IS possible to eliminate Head-of-Line blocking entirely (so for a double-packet loss for a game with 20 updates/second, the delay will be 100ms regardless of RTT). And as a side note - if you happen to have access only to TCP but no UDP (such as in browser, or if your client is behind one of 6-9% of ugly firewalls blocking UDP) - there seems to be a way to implement UDP-over-TCP without incurring too much latencies, see here: http://ithare.com/almost-zero-additional-latency-udp-over-tcp/ (make sure to read comments too(!)). A: Which protocol performs better (in terms of throughput) - UDP or TCP - really depends on the network characteristics and the network traffic. Robert S. Barnes, for example, points out a scenario where TCP performs better (small-sized writes). Now, consider a scenario in which the network is congested and has both TCP and UDP traffic. Senders in the network that are using TCP, will sense the 'congestion' and cut down on their sending rates. However, UDP doesn't have any congestion avoidance or congestion control mechanisms, and senders using UDP would continue to pump in data at the same rate. Gradually, TCP senders would reduce their sending rates to bare minimum and if UDP senders have enough data to be sent over the network, they would hog up the majority of bandwidth available. So, in such a case, UDP senders will have greater throughput, as they get the bigger pie of the network bandwidth. In fact, this is an active research topic - How to improve TCP throughput in presence of UDP traffic. One way, that I know of, using which TCP applications can improve throughput is by opening multiple TCP connections. 
That way, even though each TCP connection's throughput might be limited, the sum total of the throughput of all TCP connections may be greater than the throughput of an application using UDP.

A: The network setup is crucial for any measurements. It makes a huge difference if you are communicating via sockets on your local machine or with the other end of the world. Three things I want to add to the discussion:

* You can find here a very good article about TCP vs. UDP in the context of game development.
* Additionally, iperf (jperf enhances iperf with a GUI) is a very nice tool for answering your question yourself by measuring.
* I implemented a benchmark in Python (see this SO question). On average over 10^6 iterations, the difference for sending 8 bytes is about 1-2 microseconds for UDP.

A: Each TCP connection requires an initial handshake before data is transmitted. Also, the TCP header contains a lot of overhead intended for different signals and message-delivery detection. For a message exchange, UDP will probably suffice if a small chance of failure is acceptable. If receipt must be verified, TCP is your best option.

A: I will just make things clear. TCP and UDP are two cars being driven on the road. Suppose that traffic signs and obstacles are errors. TCP cares for traffic signs and respects everything around it. It drives slowly because something may happen to the car. UDP just drives off at full speed with no respect for street signs - a mad driver. UDP doesn't have error recovery: if there's an obstacle, it will just collide with it and then continue. TCP, by contrast, makes sure that all packets are sent and received perfectly, with no errors, so the car just passes obstacles without colliding. I hope this is a good example for you to understand why UDP is preferred in gaming. Gaming needs speed. TCP is preferred for downloads - otherwise downloaded files may be corrupted.

A: In some applications TCP is faster (better throughput) than UDP. This is the case when doing lots of small writes relative to the MTU size. For example, I read about an experiment in which a stream of 300-byte packets was being sent over Ethernet (1500-byte MTU) and TCP was 50% faster than UDP. The reason is that TCP will try to buffer the data and fill a full network segment, thus making more efficient use of the available bandwidth. UDP, on the other hand, puts the packet on the wire immediately, thus congesting the network with lots of small packets. You probably shouldn't use UDP unless you have a very specific reason for doing so. Especially since you can give TCP the same sort of latency as UDP by disabling the Nagle algorithm (for example, if you're transmitting real-time sensor data and you're not worried about congesting the network with lots of small packets).
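The last answer mentions disabling the Nagle algorithm; on POSIX systems that is a per-socket option, roughly like this (a minimal sketch, error handling elided):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Disable Nagle's algorithm so small writes are sent immediately
       instead of being coalesced into larger segments. */
    int disable_nagle(int sockfd) {
        int one = 1;
        return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY,
                          &one, sizeof(one));
    }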
{ "language": "en", "url": "https://stackoverflow.com/questions/47903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "222" }
Q: Sources of good, free icons/images for applications & web apps with permissive license? I'm interested in finding good icons/images that can be used in both 'free' and proprietary programs. Please include a description of any license restrictions associated with the source of the icons you suggest.

A: I use two search engines: IconFinder and IconLook. If you can't find what you want, this blog post has a list of great resources.

A: I've used Silk Icons (http://www.famfamfam.com/lab/icons/silk/) on a few projects. It's covered under the Creative Commons license, so you will have to include a link back to the site somewhere in your app.

A: http://www.iconarchive.com has a nice selection.

A: I used Font Awesome to find icons.

A: I've had the best luck with Icon Buffet and StockIcons.

A: A good starting point, and a nice stock icon site. Some require payment, but there are lots of free sources out there. However, one of your highest priorities for a commercial application is that it looks good enough to buy - $30 for a good set is cheap compared to the time it takes you to research and find a set, never mind the time it'd take for you to make them yourself. -Adam
{ "language": "en", "url": "https://stackoverflow.com/questions/47915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Organization of C files I'm used to doing all my coding in one C file. However, I'm working on a project large enough that it becomes impractical to do so. I've been #including them together, but I've run into cases where I'm #including some files multiple times, etc. I've heard of .h files, but I'm not sure what their function is (or why having 2 files is better than 1). What strategies should I use for organizing my code? Is it possible to separate "public" functions from "private" ones for a particular file? This question precipitated my inquiry. The tea.h file makes no reference to the tea.c file. Does the compiler "know" that every .h file has a corresponding .c file?

A: Compiler

You can see an example of a C 'module' at this topic - note that there are two files: the header tea.h, and the code tea.c. You declare all the public defines, variables, and function prototypes that you want other programs to access in the header. In your main project you #include "tea.h", and that code can now access the functions and variables of the tea module that are mentioned in the header. It gets a little more complex after that. If you're using Visual Studio or one of the many other IDEs that manage your build for you, then ignore this part - they take care of compiling and linking objects.

Linker

When you compile two separate C files, the compiler produces individual object files - so main.c becomes main.o, and tea.c becomes tea.o. The linker's job is to look at all the object files (your main.o and tea.o) and match up the references - so when you call a tea function in main, the linker modifies that call so it actually does call the right function in tea. The linker produces the executable file. There is a great tutorial that goes into more depth on this subject, including scope and other issues you'll run into. Good luck! -Adam

A: A couple of simple rules to start:

* Put those declarations that you want to make "public" into the header file for the C implementation file you are creating.
* Only #include the header files in the C file that are needed to implement the C file.
* Include header files in a header file only if required for the declarations within that header file.
* Use the include guard method described by Andrew, OR use #pragma once if the compiler supports it (which does the same thing - sometimes more efficiently).

A: You should regard .h files as interface files of your .c files. Every .c file represents a module with a certain amount of functionality. If functions in a .c file are used by other modules (i.e. other .c files), put the function prototype in the .h interface file. By including the interface file in your original module's .c file and in every other .c file you need the function in, you make this function available to other modules. If you only need a function in a certain .c file (not in any other module), declare its scope static. This means it can only be called from within the .c file it is defined in. The same goes for variables that are used across multiple modules. They should go in the header file, and there they have to be marked with the keyword 'extern'. Note: for functions, the keyword 'extern' is optional. Functions are always considered 'extern'. The inclusion guards in header files help to avoid including the same header file multiple times.
For example:

Module1.c:

#include "Module1.h"

static void MyLocalFunction(void);
static unsigned int MyLocalVariable;

unsigned int MyExternVariable;

void MyExternFunction(void)
{
    MyLocalVariable = 1u;
    /* Do something */
    MyLocalFunction();
}

static void MyLocalFunction(void)
{
    /* Do something */
    MyExternVariable = 2u;
}

Module1.h:

#ifndef MODULE1_H
#define MODULE1_H

extern unsigned int MyExternVariable;

void MyExternFunction(void);

#endif

Module2.c:

#include "Module1.h"

static void MyLocalFunction(void);

static void MyLocalFunction(void)
{
    MyExternVariable = 1u;
    MyExternFunction();
}

A: To answer your additional question:

This question precipitated my inquiry. The tea.h file makes no reference to the tea.c file. Does the compiler "know" that every .h file has a corresponding .c file?

The compiler is not primarily concerned with header files. Each invocation of the compiler compiles a source (.c) file into an object (.o) file. Behind the scenes (i.e. in the make file or project file) a command line equivalent to this is being generated:

compiler --options tea.c

The source file #includes all the header files for the resources it references, which is how the compiler finds header files. (I'm glossing over some details here. There is a lot to learn about building C projects.)

A: As well as the answers supplied above, one small advantage of splitting up your code into modules (separate files) is that if you have to have any global variables, you can limit their scope to a single module by the use of the keyword 'static'. (You could also apply this to functions.) Note that this use of 'static' is different from its use inside a function.

A: Your question makes it clear that you haven't really done much serious development. The usual case is that your code will generally be far too large to fit into one file. A good rule is that you should split the functionality into logical units (.c files) and each file should contain no more than what you can easily hold in your head at one time.

A given software product then generally includes the output from many different .c files. How this is normally done is that the compiler produces a number of object files (in unix systems ".o" files, VC generates .obj files). It is the purpose of the "linker" to compose these object files into the output (either a shared library or executable).

Generally your implementation (.c) files contain actual executable code, while the header files (.h) have the declarations of the public functions in those implementation files. You can quite easily have more header files than there are implementation files, and sometimes header files can contain inline code as well.

It is generally quite unusual for implementation files to include each other. A good practice is to ensure that each implementation file separates its concerns from the other files.

I would recommend you download and look at the source for the linux kernel. It is quite massive for a C program, but well organised into separate areas of functionality.

A: Try to make each .c focus on a particular area of functionality. Use the corresponding .h file to declare those functions.

Each .h file should have a 'header' guard around its content. For example:

#ifndef ACCOUNTS_H
#define ACCOUNTS_H
....
#endif

That way you can include "accounts.h" as many times as you want, and the first time it's seen in a particular compilation unit will be the only one that actually pulls in its content.

A: The .h files should be used to define the prototypes for your functions.
This is necessary so you can include the prototypes that you need in your C-file without declaring every function that you need all in one file. For instance, when you #include <stdio.h>, this provides the prototypes for printf and other IO functions. The symbols for these functions are normally loaded by the compiler by default. You can look at the system's .h files under /usr/include if you're interested in the normal idioms involved with these files. If you're only writing trivial applications with not many functions, it's not really necessary to modularize everything out into logical groupings of procedures. However, if you have the need to develop a large system, then you'll need to pay some consideration as to where to define each of your functions.
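To make the compile-and-link split described above concrete, here is what a minimal two-file build looks like from the command line (a sketch assuming gcc and the tea example mentioned earlier; the file and program names are illustrative):

gcc -c main.c -o main.o
gcc -c tea.c -o tea.o
gcc main.o tea.o -o myprogram

Each .c file is compiled independently against the declarations in the headers it #includes; only the final link step resolves the calls in main.o to the definitions in tea.o.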
{ "language": "en", "url": "https://stackoverflow.com/questions/47919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Combining and Caching multiple JavaScript files in ASP.net Either I had a bad dream recently or I am just too stupid to google, but I remember that someone somewhere wrote that ASP.net has a function which allows "merging" multiple JavaScript files automatically and only delivering one file to the client, thus reducing the number of HTTP requests. Server side, you still keep all the individual .js files, but the runtime itself then creates one big JavaScript file which is then included in the script tag instead and can be properly cached etc.

In case this function really exists and is not just a product of my imagination, can someone point me in the right direction please?

A: You can find a useful article on it here.

A: .NET 4.5 has built-in support for Bundling and Minification.

A: It's called Script Combining. There is a video example from asp.net explaining it here.
{ "language": "en", "url": "https://stackoverflow.com/questions/47937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Invalid iPhone Application Binary I'm trying to upload an application to the iPhone App Store, but I get this error message from iTunes Connect:

The binary you uploaded was invalid. The signature was invalid, or it was not signed with an Apple submission certificate.

Note: The details of the original question have been removed, as this page has turned into a repository for all information about possible causes of that particular error message. For general information on submitting iPhone applications to the App Store, see Steps to upload an iPhone application to the AppStore.

A: Same problem, different solution. In my case, I was compressing the file using

zip -r myapp.zip myapp.app

Turns out, the zip command screwed the bundle. Compressing it from the Finder made it work.

A: I had the same issue and after trying several things - I removed the .plist entitlements from the Code Signing Entitlements (just left it blank) and it built fine and uploaded FINALLY. Good luck all :-D

A: Another data point: for a while, my app went through. Now I've added support for in-app purchases, and suddenly it fails with an "Invalid binary/invalid signature" problem. Upon careful looking, I found out that the value of application-identifier in the entitlements plist file was off. This, most likely, had to do with the fact that I've replaced the provisioning profile from a wildcarded one to an app-specific one (required for in-app purchases). The wrong app ID qualified under the old profile. It did not match the app ID in the info.plist, but apparently iTunes forgave that. So, to recap:

info.plist: com.mydomain.foo
dist.plist: com.mydomain.bar
Profile: com.mydomain.*

is OK, while

info.plist: com.mydomain.foo
dist.plist: com.mydomain.bar
Profile: com.mydomain.foo

causes "Invalid binary".

A: I had the same problem as well. When building, I noticed the provisioning profile wasn't added to the build. The fix for me was to set the build to the iPhone device, whereas I normally use the simulator - with the simulator it won't include the provisioning profile... This might be a noob mistake. Normally you can't build to device, but when you do it for distribution you can.

A: See this link for the solution: http://greghaygood.com/2010/09/04/invalid-binary-message-from-itunesconnect

The short answer is that "Eventually I double-checked my info.plist and discovered something. I added CFBundleIconFiles per the new guidelines, but there was an empty entry in the array list. I removed that and re-submitted, and it was finally accepted!"

A: Well, after repeating the steps several times, I was finally successful in uploading my app. I don't know exactly what fixed it, but prior to the successful attempt, I closed Xcode and Firefox and restarted them. I guess one of those apps had some bad juju.

A: Here's an issue I ran into: I added the binary to Subversion before uploading. Compressing/zipping the binary then included the hidden .svn directories, which messed up the code signing.

A: I tried various things after reading various posts including those above. What finally worked for me was starting completely over! I deleted every certificate and provisioning profile associated with my app. I recreated a new development certificate and a new distribution certificate. I downloaded the intermediate certificate again. Then I recreated both the development profile and the distribution profile.
After installing the three certificates (I noticed the distribution had both private and public keys this time) and the two provisioning profiles (my distribution profile didn't get flagged as not having a valid certificate!), everything worked. Once I made the decision to revoke everything and just start over, it only took about 5 minutes to create the new stuff and re-install.

A: I had a similar issue but in Monotouch. I found that my Release profile was set to use developer certs. It should look like this:

A: It's been my experience that Xcode occasionally gets confused about which signing certificate to use. I got into the habit of quitting and restarting Xcode after any change to the code signing settings (and doing a clean build) to work around this problem.

A: It seems this issue has many causes. Here's the solution to mine: This applies to anyone who belongs to multiple development teams (e.g. your own apps, and your company's). If you build with one set of credentials and re-sign the build with a different one (e.g. for adhoc/appstore distribution), you must ensure that the build was originally built & signed with credentials belonging to the same iOS development team that the distribution credentials you are re-signing with belong to. So don't build with "Indy Dev Inc" credentials then try to deploy with "Company Inc" credentials. Make sure you set up both "Company Inc" dev and distribution credentials, and use them. I posted more info about this on my blog: http://omegadelta.net/2011/06/09/fiendish-ios-code-signing-invalid-binary-issue/

A: I just wanted to mention that I too had the problem with zip from the command line. The problem lies in the way it handles symlinks by default. Using:

zip -y -r myapp.zip myapp.app

solved that problem.

A: I had the same problem. I was ready to throw in the towel on this problem but I figured it out when I went to check in my code using Murky. I always skim the diffs on the files that changed before I check in. When doing so this time I noticed that the project.pbxproj file had changed....and in the Distribution section the entry for "PROVISIONING_PROFILE[sdk=iphoneos*]" was blank. Quitting and restarting Xcode didn't work for me. Instead, I went into both my project and target settings and changed the code signing to directly select my Distribution profile rather than relying on the auto-select feature. Doing this caused the project.pbxproj file to populate with the correct values even though the auto-select feature supposedly selected the exact same profile that I selected manually. I need a beer...

A: After trying all of the other fixes listed here we logged a TSI with Apple. Having followed all the steps in Technical Note TN2250 our problem was caused because a sealed resource was missing or invalid. In our case it was ._.DS_Store. The "._" prefix denotes an Apple Double file, which is the result of copying the Xcode project folder, *unzipped*, onto and back from a file system that doesn't properly support HFS+'s 'resource forks' (used for code signatures). These extra "._" files result and cause code signing verification failure. To clean the problematic Apple Double files from your Xcode project folder, run the dot_clean command on your Xcode project's folder, do a clean build, and then rearchive and reattempt your submission.
dot_clean /the/path/to/xcode/project

Note: You can just drag the project folder into the terminal to automatically populate the path.

There is no message when you run the command, but the project build might show a warning about the file when you next build. You can ignore this; the app will validate and submit successfully.

A: I had the same issue and solved it this way: The proper certificates were installed on my development machine and mobileprovision.embedded was included in the distribution archive. After an hour or so of Googling and digging I found the source of the error. Inside Xcode I had copied the Release configuration and created a new Distribution configuration and then changed the signing identity to my distribution certificate. However, even though it was updated in the GUI, the project file was not updated correctly. If you come across the same error, look in your [ProjectName].xcodeproj directory for the project.pbxproj file and open it in your favorite editor. Look for the Distribution section. My broken one looked like this:

C384C90C0F9939FA00E76E41 /* Distribution */ = {
    isa = XCBuildConfiguration;
    buildSettings = {
        ARCHS = "$(ARCHS_STANDARD_32_BIT)";
        CODE_SIGN_ENTITLEMENTS = "";
        "CODE_SIGN_IDENTITY[sdk=iphoneos*]" = "iPhone Distribution: Edward McCreary";
        GCC_C_LANGUAGE_STANDARD = c99;
        GCC_WARN_ABOUT_RETURN_TYPE = YES;
        GCC_WARN_UNUSED_VARIABLE = YES;
        PREBINDING = NO;
        "PROVISIONING_PROFILE[sdk=iphoneos*]" = "F00D3778-32B2-4550-9FCE-1A4090344400";
        SDKROOT = iphoneos2.2.1;
    };
    name = Distribution;
};
C384C90D0F9939FA00E76E41 /* Distribution */ = {
    isa = XCBuildConfiguration;
    buildSettings = {
        ALWAYS_SEARCH_USER_PATHS = NO;
        CODE_SIGN_IDENTITY = "iPhone Developer: Edward McCreary";
        "CODE_SIGN_IDENTITY[sdk=iphoneos*]" = "iPhone Developer: Edward McCreary";
        COPY_PHASE_STRIP = YES;
        GCC_PRECOMPILE_PREFIX_HEADER = YES;
        GCC_PREFIX_HEADER = GenPass_Prefix.pch;
        INFOPLIST_FILE = Info.plist;
        PRODUCT_NAME = GenPass;
        PROVISIONING_PROFILE = "DB12BCA7-FE72-42CA-9C2B-612F76619788";
        "PROVISIONING_PROFILE[sdk=iphoneos*]" = "DB12BCA7-FE72-42CA-9C2B-612F76619788";
    };
    name = Distribution;
};

You can see the signing identity and provisioning profile are incorrect in the second section. Edit it to match the first section, rebuild, and you should be good to go. The final one looked like this:

C384C90C0F9939FA00E76E41 /* Distribution */ = {
    isa = XCBuildConfiguration;
    buildSettings = {
        ARCHS = "$(ARCHS_STANDARD_32_BIT)";
        CODE_SIGN_ENTITLEMENTS = "";
        "CODE_SIGN_IDENTITY[sdk=iphoneos*]" = "iPhone Distribution: Edward McCreary";
        GCC_C_LANGUAGE_STANDARD = c99;
        GCC_WARN_ABOUT_RETURN_TYPE = YES;
        GCC_WARN_UNUSED_VARIABLE = YES;
        PREBINDING = NO;
        "PROVISIONING_PROFILE[sdk=iphoneos*]" = "F00D3778-32B2-4550-9FCE-1A4090344400";
        SDKROOT = iphoneos2.2.1;
    };
    name = Distribution;
};
C384C90D0F9939FA00E76E41 /* Distribution */ = {
    isa = XCBuildConfiguration;
    buildSettings = {
        ALWAYS_SEARCH_USER_PATHS = NO;
        CODE_SIGN_IDENTITY = "iPhone Distribution: Edward McCreary";
        "CODE_SIGN_IDENTITY[sdk=iphoneos*]" = "iPhone Distribution: Edward McCreary";
        COPY_PHASE_STRIP = YES;
        GCC_PRECOMPILE_PREFIX_HEADER = YES;
        GCC_PREFIX_HEADER = GenPass_Prefix.pch;
        INFOPLIST_FILE = Info.plist;
        PRODUCT_NAME = GenPass;
        PROVISIONING_PROFILE = "F00D3778-32B2-4550-9FCE-1A4090344400";
        "PROVISIONING_PROFILE[sdk=iphoneos*]" = "F00D3778-32B2-4550-9FCE-1A4090344400";
    };
    name = Distribution;
};

guids changed to protect the innocent

A: I was having a similar problem, but I don't use entitlements.plist.
However, after a dozen failed uploads, I checked my info.plist and discovered something. My CFBundleIconFiles array had an empty entry. I removed that and re-submitted, and it was finally accepted! Seriously, how hard would it be for Apple to expose those kinds of validation errors?

Edit: It's not immediately apparent where the CFBundleIconFiles are, because they use a different name. In the project info view, Ctrl-click and select "Show Raw Keys/Values", and then you will see the references to CFBundleWhatever. In this editor's case, he was trying to use a non-existent [email protected] file.

A: Resolved this by cleaning up the myProject.xcodeproj file (right click, open package); the package contained files from a co-developer, and after deleting these the problem was solved.

A: For me the solution was creating a distribution certificate at the Apple Developer Provisioning Portal.

A: I received an invalid binary when the app did not use remote push notifications, but I had left the code for registering push and the callback delegates for registering/receiving remote notifications uncommented, even though the code never gets used. This is recent. My last submission last week was fine. This week, it returns invalid binary. Luckily, there is an email that explains the error.

A: For what it is worth, I want to add what it was that fixed this issue for me. I had a ? (question mark) in my app title that was causing the error.

A: I tried all other proposed solutions, but nothing helped. I ended up creating a new Xcode project and copying all my code and resources into it. That did the trick, and my app got placed into the review queue. I can also recommend Apple's technical notes on code signing for debugging/verification.

A: Just had this problem today but the answers here didn't help. I finally found the problem. Make sure you use the pull-down menu Project > Edit Active Target "ProjectName" to change Code Signing to Distribution - I was selecting the Project in the Groups & Files pane and using the Info button, which shows the PROJECT info rather than the TARGET info - very confusing! Only realized when I turned code signing off in the project, built, and it still wanted to code sign! I think this is why in Eddie's post he had to change it at the project.pbxproj level.

Also on the original post, in the 1st step:

1. In Xcode, select the Device|Release target

Surely it should be the Device|Distribution target? (assuming this copied Release and renamed it Distribution as per Apple's instructions in the Provisioning Portal)

A: My two cents: Download the latest version of the Application Loader. I've just updated and now get a different error message.

A: I just went through this hassle (again) but this time I found that my distribution profile had a status of "Invalid". If you think everything else is right, double-check the status in the portal and renew/re-download anything that isn't in the Active state.

A: I received an Invalid Binary after an app upload, with no e-mail followup as to why it failed. I tried doing a couple things at once, and I'm not sure which of the following actually fixed it:

*Restarted MacBook Pro
*Moved the source code for my project from an NTFS drive to an HFS+ drive and recompiled.

A: I had a problem with this and the 4.3 GM SDK. One of our apps would not make it past upload received. It turned out to be a provisioning profile issue. I regenerated the app store profile and it worked fine.

A: My solution involved creating a new App ID.
I'm not sure exactly why that fixed it, but I suspect it may have been mismatched Bundle Identifiers - creating the new App ID forced me to make sure my app and iTunes were expecting the same thing.

A: Another solution: For me simply setting the 'Release' certificates under 'code signing' fixed it. They were initially set to 'Don't code sign'.

A: For me the problem was solved by resaving a PNG image with the non-interlaced option. In previous versions interlaced PNGs were allowed, but now these images can cause the invalid binary. My Apple message:

Corrupt Icon File - The icon file [email protected] appears to be corrupt. Your icon must not be an interlaced PNG file.

You can see if the PNG is interlaced using the command "file" in the terminal:

Eva-Madrazos-MacBook-Pro-2:GQ 7 integracion ads Eva$ file *.png
Default.png: PNG image data, 320 x 480, 8-bit/color RGB, non-interlaced

Good luck,
Eva

A: I want to point out the possibility of emailing Apple and asking them to check their logs. I did just that, after having tried loads of things first. It was necessary to remind them after almost four weeks, but finally they replied and pointed to the exact spot of the issue.

The problem in my case was that I had previously tried other app icons, and a reference to the old image still remained in 'CFBundleIcons'. I used the drag and drop functionality to set the icon, but I didn't notice that the old content wasn't completely cleared before the new reference was added. To see the faulty reference it was necessary to expand the arrows to view each and every sub-element in the plist file. One tip is to right-click in the file and select the option for viewing the raw content. In that way you will not need to expand anything.

A: UUID access is not allowed. I fixed it by removing all calls to [[UIDevice currentDevice] uniqueIdentifier];

A: As of May 1st 2013, Apple updated their iOS Human Interface Guidelines so that if you wish to upload a new app or an update, it must be iPhone 5 (4-inch) friendly - meaning it should not be a 3.5-inch app running on the bigger screen. From Apple:

Dear developer,

We have discovered one or more issues with your recent delivery for "-------------". To process your delivery, the following issues must be corrected:

iPhone 5 Optimization Requirement - Your binary is not optimized for iPhone 5. As of May 1, all new iPhone apps and app updates submitted must support the 4-inch display on iPhone 5. All apps must include a launch image of the appropriate size. Learn more about iPhone 5 support by reviewing the iOS Human Interface Guidelines.

Once these issues have been corrected, go to the Version Details page and click "Ready to Upload Binary." Continue through the submission process until the app status is "Waiting for Upload." You can then deliver the corrected binary.

Regards,
The App Store team

A: There is one more instance when the binary will be deemed invalid. Starting February 1, 2015 new iOS apps need to support the 64-bit architecture. Here is the email from Apple:

Dear developer,

We have discovered one or more issues with your recent delivery for "Home - Recruitment". To process your delivery, the following issues must be corrected:

Missing 64-bit support - Beginning on February 1, 2015 new iOS apps submitted to the App Store must include 64-bit support and be built with the iOS 8 SDK. Beginning June 1, 2015 app updates will also need to follow the same requirements.
To enable 64-bit in your project, we recommend using the default Xcode build setting of "Standard architectures" to build a single binary with both 32-bit and 64-bit code.

Once these issues have been corrected, you can then redeliver the corrected binary.

Regards,
The App Store team

A: In my case, it was the TestFlight SDK included in the project binary. I created a new project from a different old project's source (which included TestFlight), but since this project is a new one with new IDs, the TestFlight SDK is no longer allowed here. I removed it, then archived and uploaded it again. No "invalid binary" error this time.
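One general sanity check that applies to several of the causes above (stray .svn directories, Apple Double ._ files, broken seals): before zipping and uploading, you can verify the code signature locally with the codesign tool (the app name here is a placeholder):

codesign --verify --verbose MyApp.app

If a sealed resource is missing or modified, this prints the offending file instead of leaving you to guess from iTunes Connect's generic "invalid binary" message.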
{ "language": "en", "url": "https://stackoverflow.com/questions/47941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "78" }
Q: What are the advantages of packaging your python library/application as an .egg file? I've read some about .egg files and I've noticed them in my lib directory, but what are the advantages/disadvantages of using them as a developer?

A: One egg by itself is not better than a proper source release. The good part is the dependency handling. Like debian or rpm packages, you can say you depend on other eggs and they'll be installed automatically (through pypi.python.org).

A second comment: the egg format itself is a binary packaged format. Normal python packages that consist of just python code are best distributed as "source releases", so "python setup.py sdist", which results in a .tar.gz. These are also commonly called "eggs" when uploaded to pypi.

Where you need binary eggs: when you're bundling some C code extension. You'll need several binary eggs (a 32bit unix one, a windows one, etc.) then.

A: Eggs are a pretty good way to distribute python apps. Think of it as a platform independent .deb file that will install all dependencies and whatnot. The advantage is that it's easy to use for the end user. The disadvantage is that it can be cumbersome to package your app up as a .egg file.

You should also offer an alternative means of installation in addition to .eggs. There are some people who don't like using eggs because they don't like the idea of a software program installing whatever software it wants. These usually tend to be sysadmin types.

A: From the Python Enterprise Application Kit community:

"Eggs are to Pythons as Jars are to Java..."

Python eggs are a way of bundling additional information with a Python project, that allows the project's dependencies to be checked and satisfied at runtime, as well as allowing projects to provide plugins for other projects. There are several binary formats that embody eggs, but the most common is the '.egg' zipfile format, because it's a convenient one for distributing projects. All of the formats support including package-specific data, project-wide metadata, C extensions, and Python code.

The primary benefits of Python Eggs are:

*They enable tools like the "Easy Install" Python package manager
*.egg files are a "zero installation" format for a Python package; no build or install step is required, just put them on PYTHONPATH or sys.path and use them (may require the runtime installed if C extensions or data files are used)
*They can include package metadata, such as the other eggs they depend on
*They allow "namespace packages" (packages that just contain other packages) to be split into separate distributions (e.g. zope.*, twisted.*, peak.* packages can be distributed as separate eggs, unlike normal packages which must always be placed under the same parent directory. This allows what are now huge monolithic packages to be distributed as separate components.)
*They allow applications or libraries to specify the needed version of a library, so that you can e.g. require("Twisted-Internet>=2.0") before doing an import twisted.internet.
*They're a great format for distributing extensions or plugins to extensible applications and frameworks (such as Trac, which uses eggs for plugins as of 0.9b1), because the egg runtime provides simple APIs to locate eggs and find their advertised entry points (similar to Eclipse's "extension point" concept).
*There are also other benefits that may come from having a standardized format, similar to the benefits of Java's "jar" format.

-Adam

A: .egg files are basically a nice way to deploy your python application.
You can think of it as something like .jar files for Java. More info here.

A: Whatever you do, do not stop also distributing your application as a tarball, as that is the easiest packageable format for operating systems with a package system.

A: For simple Python programs, you probably don't need to use eggs. Distributing the raw .py files should suffice; it's like distributing source files for GNU/Linux. You can also use the various OS "packagers" (like py2exe or py2app) to create .exe, .dmg, or other files for different operating systems. More complex programs, e.g. Django, pretty much require eggs due to the various modules and dependencies required.
{ "language": "en", "url": "https://stackoverflow.com/questions/47953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: What are the most useful (custom) code snippets for C#? What are the best code snippets for C#? (using Visual Studio) VB has a lot that are pre-defined, but there are only a handful for C#. Do you have any really useful ones for C#? Anyone want to post a good custom one you created yourself? Anyone?... Bueller?

A: My absolute favorite is cw.

A: There are plenty of code snippets within Visual Studio for basic programming structures, but I wouldn't necessarily rate one higher than another. I would definitely say the best ones are the custom snippets you define yourself to accomplish more specific tasks that you may find yourself using on a regular basis. Definitely a big time saver. A fairly basic intro to creating custom snippets can be found at http://www.15seconds.com/issue/080724.htm to help with this.

A: Microsoft have released a whole bunch of C# snippets that bring it up to parity with the ones for Visual Basic. You can download them here: http://msdn.microsoft.com/en-us/library/z41h7fat.aspx

A: These are the ones I use daily.

*prop
*try
*if
*else
*for
*foreach
*mbox - Message box stub
*The ability to roll your own. I have ones for properties that are saved in the view state, for methods, and a custom class example.

A: I had a few on my old blog:

*testmethod Code Snippet
*onevent Code Snippet
*cleantestresults Code Snippet
*astype Code Snippet

I also have an argnull code snippet that inserts a Guard Clause that checks an argument for null and throws an ArgumentNullException, but I haven't gotten around to posting that yet.

A: prop and exception are my favorites.

A: I just started a blog, where I document short solutions in C# (code snippets) that I came up with and might prove useful to other coders. http://thorstenlorenz.blogspot.com/ So far I have mostly blogged about extension methods and generics. So have a look and tell me what you think.

A: Just to update an older thread... here's a link for the Visual Studio 2008 C# code snippet download. VS 2008 C# Code Snippet Download
{ "language": "en", "url": "https://stackoverflow.com/questions/47960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What are some advantages of duck-typing vs. static typing? I'm researching and experimenting more with Groovy and I'm trying to wrap my mind around the pros and cons of implementing things in Groovy that I can't/don't do in Java. Dynamic programming is still just a concept to me since I've been deeply steeped in static and strongly typed languages. Groovy gives me the ability to duck-type, but I can't really see the value. How is duck-typing more productive than static typing? What kind of things can I do in my code practice to help me grasp the benefits of it? I ask this question with Groovy in mind but I understand it isn't necessarily a Groovy question so I welcome answers from every code camp.

A: There is nothing wrong with static typing if you are using Haskell, which has an incredible static type system. However, if you are using languages like Java and C++ that have terribly crippling type systems, duck typing is definitely an improvement. Imagine trying to use something so simple as "map" in Java (and no, I don't mean the data structure). Even generics are rather poorly supported.

A: Next, which is better: EMACS or vi? This is one of the running religious wars. Think of it this way: any program that is correct will be correct if the language is statically typed. What static typing does is let the compiler have enough information to detect type mismatches at compile time instead of run time. This can be an annoyance if you're doing incremental sorts of programming, although (I maintain) if you're thinking clearly about your program it doesn't much matter; on the other hand, if you're building a really big program, like an operating system or a telephone switch, with dozens or hundreds or thousands of people working on it, or with really high reliability requirements, then having the compiler be able to detect a large class of problems for you, without needing a test case to exercise just the right code path, is a real advantage.

It's not as if dynamic typing is a new and different thing: C, for example, is effectively dynamically typed, since I can always cast a foo* to a bar*. It just means it's then my responsibility as a C programmer never to use code that is appropriate on a bar* when the address is really pointing to a foo*. But as a result of the issues with large programs, C grew tools like lint(1), strengthened its type system with typedef and eventually developed a strongly typed variant in C++. (And, of course, C++ in turn developed ways around the strong typing, with all the varieties of casts and generics/templates and with RTTI.)

One other thing, though --- don't confuse "agile programming" with "dynamic languages". Agile programming is about the way people work together in a project: can the project adapt to changing requirements to meet the customers' needs while maintaining a humane environment for the programmers? It can be done with dynamically typed languages, and often is, because they can be more productive (e.g., Ruby, Smalltalk), but it can be done, and has been done successfully, in C and even assembler. In fact, Rally Development even uses agile methods (SCRUM in particular) to do marketing and documentation.

A: With TDD + 100% code coverage + IDE tools to constantly run my tests, I do not feel a need for static typing any more. With no strong types, my unit testing has become so easy (simply use Maps for creating mock objects).
Especially when you are using generics, you can see the difference:

//Static typing
Map<String,List<Class1<Class2>>> someMap = [:] as HashMap<String,List<Class1<Class2>>>

vs

//Dynamic typing
def someMap = [:]

A: IMHO, the advantage of duck typing becomes magnified when you adhere to some conventions, such as naming your variables and methods in a consistent way. Taking the example from Ken G, I think it would read best:

class SimpleResults {
    def mapOfListResults
    def total
    def categories
}

Let's say you define a contract on some operation named 'calculateRating(A,B)' where A and B adhere to another contract. In pseudocode, it would read:

Long calculateRating(A someObj, B otherObj) {
    //some fake algorithm here:
    if (someObj.doStuff('foo') > otherObj.doStuff('bar'))
        return someObj.calcRating();
    else
        return otherObj.calcRating();
}

If you want to implement this in Java, both A and B must implement some kind of interface that reads something like this:

public interface MyService {
    public int doStuff(String input);
}

Besides, if you want to generalize your contract for calculating ratings (let's say you have another algorithm for rating calculations), you also have to create an interface:

public long calculateRating(MyService a, MyService b);

With duck typing, you can ditch your interfaces and just rely on the fact that at runtime, both A and B will respond correctly to your doStuff() calls. There is no need for a specific contract definition. This can work for you but it can also work against you.

The downside is that you have to be extra careful in order to guarantee that your code does not break when some other person changes it (i.e., the other person must be aware of the implicit contract on the method name and arguments).

Note that this is aggravated in Java, where the syntax is not as terse as it could be (compared to Scala, for example).

A counter-example of this is the Lift framework, where they say that the SLOC count of the framework is similar to Rails, but the test code has fewer lines because they don't need to implement type checks within the tests.

A: Here's one scenario where duck typing saves work.

Here's a very trivial class:

class BookFinder {
    def searchEngine

    def findBookByTitle(String title) {
        return searchEngine.find( [ "Title" : title ] )
    }
}

Now for the unit test:

void bookFinderTest() {
    // with Expando we can 'fake' any object at runtime.
    // alternatively you could write a MockSearchEngine class.
    def mockSearchEngine = new Expando()
    mockSearchEngine.find = {
        return new Book("Heart of Darkness","Joseph Conrad")
    }

    def bf = new BookFinder()
    bf.searchEngine = mockSearchEngine
    def book = bf.findBookByTitle("Heart of Darkness")
    assert(book.author == "Joseph Conrad")
}

We were able to substitute an Expando for the SearchEngine, because of the absence of static type checking. With static type checking we would have had to ensure that SearchEngine was an interface, or at least an abstract class, and create a full mock implementation of it. That's labour intensive, or you can use a sophisticated single-purpose mocking framework. But duck typing is general-purpose, and has helped us.

Because of duck typing, our unit test can provide any old object in place of the dependency, just as long as it implements the methods that get called.

To emphasise - you can do this in a statically typed language, with careful use of interfaces and class hierarchies. But with duck typing you can do it with less thinking and fewer keystrokes. That's an advantage of duck typing.
It doesn't mean that dynamic typing is the right paradigm to use in all situations. In my Groovy projects, I like to switch back to Java in circumstances where I feel that compiler warnings about types are going to help me.

A: A lot of the comments for duck typing don't really substantiate the claims. Not "having to worry" about a type is not sustainable for maintenance or for making an application extendable. I've really had a good opportunity to see Grails in action over my last contract, and it's quite funny to watch, really. Everyone is happy about the gains in being able to "create-app" and get going - sadly it all catches up to you on the back end.

Groovy seems the same way to me. Sure you can write very succinct code, and definitely there is some nice sugar in how we get to work with properties, collections, etc... But the cost of not knowing what the heck is being passed back and forth just gets worse and worse. At some point you're scratching your head wondering why the project has become 80% testing and 20% work. The lesson here is that "smaller" does not make for "more readable" code. Sorry folks, it's simple logic - the more you have to know intuitively, the more complex the process of understanding that code becomes. It's why GUIs have backed off becoming overly iconic over the years - sure they look pretty, but WTH is going on is not always obvious.

People on that project seemed to have trouble "nailing down" the lessons learned, but when you have methods returning either a single element of type T, an array of T, an ErrorResult or a null ... it becomes rather apparent.

One thing working with Groovy has done for me however - awesome billable hours woot!

A: To me, they aren't horribly different if you see dynamically typed languages as simply a form of static typing where everything inherits from a sufficiently abstract base class.

Problems arise when, as many have pointed out, you start getting strange with this. Someone pointed out a function that returns a single object, a collection, or a null. Have the function return a specific type, not multiple. Use multiple functions for single vs collection.

What it boils down to is that anyone can write bad code. Static typing is a great safety device, but sometimes the helmet gets in the way when you want to feel the wind in your hair.

A: Duck typing cripples most modern IDEs' static checking, which can point out errors as you type. Some consider this an advantage. I want the IDE/compiler to tell me I've made a stupid programmer trick as soon as possible.

My most recent favorite argument against duck typing comes from a Grails project DTO:

class SimpleResults {
    def results
    def total
    def categories
}

where results turns out to be something like Map<String, List<ComplexType>>, which can be discovered only by following a trail of method calls in different classes until you find where it was created. For the terminally curious, total is the sum of the sizes of the List<ComplexType>s and categories is the size of the Map.

It may have been clear to the original developer, but the poor maintenance guy (ME) lost a lot of hair tracking this one down.

A: It's a little bit difficult to see the value of duck typing until you've used it for a little while. Once you get used to it, you'll realize how much of a load off your mind it is to not have to deal with interfaces or having to worry about exactly what type something is.

A: It's not that duck typing is more productive than static typing as much as it is simply different.
With static typing you always have to worry that your data is the correct type and in Java it shows up through casting to the right type. With duck typing the type doesn't matter as long as it has the right method, so it really just eliminates a lot of the hassle of casting and conversions between types.
{ "language": "en", "url": "https://stackoverflow.com/questions/47972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: Is it possible to develop DirectX apps in Linux? More out of interest than anything else, but can you compile a DirectX app under Linux? Obviously there's no official SDK, but I was thinking it might be possible with Wine. Presumably Wine has an implementation of the DirectX interface in order to run games? Is it possible to link against that? (edit: This is called winelib.) Failing that, maybe a MinGW cross compiler with the app running under Wine. Half answered my own question here, but wondered if anyone had heard of anything like this being done?

A: If this is not about porting but creating, you should really consider OpenGL, as this API is as powerful as DirectX and much easier to port to Mac or Linux. I don't know your requirements, so it's better to mention them.

A: You can't link against Wine as it's essentially a call interdictor/translator rather than a set of libraries you can hook into. If Linux is important, go OpenGL/SDL/OpenAL.

A: I believe (I've never tried this) you can compile Linux binaries against winelib, so the result works just like a Linux executable, but with the Windows libraries. http://www.winehq.org/site/docs/winelib-guide/index

A: I've had some luck with this. I've managed to compile this simple Direct3D example. I used winelib for this (wine-dev package on Ubuntu). Thanks to alastair for pointing me to winelib. I modified the source slightly to convert the wchars to chars (1 on line 52, 2 on line 55, by removing the L before the string literals). There may be a way around this, but this got it up and running. I then compiled the source with the following:

wineg++ -ld3d9 -ld3dx9 triangle.cpp

This generates an a.out.exe.so binary, as well as an a.out script to run it under Wine.

A: Go to the directory with the source and type in:

winemaker --lower-uppercase -icomdlg32 -ishell32 -ishlwapi -iuser32 -igdi32 -iadvapi32 -ld3d9 .
make
wine yourexecutable.exe.so

If you get this error:

main.c:95:5: error: ‘struct IDirect3D9’ has no member named ‘CreateDevice’

make sure you have named your file main.cpp and not main.c.

A: There is currently no way to compile DirectX code to directly target Linux. You would build your application like you normally would, then run it using a compatibility layer like Wine/Cedega.

A: Wine is the only way to run DirectX in Linux.

A: You can compile a DirectX app in Linux, but not launch it straight away, if you use a cross-compiler that produces Windows executables and point it at the Windows SDK and DirectX SDK.

A: Although this question is dated, I decided to update it, because it keeps popping up for me as the first suggestion for this particular problem. As the previous answers already suggested, you can compile against winelib. However, there are another two solutions.

The first solution would be to use the MinGW toolchain provided by your distribution. MinGW is a 'cross-compiler' that compiles from macOS or Linux to Windows and has support for DirectX. Note that C++ libraries compiled with MinGW are not compatible with the MSVC compiler's ABI, and thus cannot be consumed by it. However, the resulting binaries can be executed using Wine.

The second solution would be to use clang as a cross compiler. Clang usually includes the compiler and linker needed for Windows out of the box. However, it'll require you to provide the headers and libraries yourself. On the other hand, libraries compiled this way are compatible with MSVC and, thus, can be consumed by it.
Side note: The latter allows you to set up a CI server using Linux (I did so on a Raspberry Pi), which creates compatible binaries for end users.
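To sketch the clang route concretely: the target triple below is real, but the include and library paths are placeholders that you must point at your own copies of the Windows SDK and DirectX headers/libraries (a hedged example, not a turnkey command):

clang++ --target=x86_64-pc-windows-msvc -fuse-ld=lld triangle.cpp -o triangle.exe -I /path/to/winsdk/include -L /path/to/winsdk/lib

With the MSVC target triple, clang emits PE/COFF objects and lld links them, so the resulting .exe follows the MSVC ABI and can be run under Wine or on Windows itself.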
{ "language": "en", "url": "https://stackoverflow.com/questions/47975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Deciphering C++ template error messages I'm really beginning to understand what people mean when they say that C++'s error messages are pretty terrible in regards to templates. I've seen horrendously long errors for things as simple as a function not matching its prototype. Are there any tricks to deciphering these errors? EDIT: I'm using both gcc and MSVC. They both seem to be pretty terrible.

A: As @nsanders said, STLFilt is a good solution. A home grown STLFilt (when you don't want to go to the trouble of installing Perl) is to copy the error message into an editor and start replacing parts of the error until it becomes (more) manageable.

e.g.

s/std::basic_string<char,std::char_traits<char>,std::allocator<char>>/string/g

In less geeky terms this means:

Replace: std::basic_string<char,std::char_traits<char>,std::allocator<char>>
With: string

A: Even though it's an old post, this may be helpful for other people stumbling upon it. I had exactly the same issue; in my case the errors could not even be printed to screen anymore, as they were too long. So I dumped them into a text file and tried some basic searches using a text editor rather than grep'ing through the file, which could be as large as 20 MB (not bad for just errors). Most errors would be duplicated, as I compiled in parallel, so that was another huge problem. As I was getting tired with that approach (and also it was not very productive), I developed a small helper program which I could chain directly into my compiler toolchain, so that any output generated by the compiler could be formatted based on some rules defined in a json file. The program can be found here: https://github.com/tomrobin-teschner/dotify

There are three basic functionalities:

*Don't print the current output (line) from the compiler if it contains a certain string
*Only print a certain line if it contains a keyword (which can be colorised)
*If there is a template involved, remove the content between the <> brackets and replace it by dots. So for example, MyClass<std::vector<double>, std::array<double, 3>> would simply be replaced by MyClass<...>.

The full error message is still stored inside a log file (and can be used later if more detailed information is required); the parser only works on the output which is printed to the console. The command to invoke the parser is

/path/to/program | tee log | /path/to/parser -f /path/to/inputFile.json

/path/to/program is the program to execute (and from which the output should be formatted). /path/to/parser -f /path/to/inputFile.json is the location of the parser; the -f flag specifies the input file (in json format), which, for a very simple case, could look like this:

{
  "ignoreCompleteLineIfItContainsSubstring" : [
    "should be suppressed"
  ],
  "ignoreContentBetweenDelimiter" : [
    {
      "startingDelimiter" : "<",
      "endingDelimiter" : ">",
      "replaceContentBy" : "..."
    }
  ],
  "styleLineContainingKeywords" : [
    {
      "keyword" : "error",
      "removeDuplicates" : true,
      "applyStyle" : "onKeyword",
      "color" : "red",
      "style" : "bold"
    }
  ]
}

A full list of options and explanations can be found on the project site (https://github.com/tomrobin-teschner/dotify)

A: Some compilers give better messages than others. What compiler are you using? Having said that, they are all pretty bad.
C++0X will fix most of this problem (see concepts), but the standard won't be released until 2009, and broad support will probably be even later than that :-( A: At least in Visual Studio, there is more information given in the Output Build window rather than the Error List. I've had a template error in the Error List state, "Cannot convert Foo<int> to Foo<int>". There were some lines following the actual error in the Output window that helped me to decipher what the actual problem was. A: You can try the following tool to make things more sane: http://www.bdsoft.com/tools/stlfilt.html A: I sure as hell can't. Small errors explode into pages and pages of unreadable junk. Usually early in the morning, before coffee. :( My only advice is to take a deep breath, start at the top and try and parse the important pieces of information. (I know, easier said than done, right?).
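For reference, here is the kind of diagnostic improvement that the concepts mention above is pointing at. This is a minimal, illustrative C++20 sketch; the concept name is made up for the example:

#include <algorithm>
#include <iterator>
#include <list>
#include <vector>

// A made-up constraint: the container must expose random-access iterators,
// which is what std::sort actually requires.
template <typename C>
concept RandomAccessContainer =
    std::random_access_iterator<typename C::iterator>;

template <RandomAccessContainer C>
void sort_in_place(C& c) { std::sort(c.begin(), c.end()); }

int main() {
    std::vector<int> v{3, 1, 2};
    sort_in_place(v);    // fine

    std::list<int> l{3, 1, 2};
    // sort_in_place(l); // fails with a short "constraints not satisfied"
                         // message naming RandomAccessContainer, instead of
                         // pages of std::sort instantiation backtrace
}

The failure is reported at the call site against a named constraint, which is exactly the readability gap the answers above are working around.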
{ "language": "en", "url": "https://stackoverflow.com/questions/47980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: How do I set, clear, and toggle a single bit?

A:

int set_nth_bit(int num, int n){
    return (num | 1 << n);
}

int clear_nth_bit(int num, int n){
    return (num & ~( 1 << n));
}

int toggle_nth_bit(int num, int n){
    return num ^ (1 << n);
}

int check_nth_bit(int num, int n){
    return num & (1 << n);
}

A: From snip-c.zip's bitops.h:

/*
**  Bit set, clear, and test operations
**
**  public domain snippet by Bob Stout
*/

typedef enum {ERROR = -1, FALSE, TRUE} LOGICAL;

#define BOOL(x) (!(!(x)))

#define BitSet(arg,posn) ((arg) | (1L << (posn)))
#define BitClr(arg,posn) ((arg) & ~(1L << (posn)))
#define BitTst(arg,posn) BOOL((arg) & (1L << (posn)))
#define BitFlp(arg,posn) ((arg) ^ (1L << (posn)))

OK, let's analyze things...

The common expression that you seem to be having problems with in all of these is "(1L << (posn))". All this does is create a mask with a single bit on, and it will work with any integer type. The "posn" argument specifies the position where you want the bit. If posn==0, then this expression will evaluate to:

0000 0000 0000 0000 0000 0000 0000 0001 binary.

If posn==8, it will evaluate to:

0000 0000 0000 0000 0000 0001 0000 0000 binary.

In other words, it simply creates a field of 0's with a 1 at the specified position. The only tricky part is in the BitClr() macro, where we need to set a single 0 bit in a field of 1's. This is accomplished by using the 1's complement of the same expression as denoted by the tilde (~) operator.

Once the mask is created it's applied to the argument just as you suggest, by use of the bitwise and (&), or (|), and xor (^) operators. Since the mask is of type long, the macros will work just as well on char's, short's, int's, or long's.

The bottom line is that this is a general solution to an entire class of problems. It is, of course, possible and even appropriate to rewrite the equivalent of any of these macros with explicit mask values every time you need one, but why do it? Remember, the macro substitution occurs in the preprocessor and so the generated code will reflect the fact that the values are considered constant by the compiler - i.e. it's just as efficient to use the generalized macros as to "reinvent the wheel" every time you need to do bit manipulation.

Unconvinced? Here's some test code - I used Watcom C with full optimization and without using _cdecl so the resulting disassembly would be as clean as possible:

----[ TEST.C ]----------------------------------------------------------------

#define BOOL(x) (!(!(x)))

#define BitSet(arg,posn) ((arg) | (1L << (posn)))
#define BitClr(arg,posn) ((arg) & ~(1L << (posn)))
#define BitTst(arg,posn) BOOL((arg) & (1L << (posn)))
#define BitFlp(arg,posn) ((arg) ^ (1L << (posn)))

int bitmanip(int word)
{
      word = BitSet(word, 2);
      word = BitSet(word, 7);
      word = BitClr(word, 3);
      word = BitFlp(word, 9);
      return word;
}

----[ TEST.OUT (disassembled) ]-----------------------------------------------

Module: C:\BINK\tst.c
Group: 'DGROUP' CONST,CONST2,_DATA,_BSS

Segment: _TEXT  BYTE  00000008 bytes

 0000  0c 84        bitmanip_  or al,84H   ; set bits 2 and 7
 0002  80 f4 02                xor ah,02H  ; flip bit 9 of EAX (bit 1 of AH)
 0005  24 f7                   and al,0f7H
 0007  c3                      ret

No disassembly errors

----[ finis ]-----------------------------------------------------------------

A: Using the Standard C++ Library: std::bitset<N>. Or the Boost version: boost::dynamic_bitset.
There is no need to roll your own:

#include <bitset>
#include <iostream>

int main()
{
    std::bitset<5> x;

    x[1] = 1;
    x[2] = 0;
    // Note x[0-4] valid

    std::cout << x << std::endl;
}

[Alpha:] > ./a.out
00010

The Boost version allows a runtime sized bitset compared with a standard library compile-time sized bitset.

A: Here are some macros I use:

#define SET_FLAG(Status, Flag)            ((Status) |= (Flag))
#define CLEAR_FLAG(Status, Flag)          ((Status) &= ~(Flag))
#define INVALID_FLAGS(ulFlags, ulAllowed) ((ulFlags) & ~(ulAllowed))
#define TEST_FLAGS(t,ulMask,ulBit)        (((t)&(ulMask)) == (ulBit))
#define IS_FLAG_SET(t,ulMask)             TEST_FLAGS(t,ulMask,ulMask)
#define IS_FLAG_CLEAR(t,ulMask)           TEST_FLAGS(t,ulMask,0)

A: How do you set, clear, and toggle a single bit?

To address a common coding pitfall when attempting to form the mask: 1 is not always wide enough.

What problems happen when number is a wider type than 1? x may be too great for the shift 1 << x, leading to undefined behavior (UB). Even if x is not too great, ~ may not flip enough most-significant bits.

// assume 32 bit int/unsigned
unsigned long long number = foo();

unsigned x = 40;
number |= (1 << x);  // UB
number ^= (1 << x);  // UB
number &= ~(1 << x); // UB

x = 10;
number &= ~(1 << x); // Wrong mask, not wide enough

To ensure 1 is wide enough:

Code could use 1ull or pedantically (uintmax_t)1 and let the compiler optimize.

number |= (1ull << x);
number |= ((uintmax_t)1 << x);

Or cast - which makes for coding/review/maintenance issues keeping the cast correct and up-to-date.

number |= (type_of_number)1 << x;

Or gently promote the 1 by forcing a math operation that is at least as wide as the type of number.

number |= (number*0 + 1) << x;

As with most bit manipulations, it is best to work with unsigned types rather than signed ones.

A: Setting a bit

Use the bitwise OR operator (|) to set a bit.

number |= 1UL << n;

That will set the nth bit of number. n should be zero if you want to set the 1st bit, and so on up to n-1 if you want to set the nth bit. Use 1ULL if number is wider than unsigned long; promotion of 1UL << n doesn't happen until after evaluating 1UL << n, where it's undefined behaviour to shift by more than the width of a long. The same applies to all the rest of the examples.

Clearing a bit

Use the bitwise AND operator (&) to clear a bit.

number &= ~(1UL << n);

That will clear the nth bit of number. You must invert the bit string with the bitwise NOT operator (~), then AND it.

Toggling a bit

The XOR operator (^) can be used to toggle a bit.

number ^= 1UL << n;

That will toggle the nth bit of number.

Checking a bit

You didn't ask for this, but I might as well add it. To check a bit, shift the number n to the right, then bitwise AND it:

bit = (number >> n) & 1U;

That will put the value of the nth bit of number into the variable bit.

Changing the nth bit to x

Setting the nth bit to either 1 or 0 can be achieved with the following on a 2's complement C++ implementation:

number ^= (-x ^ number) & (1UL << n);

Bit n will be set if x is 1, and cleared if x is 0. If x has some other value, you get garbage. x = !!x will booleanize it to 0 or 1.

To make this independent of 2's complement negation behaviour (where -1 has all bits set, unlike on a 1's complement or sign/magnitude C++ implementation), use unsigned negation.

number ^= (-(unsigned long)x ^ number) & (1UL << n);

or

unsigned long newbit = !!x; // Also booleanize to force 0 or 1
number ^= (-newbit ^ number) & (1UL << n);

It's generally a good idea to use unsigned types for portable bit manipulation.
or number = (number & ~(1UL << n)) | (x << n); (number & ~(1UL << n)) will clear the nth bit and (x << n) will set the nth bit to x. It's also generally a good idea to not to copy/paste code in general and so many people use preprocessor macros (like the community wiki answer further down) or some sort of encapsulation. A: For the beginner I would like to explain a bit more with an example: Example: value is 0x55; bitnum : 3rd. The & operator is used check the bit: 0101 0101 & 0000 1000 ___________ 0000 0000 (mean 0: False). It will work fine if the third bit is 1 (then the answer will be True) Toggle or Flip: 0101 0101 ^ 0000 1000 ___________ 0101 1101 (Flip the third bit without affecting other bits) | operator: set the bit 0101 0101 | 0000 1000 ___________ 0101 1101 (set the third bit without affecting other bits) A: This program is based out of @Jeremy's above solution. If someone wish to quickly play around. public class BitwiseOperations { public static void main(String args[]) { setABit(0, 4); // set the 4th bit, 0000 -> 1000 [8] clearABit(16, 5); // clear the 5th bit, 10000 -> 00000 [0] toggleABit(8, 4); // toggle the 4th bit, 1000 -> 0000 [0] checkABit(8,4); // check the 4th bit 1000 -> true } public static void setABit(int input, int n) { input = input | ( 1 << n-1); System.out.println(input); } public static void clearABit(int input, int n) { input = input & ~(1 << n-1); System.out.println(input); } public static void toggleABit(int input, int n) { input = input ^ (1 << n-1); System.out.println(input); } public static void checkABit(int input, int n) { boolean isSet = ((input >> n-1) & 1) == 1; System.out.println(isSet); } } Output : 8 0 0 true A: A templated version (put in a header file) with support for changing multiple bits (works on AVR microcontrollers btw): namespace bit { template <typename T1, typename T2> constexpr inline T1 bitmask(T2 bit) {return (T1)1 << bit;} template <typename T1, typename T3, typename ...T2> constexpr inline T1 bitmask(T3 bit, T2 ...bits) {return ((T1)1 << bit) | bitmask<T1>(bits...);} /** Set these bits (others retain their state) */ template <typename T1, typename ...T2> constexpr inline void set (T1 &variable, T2 ...bits) {variable |= bitmask<T1>(bits...);} /** Set only these bits (others will be cleared) */ template <typename T1, typename ...T2> constexpr inline void setOnly (T1 &variable, T2 ...bits) {variable = bitmask<T1>(bits...);} /** Clear these bits (others retain their state) */ template <typename T1, typename ...T2> constexpr inline void clear (T1 &variable, T2 ...bits) {variable &= ~bitmask<T1>(bits...);} /** Flip these bits (others retain their state) */ template <typename T1, typename ...T2> constexpr inline void flip (T1 &variable, T2 ...bits) {variable ^= bitmask<T1>(bits...);} /** Check if any of these bits are set */ template <typename T1, typename ...T2> constexpr inline bool isAnySet(const T1 &variable, T2 ...bits) {return variable & bitmask<T1>(bits...);} /** Check if all these bits are set */ template <typename T1, typename ...T2> constexpr inline bool isSet (const T1 &variable, T2 ...bits) {return ((variable & bitmask<T1>(bits...)) == bitmask<T1>(bits...));} /** Check if all these bits are not set */ template <typename T1, typename ...T2> constexpr inline bool isNotSet (const T1 &variable, T2 ...bits) {return ((variable & bitmask<T1>(bits...)) != bitmask<T1>(bits...));} } Example of use: #include <iostream> #include <bitset> // for console output of binary values // and include the code above of course using namespace 
std; int main() { uint8_t v = 0b1111'1100; bit::set(v, 0); cout << bitset<8>(v) << endl; bit::clear(v, 0,1); cout << bitset<8>(v) << endl; bit::flip(v, 0,1); cout << bitset<8>(v) << endl; bit::clear(v, 0,1,2,3,4,5,6,7); cout << bitset<8>(v) << endl; bit::flip(v, 0,7); cout << bitset<8>(v) << endl; } BTW: It turns out that constexpr and inline are not applied if you don't pass an optimizer argument (e.g. -O3) to the compiler. Feel free to try the code at https://godbolt.org/ and look at the ASM output. A: Here is a routine in C to perform the basic bitwise operations: #define INT_BIT (unsigned int) (sizeof(unsigned int) * 8U) //number of bits in unsigned int int main(void) { unsigned int k = 5; //k is the bit position; here it is the 5th bit from the LSb (0th bit) unsigned int regA = 0x00007C7C; //we perform bitwise operations on regA regA |= (1U << k); //Set kth bit regA &= ~(1U << k); //Clear kth bit regA ^= (1U << k); //Toggle kth bit regA = (regA << k) | regA >> (INT_BIT - k); //Rotate left by k bits regA = (regA >> k) | regA << (INT_BIT - k); //Rotate right by k bits return 0; } A: Here's my favorite bit arithmetic macro, which works for any type of unsigned integer array from unsigned char up to size_t (which is the biggest type that should be efficient to work with): #define BITOP(a,b,op) \ ((a)[(size_t)(b)/(8*sizeof *(a))] op ((size_t)1<<((size_t)(b)%(8*sizeof *(a))))) To set a bit: BITOP(array, bit, |=); To clear a bit: BITOP(array, bit, &=~); To toggle a bit: BITOP(array, bit, ^=); To test a bit: if (BITOP(array, bit, &)) ... etc. A: As this is tagged "embedded" I'll assume you're using a microcontroller. All of the above suggestions are valid & work (read-modify-write, unions, structs, etc.). However, during a bout of oscilloscope-based debugging I was amazed to find that these methods have a considerable overhead in CPU cycles compared to writing a value directly to the micro's PORTnSET / PORTnCLEAR registers, which makes a real difference where there are tight loops / high-frequency ISRs toggling pins. For those unfamiliar: In my example, the micro has a general pin-state register PORTn which reflects the output pins, so doing PORTn |= BIT_TO_SET results in a read-modify-write to that register. However, the PORTnSET / PORTnCLEAR registers take a '1' to mean "please make this bit 1" (SET) or "please make this bit zero" (CLEAR) and a '0' to mean "leave the pin alone". So, you end up with two port addresses depending on whether you're setting or clearing the bit (not always convenient) but a much faster reaction and smaller assembled code. A: The other option is to use bit fields: struct bits { unsigned int a:1; unsigned int b:1; unsigned int c:1; }; struct bits mybits; defines a 3-bit field (actually, it's three 1-bit fields). Bit operations now become a bit (haha) simpler: To set or clear a bit: mybits.b = 1; mybits.c = 0; To toggle a bit: mybits.a = !mybits.a; mybits.b = ~mybits.b; mybits.c ^= 1; /* all work */ Checking a bit: if (mybits.c) //if mybits.c is non zero the next line below will execute This only works with fixed-size bit fields. Otherwise you have to resort to the bit-twiddling techniques described in previous posts. A: Let's suppose a few things first: num = 55 Integer to perform bitwise operations (set, get, clear, toggle). n = 4 0 based bit position to perform bitwise operations. How to get a bit? * *To get the nth bit of num right shift num, n times. Then perform bitwise AND & with 1. bit = (num >> n) & 1; How it works?
0011 0111 (55 in decimal) >> 4 (right shift 4 times) ----------------- 0000 0011 & 0000 0001 (1 in decimal) ----------------- => 0000 0001 (final result) How to set a bit? * *To set a particular bit of num, left shift 1 n times. Then perform bitwise OR | operation with num. num |= (1 << n); // Equivalent to; num = (1 << n) | num; How it works? 0000 0001 (1 in decimal) << 4 (left shift 4 times) ----------------- 0001 0000 | 0011 0111 (55 in decimal) ----------------- => 0011 0111 (final result; bit 4 of 55 was already set, so the value is unchanged) How to clear a bit? * *Left shift 1, n times i.e. 1 << n. *Perform bitwise complement with the above result, so that the nth bit becomes 0 and the rest of the bits become 1 i.e. ~ (1 << n). *Finally, perform bitwise AND & operation with the above result and num. The above three steps together can be written as num & (~ (1 << n)); num &= (~(1 << n)); // Equivalent to; num = num & (~(1 << n)); How it works? 0000 0001 (1 in decimal) << 4 (left shift 4 times) ----------------- ~ 0001 0000 ----------------- 1110 1111 & 0011 0111 (55 in decimal) ----------------- => 0010 0111 (final result) How to toggle a bit? To toggle a bit we use the bitwise XOR ^ operator. The bitwise XOR operator evaluates to 1 if the corresponding bits of the two operands are different, otherwise it evaluates to 0. Which means to toggle a bit, we need to perform an XOR operation with the bit you want to toggle and 1. num ^= (1 << n); // Equivalent to; num = num ^ (1 << n); How it works? * *If the bit to toggle is 0 then, 0 ^ 1 => 1. *If the bit to toggle is 1 then, 1 ^ 1 => 0. 0000 0001 (1 in decimal) << 4 (left shift 4 times) ----------------- 0001 0000 ^ 0011 0111 (55 in decimal) ----------------- => 0010 0111 (final result) Recommended reading - Bitwise operator exercises A: The bitfield approach has other advantages in the embedded arena. You can define a struct that maps directly onto the bits in a particular hardware register. struct HwRegister { unsigned int errorFlag:1; // one-bit flag field unsigned int Mode:3; // three-bit mode field unsigned int StatusCode:4; // four-bit status code }; struct HwRegister CR3342_AReg; You need to be aware of the bit packing order - I think it's MSB first, but this may be implementation-dependent. Also, verify how your compiler handles fields crossing byte boundaries. You can then read, write, test the individual values as before. A: I use macros defined in a header file to handle bit set and clear: /* a=target variable, b=bit number to act upon 0-n */ #define BIT_SET(a,b) ((a) |= (1ULL<<(b))) #define BIT_CLEAR(a,b) ((a) &= ~(1ULL<<(b))) #define BIT_FLIP(a,b) ((a) ^= (1ULL<<(b))) #define BIT_CHECK(a,b) (!!((a) & (1ULL<<(b)))) // '!!' to make sure this returns 0 or 1 #define BITMASK_SET(x, mask) ((x) |= (mask)) #define BITMASK_CLEAR(x, mask) ((x) &= (~(mask))) #define BITMASK_FLIP(x, mask) ((x) ^= (mask)) #define BITMASK_CHECK_ALL(x, mask) (!(~(x) & (mask))) #define BITMASK_CHECK_ANY(x, mask) ((x) & (mask)) A: Check a bit at an arbitrary location in a variable of arbitrary type: #define bit_test(x, y) ( ( ((const char*)&(x))[(y)>>3] & 0x80 >> ((y)&0x07)) >> (7-((y)&0x07) ) ) Sample usage: #include <stdio.h> int main(void) { unsigned char arr[8] = { 0x01, 0x23, 0x45, 0x67, 0x89, 0xAB, 0xCD, 0xEF }; for (int ix = 0; ix < 64; ++ix) printf("bit %d is %d\n", ix, bit_test(arr, ix)); return 0; } Notes: This is designed to be fast (given its flexibility) and non-branchy. It results in efficient SPARC machine code when compiled with Sun Studio 8; I've also tested it using MSVC++ 2008 on amd64.
It's possible to make similar macros for setting and clearing bits. The key difference of this solution compared with many others here is that it works for any location in pretty much any type of variable. A: More general, for arbitrary sized bitmaps: #define BITS 8 #define BIT_SET( p, n) (p[(n)/BITS] |= (0x80>>((n)%BITS))) #define BIT_CLEAR(p, n) (p[(n)/BITS] &= ~(0x80>>((n)%BITS))) #define BIT_ISSET(p, n) (p[(n)/BITS] & (0x80>>((n)%BITS))) A: This program changes any data bit from 0 to 1 or 1 to 0: #include <stdio.h> int main(void) { unsigned int data = 0x000000F0; int bitpos = 4; int bitvalue = 1; unsigned int bit = data; bit = (bit>>bitpos)&0x00000001; int invbitvalue = 0x00000001&(~bitvalue); printf("%x\n",bit); if (bitvalue == 0) { if (bit == 0) printf("%x\n", data); else { data = (data^(invbitvalue<<bitpos)); printf("%x\n", data); } } else { if (bit == 1) printf("elseif %x\n", data); else { data = (data|(bitvalue<<bitpos)); printf("else %x\n", data); } } return 0; } A: If you're doing a lot of bit twiddling you might want to use masks, which will make the whole thing quicker. The following functions are very fast and are still flexible (they allow bit twiddling in bit maps of any size). const unsigned char TQuickByteMask[8] = { 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, }; /** Set bit in any sized bit mask. * * @return none * * @param bit - Bit number. * @param bitmap - Pointer to bitmap. */ void TSetBit( short bit, unsigned char *bitmap) { short n, x; x = bit / 8; // Index to byte. n = bit % 8; // Specific bit in byte. bitmap[x] |= TQuickByteMask[n]; // Set bit. } /** Reset bit in any sized mask. * * @return None * * @param bit - Bit number. * @param bitmap - Pointer to bitmap. */ void TResetBit( short bit, unsigned char *bitmap) { short n, x; x = bit / 8; // Index to byte. n = bit % 8; // Specific bit in byte. bitmap[x] &= (~TQuickByteMask[n]); // Reset bit. } /** Toggle bit in any sized bit mask. * * @return none * * @param bit - Bit number. * @param bitmap - Pointer to bitmap. */ void TToggleBit( short bit, unsigned char *bitmap) { short n, x; x = bit / 8; // Index to byte. n = bit % 8; // Specific bit in byte. bitmap[x] ^= TQuickByteMask[n]; // Toggle bit. } /** Checks specified bit. * * @return 1 if bit set else 0. * * @param bit - Bit number. * @param bitmap - Pointer to bitmap. */ short TIsBitSet( short bit, const unsigned char *bitmap) { short n, x; x = bit / 8; // Index to byte. n = bit % 8; // Specific bit in byte. // Test bit (logical AND). if (bitmap[x] & TQuickByteMask[n]) return 1; return 0; } /** Checks specified bit. * * @return 1 if bit reset else 0. * * @param bit - Bit number. * @param bitmap - Pointer to bitmap. */ short TIsBitReset( short bit, const unsigned char *bitmap) { return TIsBitSet(bit, bitmap) ^ 1; } /** Count number of bits set in a bitmap. * * @return Number of bits set. * * @param bitmap - Pointer to bitmap. * @param size - Bitmap size (in bits). * * @note Not very efficient in terms of execution speed. If you are doing * some computationally intense stuff you may need a more complex * implementation which would be faster (especially for big bitmaps). * See (http://graphics.stanford.edu/~seander/bithacks.html). */ int TCountBits( const unsigned char *bitmap, int size) { int i, count = 0; for (i=0; i<size; i++) if (TIsBitSet(i, bitmap)) count++; return count; } Note, to set bit 'n' in a 16 bit integer you do the following: TSetBit( n, (unsigned char *)&my_int); It's up to you to ensure that the bit number is within the range of the bit map that you pass.
Note that for little endian processors, bytes, words, dwords, qwords, etc. map correctly to each other in memory (the main reason that little endian processors are 'better' than big-endian processors; ah, I feel a flame war coming on...). A: It is sometimes worth using an enum to name the bits: enum ThingFlags { ThingMask = 0x0000, ThingFlag0 = 1 << 0, ThingFlag1 = 1 << 1, ThingError = 1 << 8, }; Then use the names later on. I.e. write thingstate |= ThingFlag1; thingstate &= ~ThingFlag0; if (thingstate & ThingError) {...} to set, clear and test. This way you hide the magic numbers from the rest of your code. Other than that, I endorse Paige Ruten's solution. A: Use this: int ToggleNthBit ( unsigned char n, int num ) { if(num & (1 << n)) num &= ~(1 << n); else num |= (1 << n); return num; } A: Expanding on the bitset answer: #include <iostream> #include <bitset> #include <string> using namespace std; int main() { bitset<8> byte(std::string("10010011")); // Set Bit byte.set(3); // 10010111 // Clear Bit byte.reset(2); // 10010101 // Toggle Bit byte.flip(7); // 00010101 cout << byte << endl; return 0; } A: If you want to perform all these operations with C programming in the Linux kernel then I suggest using the standard APIs of the Linux kernel. See https://www.kernel.org/doc/htmldocs/kernel-api/ch02s03.html set_bit Atomically set a bit in memory clear_bit Clears a bit in memory change_bit Toggle a bit in memory test_and_set_bit Set a bit and return its old value test_and_clear_bit Clear a bit and return its old value test_and_change_bit Change a bit and return its old value test_bit Determine whether a bit is set Note: Here the whole operation happens in a single step. So these are all guaranteed to be atomic even on SMP computers and are useful to keep coherence across processors. A: Visual C 2010, and perhaps many other compilers, have direct support for boolean operations built in. A bit has two possible values, just like a boolean, so we can use booleans instead - even if they take up more space than a single bit in memory in this representation. This works, and even the sizeof() operator works properly. bool IsGph[256], IsNotGph[256]; // Initialize boolean array to detect printable characters for(i=0; i<sizeof(IsGph); i++) { IsGph[i] = isgraph((unsigned char)i); } So, to your question, IsGph[i] = 1, or IsGph[i] = 0 make setting and clearing bools easy. To find unprintable characters: // Initialize boolean array to detect UN-printable characters, // then call function to toggle required bits true, while initializing a 2nd // boolean array as the complement of the 1st. for(i=0; i<sizeof(IsGph); i++) { if(IsGph[i]) { IsNotGph[i] = 0; } else { IsNotGph[i] = 1; } } Note there is nothing "special" about this code. It treats a bit like an integer - which technically, it is. A 1 bit integer that can hold 2 values, and 2 values only. I once used this approach to find duplicate loan records, where loan_number was the ISAM key, using the 6-digit loan number as an index into the bit array. Savagely fast, and after 8 months, it proved that the mainframe system we were getting the data from was in fact malfunctioning. The simplicity of bit arrays makes confidence in their correctness very high - vs a searching approach for example.
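As an aside, the loan-record anecdote above translates directly into a packed bit array rather than a bool-per-entry array. Here is a minimal C sketch of that duplicate-detection idea (my own illustration, not the original poster's code; the key values are made up):

#include <stdio.h>
#include <stdlib.h>

/* One bit per possible 6-digit key: 1,000,000 bits = 125,000 bytes. */
#define MAX_KEYS 1000000UL

int main(void)
{
    unsigned char *seen = calloc(MAX_KEYS / 8 + 1, 1);
    unsigned long keys[] = { 123456, 42, 123456, 999999 }; /* hypothetical input */
    size_t i;

    if (!seen) return 1;
    for (i = 0; i < sizeof keys / sizeof keys[0]; i++) {
        unsigned long k = keys[i];
        if (seen[k / 8] & (1u << (k % 8)))
            printf("duplicate: %lu\n", k);   /* bit already set: key seen before */
        else
            seen[k / 8] |= (1u << (k % 8));  /* mark key as seen */
    }
    free(seen);
    return 0;
}

The whole membership test is one index, one shift and one AND per key, which is why this kind of lookup is so fast compared with searching.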
A: Setting the nth bit to x (bit value) without using -1 Sometimes when you are not sure what -1 or the like will result in, you may wish to set the nth bit without using -1: number = (((number | (1 << n)) ^ (1 << n))) | (x << n); Explanation: (number | (1 << n)) sets the nth bit to 1 (where | denotes bitwise OR), then with (...) ^ (1 << n) we set the nth bit to 0, and finally with (...) | (x << n) we set the nth bit that was 0, to (bit value) x. This also works in golang. A: Try one of these functions in the C language to change the nth bit: char bitfield; // Start at 0th position void chang_n_bit(int n, int value) { bitfield = (bitfield | (1 << n)) & (~( (1 << n) ^ (value << n) )); } Or void chang_n_bit(int n, int value) { bitfield = (bitfield | (1 << n)) & ((value << n) | ((~0) ^ (1 << n))); } Or void chang_n_bit(int n, int value) { if(value) bitfield |= 1 << n; else bitfield &= ~0 ^ (1 << n); } char get_n_bit(int n) { return (bitfield & (1 << n)) ? 1 : 0; }
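For completeness, here is a small self-contained harness (an addition of mine, not part of the original answer) that exercises the third chang_n_bit variant together with get_n_bit:

#include <stdio.h>

char bitfield; /* start at 0th position */

void chang_n_bit(int n, int value)
{
    if (value)
        bitfield |= 1 << n;          /* set bit n */
    else
        bitfield &= ~0 ^ (1 << n);   /* clear bit n, leave the rest alone */
}

char get_n_bit(int n)
{
    return (bitfield & (1 << n)) ? 1 : 0;
}

int main(void)
{
    chang_n_bit(0, 1);   /* set bit 0   -> 0000 0001 */
    chang_n_bit(2, 1);   /* set bit 2   -> 0000 0101 */
    chang_n_bit(0, 0);   /* clear bit 0 -> 0000 0100 */
    printf("bit 2 = %d, bit 0 = %d\n", get_n_bit(2), get_n_bit(0)); /* prints: bit 2 = 1, bit 0 = 0 */
    return 0;
}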
{ "language": "en", "url": "https://stackoverflow.com/questions/47981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3062" }
Q: Is it worth investing time in learning to use Emacs? Right up front: I do not want to start a religious war. I've used vi for as long as I can remember, and the few times I've tried to pick up Emacs I've been so lost that I've quickly given up. Lots of people find Emacs very powerful, however. Its programmability is somewhat legendary. I'm primarily doing Solaris+Java development, and I'd like to ask a simple question: will my productivity increase if I invest time in getting my head around Emacs? Is the functionality that it offers over Vim going to be paid back in productivity increases in a reasonable timeframe? Repeat: I don't want a "my editor is better than yours" answer. I just want a yes or no answer as to whether it's worth investing the time or not. Will my productivity really increase? A: Emacs will provide a productivity gain if you're willing to learn and customize it to fit your needs. Most people are not. To increase your productivity you must use the tool for more than simple editing - most people never progress past simple editing. Here's a quick test: have you customized your window manager to make your environment more efficient (tailored to fit your needs)? If 'no' then likely you will not get the ROI by learning emacs. That being said, if you're developing Java, Eclipse is the standard answer, so your question is pretty moot. A: I was very happy with my Vim, but once I heard of org-mode, I started learning Emacs. org-mode could be one strong reason to learn Emacs. A: I love emacs and use it every day. That said, I don't think the cost of learning it will be recouped by productivity gains down the road. If you're programming Java, you need a good IDE. Emacs goes a fair way towards being one, but let's face it, IDEA et al beat it hands down. (emacs probably inspired a lot of those IDEs, but that's another story). A: [Disclaimer: personally, I prefer Vim. Disclaimer disclaimer: read on.] Vim excels in the small: by making motion and action separate concepts and providing facilities for complex repeats, you can perform incredibly powerful editing operations in just a short sequence of keystrokes. You can easily do things in Vim in the normal course of editing that would require you to drop down to scripting in Emacs. Also, most of the power you use comes out of the box, so even if you have extensive .vimrc customisations, chances are you will be able to work productively with any Vim installation. Emacs excels in the large: by mapping all of its UI concepts directly to basic constructs and concepts in Elisp, it becomes very easy to globally introduce features for specific kinds of files or circumstances, making Emacs something like a text-based and much more structuredly programmable form of Excel. This presumes that you are going to spend a lot of time customising your environment for personal needs and preferences. Of course, Emacs does do its best to make it easy to stay inside that one environment for everything and anything you may want to do. Ultimately, neither is superior. They offer different styles, and depending on your proclivities, one or the other will suit your personal needs and way of thinking better. It is always helpful to know both (plus more editors), of course. But you aren’t going to be appreciably more productive this way or that. A: Twice I've tried to learn Emacs. It just doesn't fit how my brain works, and so I don't use it. Emacs (or vim) is not significantly better than vim (or Emacs). 
Both have many options to add to them that allow them to do amazing things. I have no doubt that anything you can get done in Emacs you can also get done in Vim, just not standard. Try Emacs. See if it fits better. It's a no-lose situation. A: I prefer emacs to vi, but I'm comfortable in both. There are some things that you can do in emacs that make it more powerful than vi, but not all of them are even programming-related. (Can you send email or read news from within vi? No, but who cares?) If you're comfortable with lisp (I'm not), you might be able to write add-ons and modes and stuff to make your life easier, but that's just likely to be syntax colouring and brace matching and eye candy like that. I will stop rambling now. Will your productivity increase using emacs? No. Update: See my comment below. Since I posted this, I have come across ways that using emacs has made me more productive than using vi. A: No (and I've used both). A: I want to look into emacs further, but I just can't use it for long stretches of time; it hurts my hands. Am I doing something horribly wrong? A: vim and emacs, they are THE most capable editors and have been for quite some time. If you know one really well, I doubt that you will gain that much in the process... However, it is always a good idea to look into what plugins are available, since a couple of new plugins can do wonders for productivity. /Johan A: vi is a kitchen knife. vim is a really nice, sharp, balanced chef's knife. Emacs is a light saber. Most of the time, my job requires me to chop vegetables. Occasionally, I have to take on an entire army of robots. I've been using Emacs for 20 years. I'm typing in Emacs right now with a widget called "It's All Text" that lets me suck text in and out of text boxes in Firefox. I can go really fast in Emacs. I am significantly less productive without it. This is highly debatable, but I also think that learning Emacs can teach you a surprising amount about programming. A: Depending on how you code, you may see a productivity increase. For background, I'm also a long-time vim user, but I learned emacs about 2 years ago, and now use them interchangeably. What drove me to the point of actually learning emacs was its useful ability to have a large number of files open at once, and to easily switch between them. I was in the middle of introducing a feature that added and touched a large number of classes. (This was C++, so there were typically two files per class.) Since I was still firming up the interface, I would typically be in the middle of updating one file when I would realize that I needed to change another. With gvim, it was easiest to open a new window for each file, which was starting to get unwieldy. With Emacs, though, it was simple to open a new file in the same window (Ctrl-x, Ctrl-f). Once Emacs has a file open, it's very easy to switch back and forth between the open buffers (Ctrl-x, Ctrl-b). Taking that one step further, a single emacs session may open many windows, so in addition to splitting the window vertically, I could decide, without interrupting work on a file, to open another next to it, letting me effectively work side-by-side while still keeping each window at the default 80-character width. There are still some things that I find easier in vim (e.g. block-select mode, simple macro recording, diff mode), and things that are easier in Emacs (line alignment, file/buffer management, window/screen management).
Therefore, I find myself alternating between the two (and sometimes using both simultaneously), depending on the editing task I anticipate. If you're still unsure, I'd suggest trying it out. Run through the Emacs tutorial and then use it to write code for a morning or a day, leaning heavily on the help. If you still don't like what you see, stay with vim. Regardless of what the editor brings to the table, your familiarity and knowledge of the tool will by far be the most important factor in your productivity. A: Along the same line of not looking for a religious war (but go ahead and downvote me if you feel you must), why do you feel that the only option to vi is emacs? Is it the OS you develop on, or just the options you explored? The Java development landscape enjoys some of the best IDEs these days (both free and paid for), if not the best when it comes to code editing and refactoring support. IntelliJ IDEA even has a vi plugin that can help you feel more at home, for instance (not sure if something similar is available for Eclipse). While changing tools does imply a learning curve, the time spent doing it might be worth it if the leap is big enough. A: How fast do you type? If you hunt and peck, then emacs is not for you. If you're fast, though, it can help not having to grab your mouse all the time. A: Generally, emacs is more powerful than vi. You could do a lot more things in emacs. A: I don't want a holy war, but please answer a highly subjective question with a yes/no answer. Yes, you may see a productivity increase because of the powerful functionality. No, you will not see a productivity increase because the patterns and metaphors used in emacs may not align with your brain. A: The short answer to your question is, "YES". More detail below. I used vi almost exclusively from about 1980 to 1991. The only time I didn't use vi was when I was dealing with a minimal install of Unix that was too small to include vi, so I had to drop back to ed, which is the minimal subset of editing functionality that the original vi was built on top of. From about 1985 on, other programmers where I worked were constantly singing the praises of emacs. But every time I'd try to learn it I wouldn't get very far. I'd spend an hour going through the emacs tutorial (C-h t) and by the end of it all I'd know would be how to insert and modify text and move around the screen. I could do so much more with vi than what I'd learned in that hour with emacs that I couldn't make the switch. Three months later I'd find time to spend another hour and I'd end up going through the same material. Emacs has a Learning Curve with a capital "L". It wasn't until I was doing a contract where everybody else used emacs that I eventually decided I needed to devote more than an hour at a time to learning it. After spending a little over a day doing nothing but working through the tutorial and the included documentation, I finally got to the point where I could do things with emacs that I couldn't with vi. From then on, I've never wanted to go back. I can still type vi commands in my sleep, but I can do so much more with emacs. Understand that I'm comparing emacs and vi, not vim. I've never learned the extensions that vim has added to vi, and it's likely that many of them are features copied from emacs. If so, and if you're already proficient with vim, emacs may not hold as many advantages for you. Among the things I depend on all the time in emacs are: * *When you use emacs, everything's treated as text.
This means that you can manipulate any data in any buffer with pretty much the same commands. And in cases where a buffer's in a mode where some of the standard commands are unavailable, you can copy text to another buffer running in fundamental mode and use the standard commands there. *Emacs provides a multi-"window" environment displayable on a character-cell terminal. In the days before bitmapped graphics and real windows, emacs was written to simulate window-like behavior using nothing but ascii characters and cursor positioning. You're probably thinking, "That's ancient history. Why should anyone care about that today?" I still use that capability every day. I use a webhosting company that allows me SSH access. So I can log into a Linux host across the Internet and run shell commands. While that's pretty powerful, it's far more powerful to be able to divide my terminal emulator up into "windows" using emacs, run shells in several of those "windows", edit files in other windows, and view and edit directories in still other "windows". Actually, when I said "window" in the previous paragraph, I really meant "buffer". Emacs' character cell emulation of windows is a way of dividing up the screen real-estate. An emacs buffer is associated with content (a file, a bash shell, a directory, arbitrary text not associated with a file, etc.) which may or may not currently be displayed. To view what's in a buffer, you pick a window and tell it what buffer you want to see. So you can be working on way more things than you have space on the screen to display. It's roughly analogous to what you do in a modern bitmapped-graphics GUI when you iconify/de-iconify a window. *I've already alluded to the fact that you can run a shell inside an emacs buffer. You can have as many buffers running shells as you like. You can copy and paste text back and forth between a shell buffer and a text file, or compare a portion of text between a shell buffer and a text file using the exact same keystroke sequences you would use to copy text or compare text between two different text files. Actually, this is true for most types of buffers, not just shell buffers and buffers associated with files. *When you use emacs' command to open a file, but what you've selected is actually a directory, the buffer runs in dired (directory editor) mode. In this mode, a single keystroke will open whatever the cursor's currently pointing at, be it a file or subdirectory. A buffer in dired mode is a file manager - a character-cell terminal oriented analog to Finder on the Mac or Windows Explorer. *One of the emacs functions I use almost constantly is "compare-windows". I greatly prefer this to command-line "diff" or GUI comparison tools like what's built in to Eclipse. Diff or Eclipse compare entire files, and show you which lines differ. But what happens when you have two different lines that look very similar? Consider the following: What's the difference between this line and the other? What’s the difference between this line and the other? How long would it take you to spot the difference? (Hint: ASCII and Unicode apostrophe look pretty much alike.) Unlike diff and Eclipse, which just show the lines that differ, emacs' "compare-windows" function is interactive. You position the cursor in each of two side-by-side windows at a point where the window contents are the same. Run "compare-windows", and the cursor in each window will move to the first character that differs.
Reposition the cursor in one of the windows to the point where it's the same as the other window, and rerun "compare-windows" to find the next difference. This makes it easy to compare subportions of files. Another thing I regularly use "compare-windows" for is comparing checksums. Many software projects distribute a tarball of the application on a page that also includes an MD5 hash of the tarball. So, how do you compare the MD5 hash on the distribution page with the MD5 hash computed from the downloaded file? Emacs makes this trivial. First copy the MD5 hash from the webpage into a new emacs buffer. Then, after downloading the .tar.gz file, run: md5sum downloadedfile.tar.gz in a shell buffer. With those two buffers displayed in side-by-side emacs windows, position the cursor in each window at the beginning of the checksum and run "compare-windows". If they're the same, the cursor in each window will be positioned at the end of each checksum. *In the previous point, I gave the example of running "compare-windows" on the lines: What's the difference between this line and the other? What’s the difference between this line and the other? "compare-windows" will leave the cursor positioned on the apostrophe in each line. So, now you know which characters differ. But what characters are they? Type the two-keystroke command CTRL-x =, and emacs will display the character, its ascii value in octal, decimal, and hex, the character offset from the beginning of the file, and the character offset from the beginning of the line. Since ASCII is a 7-bit encoding, all ASCII characters have their high-order bit turned off. Once you see that the value of the first apostrophe is 0x27 and the second one is 0x92, it's obvious that the first one is in the ASCII character set and the second one is not. *Emacs was one of the first IDEs, perhaps the very first one. It has modes for specific languages. I find them handy for imposing consistent indentation on my code to make it more readable. There's also built-in functionality for compiling and debugging code. I don't use the compiling functionality that much because when I was writing for a compiled language like C, I was used to doing that at a shell prompt. The debugging functionality was very nice for C and C++. It integrated gdb with the editor in such a way that you got pretty much the same functionality as the debugging capabilities now in Eclipse, but didn't waste screen real-estate the way modern GUI-based IDEs do. Theoretically the debugger integration should be easy to apply to virtually any other language, but I haven't checked to see what other languages it works with nowadays.
In that guise, Stallman commented, "Sometimes people ask me whether it is a sin in the Church of Emacs to use the other text editor vi. Well, it's true that vi vi vi is the editor of the beast, but using a free version of vi is not a sin, it's a penance." (See http://stallman.org/saint.html. There's also a cute photo of him, but since I'm new to StackOverflow, it won't let me post more than one URL. So go to the same domain, but fetch the file saintignucius.jpg) A: I used Vim for 10 years leading up to delving into Emacs 2 years ago. I have a reasonably fresh recollection of just how my productivity curve modified over time. My points are all conditional, YMMV depending on your strengths and experience. If you have used Unix and the command line long enough that you are familiar with C-a, C-e, C-n, C-p, C-k, C-y, etc. as they function on the shell, it will not take long to transition to using those same bindings (the defaults) in Emacs. I recently discovered that XCode uses these bindings as well. If you are comfortable with an always running editor, tending buffers (like you would browser tabs) and thus living in the application (like you would with Web2.0 apps in the browser), Emacs will likely show immediate productivity enhancements. If you generally work in projects of many related files, this persistence pays some added benefits in maintaining context to that buffer. Each buffer is contexted at its open file, allowing for convenient use of various productivity boosting tools for that project (like grep-find, eshell, run-python and slime). This coupled with text completion, yasnippets, etc. starts to look a tiny fraction like an IDE, although ad-hoc and heavily individualized by your configuration. This is apart from more civilized Emacs IDE-like services like ECB. My productivity took a hit initially as I typed "jjjkkk" constantly Esc-Esc-Esc-Esc for the first week or so. The following week I cautiously started using the right navigation keys. Then I discovered the configuration file... Honestly, if I had had Emacs Starter Kit from the start, I would have said my productivity slowly worked back to parity over the 3rd-4th week, but I did go down the config file rabbit hole. A co-worker of mine, though, has just transitioned from vim to emacs and he just grabbed the Starter Kit and he is on his way. First week in and he seems comfortable and is enjoying all the surprise benefits (that sensation will probably last a decade). Finally, if you make mistakes you will immediately gain productivity (and confidence) from the circular kill/yank-ring and undo-ring. I am also personally a fan of region specific undos. My short answer is Yes, it is worth taking 3-4 weeks of a diminishing productivity-hit to learn Emacs. Even if you decide you prefer a streamlined unix utility combo over Emacs for development, you will derive from it an education widely applicable beyond the editor. A: Emacs documentation is a forest. I came from Emacs to Vim when I realized how organized Vim's documentation is, and how chordable many of the features are. I don't know what lies down the path of an Emacs expert, but I will warn you that learning to do anything useful in it takes a long time, and won't make you any better at nethack. Stick with Vim. Textmate is a better Emacs for Macs, though that won't help you with Solaris. Eclipse is kind of cool, and has a lot of plugins. A: Your productivity will increase if you decide to put the time in to program your text editor.
Of the two editors, emacs presents a better framework for constant customization. If you don't program your text editor, just stay with what is comfortable. A: One good reason to learn Emacs is because other programs use Emacs keybindings too. You can use Emacs keybindings at a bash prompt, for example, or anything else using GNU readline. It's good to learn the basic movement and word/line deletion and undo/redo chords in Emacs so that you can use them in other programs. Your productivity will increase in those other tools even if you never use Emacs again. I know Vim and Emacs, and Vim fits my brain and my habits better. But other people claim the same about Emacs. You never know for yourself unless you try. It doesn't take that long to learn Emacs well enough to see whether you're going to like it. A: Since vi/Vim and Emacs are pretty close in terms of what they can or cannot do, productivity with these two editors comes from experience in using them. In my opinion, being a programmer it won't take you long to get the general idea about Emacs once you start using it. Others can only say so much; you've got to try it out for yourself to know it. As for me, I use both. It's like taking more than one weapon to a war: use the right one in the right circumstances. ;) A: Will my productivity really increase? For the first few days/weeks, absolutely not. After you stop having to read through the tutorial every time you want to edit something - sure.. Emacs is more "powerful" than vim, its scripting engine is far more flexible, and there are far more scripts, modes and the like built around emacs. That said, the opposite is true.. If you spent the same amount of time improving your knowledge of vim, you could be just as productive.. Maybe not productive in the same way - I'd say vim is quicker for editing files, emacs is better at doing everything else (again, I would personally say things like flymake-mode and VCS bindings and such are quicker to use than the vim equivalent) A: I agree with Alan Storm: "because the patterns and metaphors used in Emacs may not align with your brain" This is a very important factor. Different brains adapt differently to different interfaces. Some of the main - and easily available - features I really love Emacs for, and I count as productivity enhancers: 1. "yank-pop" facility - every cut/copy is saved into a stack so you can later choose which to paste (don't know if vi / Vim has this but most Java IDEs don't) 2. the Ctrl-key navigation mapping - this allows you to navigate your file without moving your hands off to use the arrow keys. (key-binding in other editors helps of course) 3. available on almost every platform (true of vi/Vim too of course) - whether GUI- or text-based (Java IDEs are available on most platforms too but only in GUI mode, and are significantly larger and need to be installed separately, whereas Emacs is generally more widely available - BSD / *nix / Linux / Mac systems) 4. I prefer my editor to stay out of the way until I need it - Emacs' spartan display forces me to think before I type. 5. The basic navigation keys in Emacs are kind of universally available - on my Mac OS, I can use these keys in terminal, mac mail, etc. Ultimately, if Emacs' philosophy appeals to you, you will put in the extra effort to learn it. And it will reward you. A: I like Emacs, you can extend it to your needs - in my eyes, any system which you can extend by yourself is award-worthy. A: If you are concerned about the health of your hands choose Vim.
I suffered from a bout of RSI in the past, and I found one of the main culprits was "chording" i.e. holding down many keys at the same time. Emacs uses chording extensively whilst VIM uses single letter commands chained in quick succession. This puts much less strain on your hands as the muscles don't have to twist and contort to perform commands in the editor. Injury due to RSI can ruin your productivity so in your calculations be sure to account for this. A: Disclaimer: I'm ignorant. I've been an emacs user for about 4 years, and a vim user for about 6 months, maybe more like 15 if you count all the times I've tried to learn it and hated it. (The writing vs moving mode distinction kills me. Every time. So if it doesn't kill you then my opinion might be completely worthless.) That said, I think my opinion is actually interestingly different from the 26 others that I've seen on here, so I'm going to voice it. :Disclaimer My opinion: * *Emacs is better for typing, especially large-scale "I'm writing a new feature and it will be a while before I even try to see if it runs". *Vim is better for editing, especially quick edits. When I need to understand and hack in 8 files simultaneously, Emacs' properties as a tiling window manager with multi-buffer (buffers have a 1.2:1 correspondence to files, they're often the same thing, but aren't necessarily) regexp-search (and replace) are incredible. If I don't like some small thing because of git diff in the shell (I don't use emacs' VC features very often, although when I do I love them) I open it with vim and get the hell out faster than I could hit Alt-TAB. The fact that Emacs' editing commands are more readily available while typing makes typing much faster than it is in Vim. Ctrl+a is much faster than ESC ^ i, and you don't have the cognitive load of "do I want a or i or o or O..." which, god, I hate thinking about. And the same goes for all the other movement commands. I type faster, much faster, in Emacs. That means things like Org Mode (which I use for everything: TODO lists, bug tracking, notes, long emails, documentation...) make more sense (to me) in Emacs than they would in Vim. And, Elisp is incredible, even though it sucks. It totally makes up for Emacs' broken regular expressions: you can use the full power of emacs everywhere, including in a multi-file regexp-replacement. And in text snippets. A: I really see no reason to switch. I've used vi for a long time and am quite comfortable with it; about every six months I would install emacs to give it a go, then quickly just switch back. Yes there were things I much preferred about vi, but the main reason I never stuck with it is because the time investment to fully learn another editor when I already know an extremely capable one isn't worth it. I'm reminded of this rather dated study. In my opinion, SLIME is about the only reason to switch to emacs if you're already proficient with vi. A: No I've been using emacs for years, I'm a convert from VIM, and I love it to bits. But any productivity gains from having a better, programmable editor will be totally wiped out by the enormous amount of head-fucking that it takes to get the hang of emacs. It was designed as a console editor, and its idea of interface is not yours. And even when you've got it completely, your extra productivity will mainly be expressed in the extra emacs lisp you can write. Who cares? It's great fun, and lisp is the dogs! If you want to 'get things done', then forget about programming.
You can always hire programmers to 'do' 'things'. The only circumstance under which I'd recommend learning emacs for productivity reasons is if you're a lisp/scheme/clojure programmer. It makes such a good lisp environment that the few seconds it will save you every time you want to do anything will quickly add up to a real gain. And elisp (which stands in relation to lisp as excel macros stand to ALGOL) will seem much less alien if you already use a real lisp. If you do give it a try, use it on a virtual console where it feels more like a sane way to arrange an editor. Only when that makes sense try to use it under a window system, which will fight with it. A: In an earlier answer, Aristotle Pagaltzis wrote: "Vim excels in the small ... You can easily do things in Vim in the normal course of editing that would require you to drop down to scripting in Emacs." I switched to Emacs after over a decade of exclusively using vi, and initially I would have agreed with the claim, "You can easily do things in Vim in the normal course of editing that would require you to drop down to scripting in Emacs." But then I discovered that by using Emacs' macro capability and a large repeat count, I could easily make Emacs do pretty much everything that vi had made easy, and a great deal more. Emacs' macro functionality involves three commands: C-x ( start remembering keystrokes C-x ) stop remembering keystrokes C-x e replay the remembered keystrokes For example, in vi if I wanted to find all <a> tags in an HTML file and add a target attribute, I might do something like the following: :g/^<a/s/>/ target="_blank">/ This example is not perfect, since it assumes that all <a> tags are on a line by themselves. But it's good enough for illustrating how one accomplishes the equivalent task in two different editors. To achieve the same effect easily in emacs, here's what I do: 1. C-x ( 2. M-C-s <a\> 3. C-b 4. C-s > 5. C-b 6. target="_blank" 7. C-x ) 8. C-u 10000 C-x e Here's a description of what each keystroke above does: 1. start remembering keystrokes 2. regex search for <a. Note that the "\>" after the "a" is not HTML. It's emacs regex notation for end-of-word. 3. back up one character - as a side-effect this gets you out of search mode 4. search for the next ">" 5. back up over the ">" 6. enter space as an attribute-delimiter followed by the target="_blank" attribute 7. stop remembering keystrokes 8. replay the remembered keystrokes 10,000 times or until the search fails It looks complicated, but it's actually very easy to type. And you can use this approach to do lots of things that vi can't do, without ever dropping down to Lisp code.
{ "language": "en", "url": "https://stackoverflow.com/questions/48006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56" }
Q: Implications of Instantiating Objects with Dynamic Variables in PHP What are the performance, security, or "other" implications of using the following form to declare a new class instance in PHP <?php $class_name = 'SomeClassName'; $object = new $class_name; ?> This is a contrived example, but I've seen this form used in Factories (OOP) to avoid having a big if/switch statement. Problems that come immediately to mind are * *You lose the ability to pass arguments into a constructor (LIES. Thanks Jeremy) *Smells like eval(), with all the security concerns it brings to the table (but not necessarily the performance concerns?) What other implications are there, or what search engine terms other than "Rank PHP Hackery" can someone use to research this? A: It looks like you can still pass arguments to the constructor, here's my test code: <?php class Test { function __construct($x) { echo $x; } } $class = 'Test'; $object = new $class('test'); // echoes "test" ?> That is what you meant, right? So the only other problem you mentioned and that I can think of is the security of it, but it shouldn't be too difficult to make it secure, and it's obviously a lot more secure than using eval(). A: One of the issues with the resolving at run time is that you make it really hard for the opcode caches (like APC). Still, for now, doing something like you describe in your question is a valid way if you need a certain amount of indirection when instantiating stuff. As long as you don't do something like $classname = 'SomeClassName'; for ($x = 0; $x < 100000; $x++){ $object = new $classname; } you are probably fine :-) (my point being: dynamically looking up a class every now and then doesn't hurt. If you do it often, it will). Also, be sure that $classname can never be set from the outside - you'd want to have some control over what exact class you will be instantiating. A: I would add that you can also instantiate it with a dynamic number of parameters using: <?php $class = "Test"; $args = array('a', 'b'); $ref = new ReflectionClass($class); $instance = $ref->newInstanceArgs($args); ?> But of course you add some more overhead by doing this. About the security issue I don't think it matters much, at least it's nothing compared to eval(). In the worst case the wrong class gets instantiated; of course this is a potential security breach, but much harder to exploit, and it's easy to filter using an array of allowed classes, if you really need user input to define the class name. A: There may indeed be a performance hit for having to resolve the name of the variable before looking up the class definition. But, without declaring classes dynamically you have no real way to do "dynamic" or "meta" programming. You would not be able to write code generation programs or anything like a domain-specific language construct. We use this convention all over the place in some of the core classes of our internal framework to make the URL to controller mappings work. I have also seen it in many commercial open source applications (I'll try and dig for an example and post it). Anyway, the point of my answer is that it seems well worth what is probably a slight performance decrease if it makes for more flexible, dynamic code. The other trade-off that I should mention, though, is that performance aside, it does make the code slightly less obvious and readable unless you are very careful with your variable names. Most code is written once, and re-read and modified many times, so readability is important.
A: Alan, there's nothing wrong with dynamic class initialisation. This technique is also present in the Java language, where one can convert a string to a class using the Class.forName("classname") method. It is also quite handy to defer algorithm complexity to several classes instead of having a list of if-conditions. Dynamic class names are especially well suited in situations where you want your code to remain open for extension without the need for modifications. I myself often use different classes in conjunction with database tables. In one column I keep the class name that will be used to handle the record. This gives me the great power of adding new types of records and handling them in a unique way without changing a single byte in existing code. You shouldn't be concerned about the performance. It has almost no overhead and objects themselves are super fast in PHP. If you need to spawn thousands of identical objects, use the Flyweight design pattern to reduce the memory footprint. Especially, you should not sacrifice your time as a developer just to save milliseconds on the server. Also, op-code optimisers work seamlessly with this technique. Scripts compiled with Zend Optimizer did not misbehave. A: So I've recently encountered this, and wanted to give my thoughts on the "other" implications of using dynamic instantiation. For one thing, func_get_args() throws a bit of a wrench into things. For example, I want to create a method that acts as a constructor for a specific class (e.g. a factory method). I'd need to be able to pass along the params passed to my factory method to the constructor of the class I'm instantiating. If you do: public function myFactoryMethod() { $class = 'SomeClass'; // e.g. you'd get this from a switch statement $obj = new $class( func_get_args() ); return $obj; } and then call: $factory->myFactoryMethod('foo','bar'); You're actually passing an array as the first/only param, which is the same as new SomeClass( array( 'foo', 'bar' ) ) This is obviously not what we want. The solution (as noted by @Seldaek) requires us to convert the array into params of a constructor: public function myFactoryMethod() { $class = 'SomeClass'; // e.g. you'd get this from a switch statement $ref = new ReflectionClass( $class ); $obj = $ref->newInstanceArgs( func_get_args() ); return $obj; } Note: This could not be accomplished using call_user_func_array, because you can't use this approach to instantiate new objects. HTH! A: I use dynamic instantiation in my custom framework. My application controller needs to instantiate a sub-controller based on the request, and it would be simply ridiculous to use a gigantic, ever-changing switch statement to manage the loading of those controllers. As a result, I can add controller after controller to my application without having to modify the app controller to call them. As long as my URIs adhere to the conventions of my framework, the app controller can use them without having to know anything until runtime. I'm using this framework in a production shopping cart application right now, and the performance is quite favorable, too. That being said, I'm only using the dynamic class selection in one or two spots in the whole app. I wonder in what circumstances you would need to use it frequently, and whether or not those situations are ones that are suffering from a programmer's desire to over-abstract the application (I've been guilty of this before).
A: One problem is that you can't address static members like that, for instance <?php $className = 'ClassName'; $className::someStaticMethod(); //doesn't work ?> (Note: this limitation applied to the PHP versions current at the time; as of PHP 5.3, calling a static method through a variable class name like this does work.) A: @coldFlame: IIRC you can use call_user_func(array($className, 'someStaticMethod')) and call_user_func_array() to pass params A: class Test { function testExt() { print 'hello from testExt :P'; } function test2Ext() { print 'hi from test2Ext :)'; } } $class = 'Test'; $method_1 = "testExt"; $method_2 = "test2Ext"; $object = new $class(); // instantiates Test dynamically $object->{$method_2}(); // will print 'hi from test2Ext :)' $object->{$method_1}(); // will print 'hello from testExt :P' this trick works in both php4 and php5 :D enjoy..
{ "language": "en", "url": "https://stackoverflow.com/questions/48009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How do the CakePHP and codeigniter frameworks compare to the ASP.NET MVC framework? As a classic ASP developer, about once a year since ASP.NET came out I decide I really gotta buckle down and learn this fancy new ASP.NET. A few days in, messing with code-behinds and webforms and all this other stuff, I decide the new fancy stuff is whack and go find something else to learn (PHP and Ruby and Python were all fun to play with but I couldn't use them much with my existing ASP stuff). Anyway, one project came up and I was able to use PHP and CakePHP, and after getting my head around MVC I finally found something I liked and felt it was worth using over ASP (PHP is cool too but it feels a lot like ASP so maybe that's why I like it so much). But now with Jeff and the SO team raving about ASP.NET MVC, I think it's about time I start messing with ASP.NET again, but I keep thinking that PHP is free and blah blah blah . . . is ASP.NET MVC that much better than PHP with tools like CakePHP? I know about compiled vs. not compiled and speed issues, but most of that seems like a non-issue when you factor in all the caching and the fact that you can compile your PHP if you want. A: For a classic ASP developer moving to ASP.NET MVC you are looking at learning a new language (C# or VB.NET), a new database layer (ADO.NET), and a new framework (ASP.NET MVC). That's a lot of new technologies to wrap your head around all at once. Also, I don't think it is so much that ASP.NET MVC is so much better than CakePHP (or Code Igniter, Ruby on Rails, etc.) The great thing about ASP.NET MVC (and other ASP.NET-based technologies such as MonoRail http://www.castleproject.org/monorail/index.html) is that developers who are using ASP.NET now have the option of following the MVC pattern using tools and languages they are familiar with. That is an option that wasn't available before. A: Not too experienced with Microsoft's web stack, so I can't speak to that. But I will say that as a web developer I was pretty disappointed by CakePHP. What especially bothers me about it is that because it forces itself to be backward compatible with PHP4, it lacks much of the OOP design and structure I am used to. Everything ends up being array based instead of state held in objects. Personally, after spending some time with Cake and being disappointed, I decided to suck it up and learn Ruby on Rails, which I am doing now. If you wanted to stay with PHP I would look at Symfony, but they are all really heavily inspired by Rails. A: ASP.NET MVC is sparsely documented at present -- and of course it depends on your background. If you don't know ASP.NET yet, I wouldn't recommend jumping into it with ASP.NET MVC, too many layers of learning at once.
{ "language": "en", "url": "https://stackoverflow.com/questions/48012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What is a jump table? Can someone explain the mechanics of a jump table and why it would be needed in embedded systems? A: A jump table can be either an array of pointers to functions or an array of machine code jump instructions. If you have a relatively static set of functions (such as system calls or virtual functions for a class) then you can create this table once and call the functions using a simple index into the array. This would mean retrieving the pointer and calling a function or jumping to the machine code depending on the type of table used. The benefits of doing this in embedded programming are: * *Indexes are more memory efficient than machine code or pointers, so there is a potential for memory savings in constrained environments. *For any particular function the index will remain stable and changing the function merely requires swapping out the function pointer. It does cost you a tiny bit of performance for accessing the table, but this is no worse than any other virtual function call. A: Jump tables are commonly (but not exclusively) used in finite state machines to make them data driven. Instead of nested switch/case switch (state) { case A: switch (event) { case e1: .... case e2: .... } case B: switch (event) { case e3: .... case e1: .... } } you can make a 2d array of function pointers and just call handleEvent[state][event] A: A jump table, also known as a branch table, is a series of instructions, all unconditionally branching to another point in code. You can think of them as a switch (or select) statement where all the cases are filled: void MyJump(int state) { switch(state) { case 0: goto func0label; case 1: goto func1label; case 2: goto func2label; } } Note that there's no return - the code that it jumps to will execute the return, and it will jump back to wherever MyJump was called. This is useful for state machines where you execute certain code based on the state variable. There are many, many other uses, but this is one of the main uses. It's used where you don't want to waste time fiddling with the stack, and want to save code space. It is especially of use in interrupt handlers where speed is extremely important, and the peripheral that caused the interrupt is only known by a single variable. This is similar to the vector table in processors with interrupt controllers. One use would be taking a $0.60 microcontroller and generating a composite (TV) signal for video applications. The micro isn't powerful - in fact it's just barely fast enough to write each scan line. A jump table would be used to draw characters, because it would take too long to load a bitmap from memory and use a for() loop to shove the bitmap out. Instead there's a separate jump to the letter and scan line, and then 8 or so instructions that actually write the data directly to the port. -Adam A: From Wikipedia: In computer programming, a branch table (sometimes known as a jump table) is a term used to describe an efficient method of transferring program control (branching) to another part of a program (or a different program that may have been dynamically loaded) using a table of branch instructions. The branch table construction is commonly used when programming in assembly language but may also be generated by a compiler. A branch table consists of a serial list of unconditional branch instructions that is branched into using an offset created by multiplying a sequential index by the instruction length (the number of bytes in memory occupied by each branch instruction).
It makes use of the fact that machine code instructions for branching have a fixed length and can be executed extremely efficiently by most hardware, and is most useful when dealing with raw data values that may be easily converted to sequential index values. Given such data, a branch table can be extremely efficient; it usually consists of the following steps: optionally validating the input data to ensure it is acceptable; transforming the data into an offset into the branch table, which usually involves multiplying or shifting it to take into account the instruction length; and branching to an address made up of the base of the table and the generated offset: this often involves an addition of the offset onto the program counter register.
A: A jump table is described here, but briefly, it's an array of addresses the CPU should jump to based on certain conditions. As an example, a C switch statement is often implemented as a jump table where each jump entry will go to a particular "case" label. In embedded systems, where memory usage is at a premium, many constructs are better served by using a jump table instead of more memory-intensive methods (like a massive if-else-if).
A: Wikipedia sums it up pretty well: In computer programming, a branch table (sometimes known as a jump table) is a term used to describe an efficient method of transferring program control (branching) to another part of a program (or a different program that may have been dynamically loaded) using a table of branch instructions. The branch table construction is commonly used when programming in assembly language but may also be generated by a compiler. ... Use of branch tables and other raw data encoding was common in the early days of computing when memory was expensive, CPUs were slower and compact data representation and efficient choice of alternatives were important. Nowadays, they are commonly used in embedded programming and operating system development. In other words, it's a useful construct to use when your system is extremely memory and/or CPU limited, as is often the case in an embedded platform.
A: Jump tables, more often known as a branch table, are usually used only by the machine. The compiler creates a list of all labels in an assembly program and links each label to a memory location. A jump table is pretty much a reference card to where a function or variable (or whatever the label may be) is stored in memory. So as a function executes, on finishing it jumps back to its previous memory location or jumps to the next function, etc. And if you're talking about what I think you are, you don't just need them in embedded systems but in any type of compiled/interpreted environment.
Brian Gianforcaro
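As a concrete illustration of the table-of-function-pointers idea discussed in this thread, here is a minimal sketch in C#, where an array of delegates plays the role that an array of function pointers plays in C. The state names and handlers are made up for the example; this is an illustration of the dispatch pattern, not production embedded code:

using System;

static class JumpTableDemo
{
    static void HandleIdle()    => Console.WriteLine("idle");
    static void HandleRunning() => Console.WriteLine("running");
    static void HandleFault()   => Console.WriteLine("fault");

    // The jump table itself: one entry per state, indexed directly.
    static readonly Action[] Handlers = { HandleIdle, HandleRunning, HandleFault };

    static void Dispatch(int state)
    {
        // Validate the index first, then make a single indexed call
        // instead of walking an if/else-if chain or a switch.
        if (state >= 0 && state < Handlers.Length)
            Handlers[state]();
    }

    static void Main() => Dispatch(1); // prints "running"
}

The same shape extends to the state machine answer above by using a two-dimensional array of delegates indexed as handlers[state][event].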
{ "language": "en", "url": "https://stackoverflow.com/questions/48017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "60" }
Q: Hbase / Hadoop Query Help I'm working on a project with a friend that will utilize Hbase to store its data. Are there any good query examples? I seem to be writing a ton of Java code to iterate through lists of RowResults when, in SQL land, I could write a simple query. Am I missing something? Or is Hbase missing something?
A: I think you, like many of us, are making the mistake of treating bigtable and HBase like just another RDBMS when it's actually a column-oriented storage model meant for efficiently storing and retrieving large sets of sparse data. This means storing, ideally, many-to-one relationships within a single row, for example. Your queries should return very few rows but contain (potentially) many datapoints. Perhaps if you told us more about what you were trying to store, we could help you design your schema to match the bigtable/HBase way of doing things. For a good rundown of what HBase does differently than a "traditional" RDBMS, check out this awesome article: Matching Impedance: When to use HBase by Bryan Duxbury.
A: If you want to access HBase using a query language and a JDBC driver, it is possible. Paul Ambrose has released a library called HBQL at hbql.com that will help you do this. I've used it for a couple of projects and it works well. You obviously won't have access to full SQL, but it does make it a little easier to use.
A: I looked at Hadoop and Hbase and, as Sean said, I soon realised it didn't give me what I actually wanted, which was a clustered JDBC-compliant database. I think you could be better off using something like C-JDBC or HA-JDBC, which seem more like what I was after. (Personally, I haven't got farther with either of these other than reading the documentation, so I can't tell which of them is any good, if any.)
A: I'd recommend taking a look at the Apache Hive project, which is similar to HBase (in the sense that it's a distributed database) and implements a SQL-esque language.
A: Thanks for the reply Sean, and sorry for my late response. I often make the mistake of treating HBase like an RDBMS. So often in fact that I've had to re-write code because of it! It's such a hard thing to unlearn. Right now we have only 4 tables, which, in this case, is very few considering my background. I was just hoping to use some RDBMS functionality while mostly sticking to the column-oriented storage model.
A: Glad to hear you guys are using HBase! I'm not an expert by any stretch of the imagination, but here are a couple of things that might help.
* *HBase is based on / inspired by BigTable, which happens to be exposed by AppEngine as their db api, so browsing their docs should help a great deal if you're working on a webapp.
* *If you're not working on a webapp, the kind of iterating you're describing is usually handled via map/reduce (don't emit the values you don't want). Skipping over values using iterators virtually guarantees your application will have bottlenecks with HBase-sized data sets. If you find you're still thinking in SQL, check out cloudera's pig tutorial and hive tutorial.
* *Basically the whole HBase/SQL mental difference (for non-webapps) boils down to "Send the computation to the data, don't send the data to the computation" -- if you keep that in mind while you're coding you'll do fine :-)
Regards, David
{ "language": "en", "url": "https://stackoverflow.com/questions/48041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Is there any alternative to using % (modulus) in C/C++? I read somewhere once that the modulus operator is inefficient on small embedded devices like 8-bit micro-controllers that do not have an integer division instruction. Perhaps someone can confirm this, but I thought the difference is 5-10 times slower than with an integer division operation. Is there another way to do this other than keeping a counter variable and manually overflowing to 0 at the mod point?

const int FIZZ = 6;
for(int x = 0; x < MAXCOUNT; x++)
{
    if(!(x % FIZZ))
        print("Fizz\n"); // slow on some systems
}

vs: The way I am currently doing it:

const int FIZZ = 6;
int fizzcount = 1;
for(int x = 1; x < MAXCOUNT; x++)
{
    if(fizzcount >= FIZZ)
    {
        print("Fizz\n");
        fizzcount = 0;
    }
}

A: There is an overhead most of the time in using a modulus that is not a power of 2. This is regardless of the processor, as (AFAIK) even processors with modulus operators are a few cycles slower for a divide as opposed to mask operations. For most cases this is not an optimisation that is worth considering, and certainly not worth calculating your own shortcut operation (especially if it still involves divide or multiply). However, one rule of thumb is to select array sizes etc. to be powers of 2. So if calculating day of week, may as well use %7 regardless. If setting up a circular buffer of around 100 entries... why not make it 128. You can then write % 128 and most (all) compilers will make this & 0x7F.
A: Ah, the joys of bitwise arithmetic. A side effect of many division routines is the modulus - so in few cases should division actually be faster than modulus. I'm interested to see the source you got this information from. Processors with multipliers have interesting division routines using the multiplier, but you can get from division result to modulus with just another two steps (multiply and subtract) so it's still comparable. If the processor has a built-in division routine you'll likely see it also provides the remainder.
Still, there is a small branch of number theory devoted to Modular Arithmetic which requires study if you really want to understand how to optimize a modulus operation. Modular arithmetic, for instance, is very handy for generating magic squares.
So, in that vein, here's a very low level look at the math of modulus for an example of x, which should show you how simple it can be compared to division:
Maybe a better way to think about the problem is in terms of number bases and modulo arithmetic. For example, your goal is to compute DOW mod 7 where DOW is the 16-bit representation of the day of the week. You can write this as:

DOW = DOW_HI*256 + DOW_LO

DOW%7 = (DOW_HI*256 + DOW_LO) % 7
      = ((DOW_HI*256)%7  + (DOW_LO % 7)) %7
      = ((DOW_HI%7 * 256%7)  + (DOW_LO%7)) %7
      = ((DOW_HI%7 * 4)  + (DOW_LO%7)) %7

Expressed in this manner, you can separately compute the modulo 7 result for the high and low bytes. Multiply the result for the high by 4 and add it to the low and then finally compute result modulo 7.
Computing the mod 7 result of an 8-bit number can be performed in a similar fashion. You can write an 8-bit number in octal like so:

X = a*64 + b*8 + c

Where a, b, and c are 3-bit numbers.

X%7 = ((a%7)*(64%7) + (b%7)*(8%7) + c%7) % 7
    = (a%7 + b%7 + c%7) % 7
    = (a + b + c) % 7

since 64%7 = 8%7 = 1.
Of course, a, b, and c are

c = X & 7
b = (X>>3) & 7
a = (X>>6) & 7  // (actually, a is only 2-bits).

The largest possible value for a+b+c is 7+7+3 = 17. So, you'll need one more octal step.
The complete (untested) C version could be written like:

unsigned char Mod7Byte(unsigned char X)
{
    X = (X&7) + ((X>>3)&7) + (X>>6);
    X = (X&7) + (X>>3);
    return X==7 ? 0 : X;
}

I spent a few moments writing a PIC version. The actual implementation is slightly different than described above.

Mod7Byte:
       movwf  temp1        ;
       andlw  7            ;W=c
       movwf  temp2        ;temp2=c
       rlncf  temp1,F      ;
       swapf  temp1,W      ;W= a*8+b
       andlw  0x1F
       addwf  temp2,W      ;W= a*8+b+c
       movwf  temp2        ;temp2 is now a 6-bit number
       andlw  0x38         ;get the high 3 bits == a'
       xorwf  temp2,F      ;temp2 now has the 3 low bits == b'
       rlncf  WREG,F       ;shift the high bits right 4
       swapf  WREG,F       ;
       addwf  temp2,W      ;W = a' + b'
       ; at this point, W is between 0 and 10
       addlw  -7
       bc     Mod7Byte_L2
Mod7Byte_L1:
       addlw  7
Mod7Byte_L2:
       return

Here's a little routine to test the algorithm:

       clrf   x
       clrf   count
TestLoop:
       movf   x,W
       RCALL  Mod7Byte
       cpfseq count
       bra    fail
       incf   count,W
       xorlw  7
       skpz
       xorlw  7
       movwf  count
       incfsz x,F
       bra    TestLoop
passed:

Finally, for the 16-bit result (which I have not tested), you could write:

uint16 Mod7Word(uint16 X)
{
    return Mod7Byte(Mod7Byte(X & 0xff) + Mod7Byte(X>>8)*4);
}

Scott
A: Unless you really need high performance on multiple embedded platforms, don't change how you code for performance reasons until you profile! Code that's written awkwardly to optimize for performance is hard to debug and hard to maintain. Write a test case, and profile it on your target. Once you know the actual cost of modulus, then decide if the alternate solution is worth coding.
A: If you are calculating a number mod some power of two, you can use the bit-wise and operator. Just subtract one from the second number. For example:

x % 8 == x & 7
x % 256 == x & 255

A few caveats:
* *This only works if the second number is a power of two.
* *It's only equivalent if the modulus is always positive. The C and C++ standards don't specify the sign of the modulus when the first number is negative (until C++11, which does guarantee it will be negative, which is what most compilers were already doing). A bit-wise and gets rid of the sign bit, so it will always be positive (i.e. it's a true modulus, not a remainder). It sounds like that's what you want anyway though.
* *Your compiler probably already does this when it can, so in most cases it's not worth doing it manually.
A: @Matthew is right. Try this:

int main() {
    int i;
    for(i = 0; i<=1024; i++) {
        if (!(i & 0xFF)) printf("& i = %d\n", i);
        if (!(i % 0x100)) printf("mod i = %d\n", i);
    }
}

A: x%y == (x-(x/y)*y)
Hope this helps.
A: In the embedded world, the "modulus" operations you need to do are often the ones that break down nicely into bit operations that you can do with &, | and sometimes >>.
A: Do you have access to any programmable hardware on the embedded device? Like counters and such? If so, you might be able to write a hardware-based mod unit, instead of using the simulated %. (I did that once in VHDL. Not sure if I still have the code though.)
Mind you, you did say that division was 5-10 times faster. Have you considered doing a division, multiplication, and subtraction to simulate the mod? (Edit: Misunderstood the original post. I did think it was odd that division was faster than mod; they are the same operation.)
In your specific case, though, you are checking for a mod of 6. 6 = 2*3. So you could MAYBE get some small gains if you first checked if the least significant bit was a 0. Something like:

if((!(x & 1)) && !(x % 3))
{
    print("Fizz\n");
}

If you do that, though, I'd recommend confirming that you get any gains, yay for profilers.
And doing some commenting. I'd feel bad for the next guy who has to look at the code otherwise.
A: You should really check the embedded device you need. All the assembly languages I have seen (x86, 68000) implement the modulus using a division. Actually, the division assembly operation returns the result of the division and the remainder in two different registers.
A: Not that this is necessarily better, but you could have an inner loop which always goes up to FIZZ, and an outer loop which repeats it all some certain number of times. You've then perhaps got to special-case the final few steps if MAXCOUNT is not evenly divisible by FIZZ. That said, I'd suggest doing some research and performance profiling on your intended platforms to get a clear idea of the performance constraints you're under. There may be much more productive places to spend your optimisation effort.
A: @Jeff V: I see a problem with it! (Beyond that your original code was looking for a mod 6 and now you are essentially looking for a mod 8.) You keep doing an extra +1! Hopefully your compiler optimizes that away, but why not just test start at 2 and go to MAXCOUNT inclusive? Finally, you are returning true every time that (x+1) is NOT divisible by 8. Is that what you want? (I assume it is, but just want to confirm.)
A: For modulo 6 you can change the Python code to C/C++:

def mod6(number):
    while number > 7:
        number = (number >> 3 << 1) + (number & 0x7)
    if number > 5:
        number -= 6
    return number

A: The print statement will take orders of magnitude longer than even the slowest implementation of the modulus operator. So basically the comment "slow on some systems" should be "slow on all systems". Also, the two code snippets provided don't do the same thing. In the second one, the line if(fizzcount >= FIZZ) is always false (fizzcount is never incremented), so "Fizz\n" is never printed.
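As a footnote to that last answer: the counter approach from the question does work once the missing increment is restored. Here is a sketch (written in C#, though the logic is identical in C; MAXCOUNT is given a value only so the snippet is runnable):

using System;

const int FIZZ = 6;
const int MAXCOUNT = 24;

int fizzcount = FIZZ; // start "full" so the first iteration (x == 0) fires,
                      // matching the !(x % FIZZ) test in the first snippet
for (int x = 0; x < MAXCOUNT; x++)
{
    if (fizzcount >= FIZZ)
    {
        Console.Write("Fizz\n"); // fires at x = 0, 6, 12, 18, same as !(x % FIZZ)
        fizzcount = 0;
    }
    fizzcount++; // this increment is what the question's second snippet omits
}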
{ "language": "en", "url": "https://stackoverflow.com/questions/48053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Moving ViewState out of the page? We are trying to lighten our page load as much as possible. Since ViewState can sometimes swell up to 100k of the page, I'd love to completely eliminate it. I'd love to hear some techniques other people have used to move ViewState to a custom provider. That said, a few caveats:
* *We serve on average 2 Million unique visitors per hour.
* *Because of this, Database reads have been a serious issue in performance, so I don't want to store ViewState in the database.
* *We also are behind a load balancer, so any solution has to work with the user bouncing from machine to machine per postback.
Ideas?
A: How do you handle Session State? There is a built-in "store the viewstate in the session state" provider. If you are storing the session state in some fast, out-of-proc system, that might be the best option for the viewstate.
edit: to do this, add the following code to your Page classes / global page base class:

protected override PageStatePersister PageStatePersister
{
    get { return new SessionPageStatePersister(this); }
}

Also... this is by no means a perfect (or even good) solution to a large viewstate. As always, minimize the size of the viewstate as much as possible. However, the SessionPageStatePersister is relatively intelligent and avoids storing an unbounded number of viewstates per session, as well as avoids storing only a single viewstate per session.
A: I have tested many ways to remove the load of view state from the page, and between all hacks and some software out there, the only thing that is truly scalable is the StrangeLoops As10000 appliance. Transparent, no need to change the underlying application.
A: As previously stated, I have used the database to store the ViewState in the past. Although this works for us, we don't come close to 2 million unique visitors per hour. I think a hardware solution is definitely the way to go, whether using the StrangeLoop products or another product.
A: The following works quite well for me:

string vsid;

protected override object LoadPageStateFromPersistenceMedium()
{
    Pair vs = base.LoadPageStateFromPersistenceMedium() as Pair;
    vsid = vs.First as string;
    object result = Session[vsid];
    Session.Remove(vsid);
    return result;
}

protected override void SavePageStateToPersistenceMedium(object state)
{
    if (vsid == null)
    {
        vsid = Guid.NewGuid().ToString();
    }
    Session[vsid] = state;
    base.SavePageStateToPersistenceMedium(new Pair(vsid, null));
}

A: You can always compress ViewState so you get the benefits of ViewState without so much bloat:

public partial class _Default : System.Web.UI.Page
{
    protected override object LoadPageStateFromPersistenceMedium()
    {
        string viewState = Request.Form["__VSTATE"];
        byte[] bytes = Convert.FromBase64String(viewState);
        bytes = Compressor.Decompress(bytes);
        LosFormatter formatter = new LosFormatter();
        return formatter.Deserialize(Convert.ToBase64String(bytes));
    }

    protected override void SavePageStateToPersistenceMedium(object viewState)
    {
        LosFormatter formatter = new LosFormatter();
        StringWriter writer = new StringWriter();
        formatter.Serialize(writer, viewState);
        string viewStateString = writer.ToString();
        byte[] bytes = Convert.FromBase64String(viewStateString);
        bytes = Compressor.Compress(bytes);
        ClientScript.RegisterHiddenField("__VSTATE", Convert.ToBase64String(bytes));
    }

    // ...
}

using System.IO;
using System.IO.Compression;

public static class Compressor
{
    public static byte[] Compress(byte[] data)
    {
        MemoryStream output = new MemoryStream();
        GZipStream gzip = new GZipStream(output, CompressionMode.Compress, true);
        gzip.Write(data, 0, data.Length);
        gzip.Close();
        return output.ToArray();
    }

    public static byte[] Decompress(byte[] data)
    {
        MemoryStream input = new MemoryStream();
        input.Write(data, 0, data.Length);
        input.Position = 0;
        GZipStream gzip = new GZipStream(input, CompressionMode.Decompress, true);
        MemoryStream output = new MemoryStream();
        byte[] buff = new byte[64];
        int read = -1;
        read = gzip.Read(buff, 0, buff.Length);
        while(read > 0)
        {
            output.Write(buff, 0, read);
            read = gzip.Read(buff, 0, buff.Length);
        }
        gzip.Close();
        return output.ToArray();
    }
}

A: Due to the typical organizational bloat, requesting new hardware takes eons, and requesting hardware that would involve a complete rewire of our current setup would probably get some severe resistance from the engineering department. I really need to come up with a software solution, because that's the only world I have some control over. Yay for Enterprise :(
A: I've tried to find some of the products I had researched in the past that work just like StrangeLoops (but software based). It looks like they all went out of business; the only thing from my list that is still up there is ScaleOut, but they are specialized in session state caching. I understand how hard it is to sell hardware solutions to senior management, but it is always a good idea to at least get management to accept listening to the hardware's sales rep. I would much rather put in some hardware that presents me with an immediate solution, because it allows me (or buys me some time) to get some other real job done. I understand, it really sucks, but the alternative is to change your code for optimization, and that would maybe cost a lot more than getting an appliance. Let me know if you find another software-based solution.
A: I'm going to see if I can come up with a way to leverage our current State server to contain the viewstate in memory; I should be able to use the user session ID to keep things synched up between machines. If I come up with a good solution, I'll remove any IP-protected code and put it out for public use.
A: Oh no, red tape. Well this is going to be a tall order to fill. You mentioned here that you use a state server to serve your session state. How do you have this setup? Maybe you can do something similar here also?
Edit Awh @Jonathan, you posted while I was typing this answer up. I think going that route could be promising. One thing is that it will definitely be memory intensive.
@Mike I don't think storing it in the session information will be a good idea, due to the memory intensiveness of viewstate and also how many times you will need to access the viewstate. SessionState is accessed a lot less often than the viewstate. I would keep the two separate.
I think the ultimate solution would be storing the ViewState on the client somehow, and maybe worth looking at. With Google Gears, this could be possible now.
A: Have you considered if you really need all that viewstate? For example, if you populate a datagrid from a database, all the data will be saved in viewstate by default. However, if the grid is just for presenting data, you don't need a form at all, and hence no viewstate.
You only need viewstate when there is some interaction with the user through postbacks, and even then the actual form data may be sufficient to recreate the view. You can selectively disable viewstate for controls on the page (see the sketch at the end of this thread). You have a very special UI if you actually need 100K of viewstate. If you reduce the viewstate to what is absolutely necessary, it might turn out to be the easiest and most scalable approach to keep the viewstate in the page.
A: I might have a simple solution for you in another post. It's a simple class to include in your app and a few lines of code in the asp.net page itself. If you combine it with a distributed caching system you could save a lot of dough, as viewstate is large and costly. Microsoft's Velocity might be a good product to attach this method to. If you do use it and save a ton of money, though, I'd love a little mention for that. Also, if you are unsure of anything, let me know and I can talk with you in person. Here is the link to my code. If you are concerned with scaling, then using the session token as a unique identifier, or storing the state in session, is more or less guaranteed to work in a web farm scenario.
A: Store the viewstate in a session object and use a distributed cache or state service to store the session separate from the web servers, such as Microsoft's Velocity.
A: I know this is a little stale, but I've been working for a couple of days on an open-source "virtual appliance" using squid and eCAP to:
1.) gzip
2.) handle ssl
3.) replace viewstate with a token on request / response
4.) memcache for object caching
Anyways, it looks pretty promising. Basically it would sit in front of the load balancers and should really help client performance. It doesn't seem to be very hard to set up either.
A: I blogged on this a while ago - the solution is at http://www.adverseconditionals.com/2008/06/storing-viewstate-in-memcached-ultimate.html
This lets you change the ViewState provider to one of your choice without having to change each of your Page classes, by using a custom PageAdapter. I stored the ViewState in memcached. In retrospect I think storing it in a database or on disk is better - we filled memcached up very quickly. It's a very low friction solution.
A: No need to buy or sell anything to eliminate viewstate bloating. Just extend the HiddenFieldPageStatePersister. The 100-200KB of ViewState will stay on the server, and only a 62-byte token will be sent on the page instead. Here is a detailed article on how this can be done:
http://ashishnangla.com/2011/07/21/reducing-size-of-viewstate-in-asp-net-webforms-by-writing-a-custom-viewstate-provider-pagestatepersister-part-12/
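To make the "selectively disable viewstate" suggestion above concrete, here is a minimal sketch. The control names are hypothetical; whether a control can live without viewstate depends on it being rebound from its data source on every request:

// In the page's code-behind: turn off viewstate for presentation-only
// controls so they contribute nothing to the __VIEWSTATE field.
protected void Page_Init(object sender, EventArgs e)
{
    resultsGrid.EnableViewState = false;   // hypothetical grid, rebound on every request
    footerLabel.EnableViewState = false;   // hypothetical static label
}

The same property can also be set declaratively in the markup with EnableViewState="false" on the control tag.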
{ "language": "en", "url": "https://stackoverflow.com/questions/48070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Is there a .NET Control Similar to the Access 2007 Split Form? Or has anyone built such a control? I upgraded a small personal Name and Address DB to Access 2007 and noticed the Form had a property called "Default View" which can be set to "Split Form". "Split Form" mode has a GridView and a Form together in one control. When a record is clicked in the GridView, it shows up in the form, and it can be edited in either the GridView and/or the Form by setting some properties. Pretty slick.
A: Not that I know of, but pretty much all you need is:
* *a split container
* *a user control containing your actual form
* *a grid view
Now you just hook up the grid view's item selection events with a controller that loads data into the user control's child controls (a rough sketch of this wiring is given at the end of this thread). From what I can tell, the Access Split Form doesn't do a lot more.
A: A Flex Grid Control will probably do what you want, but you will need to write a bit of VBA code; have a look at my Flex Grid Demo program for some examples. Go to http://www.rogersaccesslibrary.com/forum/forum_posts.asp?TID=180
HTH Peter Hibbs.
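For illustration, here is a rough WinForms sketch of the wiring described in the first answer. A shared BindingSource keeps the grid and the detail field pointing at the same current record, which is most of what the Access split form does. The data source and the "Name" column are assumptions made for the example:

using System;
using System.Data;
using System.Windows.Forms;

public class SplitFormDemo : Form
{
    public SplitFormDemo(DataTable people) // assumed to contain a "Name" column
    {
        // One BindingSource shared by both views: selecting a grid row moves
        // the current record, and edits in either view hit the same data.
        var binding = new BindingSource { DataSource = people };

        var split = new SplitContainer
        {
            Dock = DockStyle.Fill,
            Orientation = Orientation.Horizontal
        };

        var grid = new DataGridView { Dock = DockStyle.Fill, DataSource = binding };
        split.Panel1.Controls.Add(grid);

        var nameBox = new TextBox { Left = 100, Top = 10, Width = 200 };
        nameBox.DataBindings.Add("Text", binding, "Name"); // detail field follows the grid
        split.Panel2.Controls.Add(new Label { Text = "Name:", Left = 10, Top = 13, AutoSize = true });
        split.Panel2.Controls.Add(nameBox);

        Controls.Add(split);
    }
}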
{ "language": "en", "url": "https://stackoverflow.com/questions/48083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Select N random elements from a List in C# I need a quick algorithm to select 5 random elements from a generic list. For example, I'd like to get 5 random elements from a List<string>.
A: From Dragons in the Algorithm, an interpretation in C#:

int k = 10; // items to select
var items = new List<int>(new[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 });
var selected = new List<int>();
double needed = k;
double available = items.Count;
var rand = new Random();
while (selected.Count < k)
{
    if (rand.NextDouble() < needed / available)
    {
        selected.Add(items[(int)available - 1]);
        needed--;
    }
    available--;
}

This algorithm will select unique indices of the items list.
A: Was thinking about the comment by @JohnShedletsky on the accepted answer regarding (paraphrase):
you should be able to do this in O(subset.Length), rather than O(originalList.Length)
Basically, you should be able to generate subset random indices and then pluck them from the original list.
The Method

public static class EnumerableExtensions
{
    public static Random randomizer = new Random(); // you'd ideally be able to replace this with whatever makes you comfortable

    public static IEnumerable<T> GetRandom<T>(this IEnumerable<T> list, int numItems)
    {
        return (list as T[] ?? list.ToArray()).GetRandom(numItems);

        // because ReSharper whined about duplicate enumeration...
        /*
        items.Add(list.ElementAt(randomizer.Next(list.Count())))
        numItems--;
        */
    }

    // just because the parentheses were getting confusing
    public static IEnumerable<T> GetRandom<T>(this T[] list, int numItems)
    {
        var items = new HashSet<T>(); // don't want to add the same item twice; otherwise use a list
        while (numItems > 0)
            // if we successfully added it, move on
            if (items.Add(list[randomizer.Next(list.Length)]))
                numItems--;
        return items;
    }

    // and because it's really fun; note -- you may get repetition
    public static IEnumerable<T> PluckRandomly<T>(this IEnumerable<T> list)
    {
        while (true)
            yield return list.ElementAt(randomizer.Next(list.Count()));
    }
}

If you wanted to be even more efficient, you would probably use a HashSet of the indices, not the actual list elements (in case you've got complex types or expensive comparisons).
The Unit Test
And to make sure we don't have any collisions, etc.

[TestClass]
public class RandomizingTests : UnitTestBase
{
    [TestMethod]
    public void GetRandomFromList()
    {
        this.testGetRandomFromList((list, num) => list.GetRandom(num));
    }

    [TestMethod]
    public void PluckRandomly()
    {
        this.testGetRandomFromList((list, num) => list.PluckRandomly().Take(num), requireDistinct: false);
    }

    private void testGetRandomFromList(Func<IEnumerable<int>, int, IEnumerable<int>> methodToGetRandomItems, int numToTake = 10, int repetitions = 100000, bool requireDistinct = true)
    {
        var items = Enumerable.Range(0, 100);
        IEnumerable<int> randomItems = null;

        while (repetitions-- > 0)
        {
            randomItems = methodToGetRandomItems(items, numToTake);
            Assert.AreEqual(numToTake, randomItems.Count(), "Did not get expected number of items {0}; failed at {1} repetition--", numToTake, repetitions);
            if (requireDistinct)
                Assert.AreEqual(numToTake, randomItems.Distinct().Count(), "Collisions (non-unique values) found, failed at {0} repetition--", repetitions);
            Assert.IsTrue(randomItems.All(o => items.Contains(o)), "Some unknown values found; failed at {0} repetition--", repetitions);
        }
    }
}

A: Selecting N random items from a group shouldn't have anything to do with order! Randomness is about unpredictability and not about shuffling positions in a group.
All the answers that deal with some kind of ordering are bound to be less efficient than the ones that do not. Since efficiency is the key here, I will post something that doesn't change the order of items too much.
1) If you need true random values, which means there is no restriction on which elements to choose from (i.e., a once-chosen item can be reselected):

public static List<T> GetTrueRandom<T>(this IList<T> source, int count, bool throwArgumentOutOfRangeException = true)
{
    if (throwArgumentOutOfRangeException && count > source.Count)
        throw new ArgumentOutOfRangeException();

    var randoms = new List<T>(count);
    randoms.AddRandomly(source, count);
    return randoms;
}

If you set the exception flag off, then you can choose random items any number of times. If you have { 1, 2, 3, 4 }, then it can give { 1, 4, 4 }, { 1, 4, 3 } etc for 3 items or even { 1, 4, 3, 2, 4 } for 5 items! This should be pretty fast, as it has nothing to check.
2) If you need individual members from the group with no repetition, then I would rely on a dictionary (as many have pointed out already).

public static List<T> GetDistinctRandom<T>(this IList<T> source, int count)
{
    if (count > source.Count)
        throw new ArgumentOutOfRangeException();

    if (count == source.Count)
        return new List<T>(source);

    var sourceDict = source.ToIndexedDictionary();

    if (count > source.Count / 2)
    {
        while (sourceDict.Count > count)
            sourceDict.Remove(source.GetRandomIndex());

        return sourceDict.Select(kvp => kvp.Value).ToList();
    }

    var randomDict = new Dictionary<int, T>(count);
    while (randomDict.Count < count)
    {
        int key = source.GetRandomIndex();
        if (!randomDict.ContainsKey(key))
            randomDict.Add(key, sourceDict[key]);
    }

    return randomDict.Select(kvp => kvp.Value).ToList();
}

The code is a bit lengthier than other dictionary approaches here because I'm not only adding, but also removing from the list, so it's kind of two loops. You can see here that I have not reordered anything at all when count becomes equal to source.Count. That's because I believe randomness should be in the returned set as a whole. I mean if you want 5 random items from 1, 2, 3, 4, 5, it shouldn't matter if it's 1, 3, 4, 2, 5 or 1, 2, 3, 4, 5, but if you need 4 items from the same set, then it should unpredictably yield 1, 2, 3, 4 or 1, 3, 5, 2 or 2, 3, 5, 4, etc. Secondly, when the count of random items to be returned is more than half of the original group, then it's easier to remove source.Count - count items from the group than to add count items. For performance reasons I have used source instead of sourceDict to get the random index in the remove method. So if you have { 1, 2, 3, 4 }, this can end up in { 1, 2, 3 }, { 3, 4, 1 } etc for 3 items.
3) If you need truly distinct random values from your group by taking into account the duplicates in the original group, then you may use the same approach as above, but a HashSet will be lighter than a dictionary.
public static List<T> GetTrueDistinctRandom<T>(this IList<T> source, int count, bool throwArgumentOutOfRangeException = true)
{
    if (count > source.Count)
        throw new ArgumentOutOfRangeException();

    var set = new HashSet<T>(source);

    if (throwArgumentOutOfRangeException && count > set.Count)
        throw new ArgumentOutOfRangeException();

    List<T> list = set.ToList();

    if (count >= set.Count)
        return list;

    if (count > set.Count / 2)
    {
        while (set.Count > count)
            set.Remove(list.GetRandom());

        return set.ToList();
    }

    var randoms = new HashSet<T>();
    randoms.AddRandomly(list, count);
    return randoms.ToList();
}

The randoms variable is made a HashSet to avoid duplicates being added in the rarest of rare cases where Random.Next can yield the same value, especially when the input list is small.
So { 1, 2, 2, 4 } => 3 random items => { 1, 2, 4 } and never { 1, 2, 2 }
{ 1, 2, 2, 4 } => 4 random items => exception!! or { 1, 2, 4 } depending on the flag set.
Some of the extension methods I have used:

static Random rnd = new Random();

public static int GetRandomIndex<T>(this ICollection<T> source)
{
    return rnd.Next(source.Count);
}

public static T GetRandom<T>(this IList<T> source)
{
    return source[source.GetRandomIndex()];
}

static void AddRandomly<T>(this ICollection<T> toCol, IList<T> fromList, int count)
{
    while (toCol.Count < count)
        toCol.Add(fromList.GetRandom());
}

public static Dictionary<int, T> ToIndexedDictionary<T>(this IEnumerable<T> lst)
{
    return lst.ToIndexedDictionary(t => t);
}

public static Dictionary<int, T> ToIndexedDictionary<S, T>(this IEnumerable<S> lst, Func<S, T> valueSelector)
{
    int index = -1;
    return lst.ToDictionary(t => ++index, valueSelector);
}

If it's all about performance with tens of 1000s of items in the list having to be iterated 10000 times, then you may want to have a faster random class than System.Random, but I don't think that's a big deal considering the latter most probably is never a bottleneck; it's quite fast enough.
Edit: If you need to re-arrange the order of returned items as well, then there's nothing that can beat dhakim's Fisher-Yates approach - short, sweet and simple.
A: I combined several of the above answers to create a lazily-evaluated extension method. My testing showed that Kyle's approach (Order(N)) is many times slower than drzaus' use of a set to propose the random indices to choose (Order(K)). The former performs many more calls to the random number generator, plus iterates more times over the items.
The goals of my implementation were:
1) Do not realize the full list if given an IEnumerable that is not an IList. If I am given a sequence of a zillion items, I do not want to run out of memory. Use Kyle's approach for an on-line solution.
2) If I can tell that it is an IList, use drzaus' approach, with a twist. If K is more than half of N, I risk thrashing as I choose many random indices again and again and have to skip them. Thus I compose a list of the indices to NOT keep.
3) I guarantee that the items will be returned in the same order that they were encountered. Kyle's algorithm required no alteration. drzaus' algorithm required that I not emit items in the order that the random indices are chosen. I gather all the indices into a SortedSet, then emit items in sorted index order.
4) If K is large compared to N and I invert the sense of the set, then I enumerate all items and test if the index is not in the set. This means that I lose the Order(K) run time, but since K is close to N in these cases, I do not lose much.
Here is the code:

/// <summary>
/// Takes k elements from the next n elements at random, preserving their order.
///
/// If there are fewer than n elements in items, this may return fewer than k elements.
/// </summary>
/// <typeparam name="TElem">Type of element in the items collection.</typeparam>
/// <param name="items">Items to be randomly selected.</param>
/// <param name="k">Number of items to pick.</param>
/// <param name="n">Total number of items to choose from.
/// If the items collection contains more than this number, the extra members will be skipped.
/// If the items collection contains fewer than this number, it is possible that fewer than k items will be returned.</param>
/// <returns>Enumerable over the retained items.
///
/// See http://stackoverflow.com/questions/48087/select-a-random-n-elements-from-listt-in-c-sharp for the commentary.
/// </returns>
public static IEnumerable<TElem> TakeRandom<TElem>(this IEnumerable<TElem> items, int k, int n)
{
    var r = new FastRandom();
    var itemsList = items as IList<TElem>;

    if (k >= n || (itemsList != null && k >= itemsList.Count))
        foreach (var item in items) yield return item;
    else
    {
        // If we have a list, we can infer more information and choose a better algorithm.
        // When using an IList, this is about 7 times faster (on one benchmark)!
        if (itemsList != null && k < n/2)
        {
            // Since we have a List, we can use an algorithm suitable for Lists.
            // If there are fewer than n elements, reduce n.
            n = Math.Min(n, itemsList.Count);

            // This algorithm picks K index-values randomly and directly chooses those items to be selected.
            // If k is more than half of n, then we will spend a fair amount of time thrashing, picking
            // indices that we have already picked and having to try again.
            var invertSet = k >= n/2;
            var positions = invertSet ? (ISet<int>) new HashSet<int>() : (ISet<int>) new SortedSet<int>();

            var numbersNeeded = invertSet ? n - k : k;
            while (numbersNeeded > 0)
                if (positions.Add(r.Next(0, n))) numbersNeeded--;

            if (invertSet)
            {
                // positions contains all the indices of elements to Skip.
                for (var itemIndex = 0; itemIndex < n; itemIndex++)
                {
                    if (!positions.Contains(itemIndex))
                        yield return itemsList[itemIndex];
                }
            }
            else
            {
                // positions contains all the indices of elements to Take.
                foreach (var itemIndex in positions)
                    yield return itemsList[itemIndex];
            }
        }
        else
        {
            // Since we do not have a list, we will use an online algorithm.
            // This permits us to skip the rest as soon as we have enough items.
            var found = 0;
            var scanned = 0;
            foreach (var item in items)
            {
                var rand = r.Next(0, n - scanned);
                if (rand < k - found)
                {
                    yield return item;
                    found++;
                }
                scanned++;
                if (found >= k || scanned >= n)
                    break;
            }
        }
    }
}

I use a specialized random number generator, but you can just use C#'s Random if you want. (FastRandom was written by Colin Green and is part of SharpNEAT. It has a period of 2^128-1, which is better than many RNGs.)
Here are the unit tests:

[TestClass]
public class TakeRandomTests
{
    /// <summary>
    /// Ensure that when randomly choosing items from an array, all items are chosen with roughly equal probability.
    /// </summary>
    [TestMethod]
    public void TakeRandom_Array_Uniformity()
    {
        const int numTrials = 2000000;
        const int expectedCount = numTrials/20;
        var timesChosen = new int[100];
        var century = new int[100];
        for (var i = 0; i < century.Length; i++)
            century[i] = i;

        for (var trial = 0; trial < numTrials; trial++)
        {
            foreach (var i in century.TakeRandom(5, 100))
                timesChosen[i]++;
        }
        var avg = timesChosen.Average();
        var max = timesChosen.Max();
        var min = timesChosen.Min();
        var allowedDifference = expectedCount/100;
        AssertBetween(avg, expectedCount - 2, expectedCount + 2, "Average");
        //AssertBetween(min, expectedCount - allowedDifference, expectedCount, "Min");
        //AssertBetween(max, expectedCount, expectedCount + allowedDifference, "Max");
        var countInRange = timesChosen.Count(i => i >= expectedCount - allowedDifference && i <= expectedCount + allowedDifference);
        Assert.IsTrue(countInRange >= 90, String.Format("Not enough were in range: {0}", countInRange));
    }

    /// <summary>
    /// Ensure that when randomly choosing items from an IEnumerable that is not an IList,
    /// all items are chosen with roughly equal probability.
    /// </summary>
    [TestMethod]
    public void TakeRandom_IEnumerable_Uniformity()
    {
        const int numTrials = 2000000;
        const int expectedCount = numTrials / 20;
        var timesChosen = new int[100];

        for (var trial = 0; trial < numTrials; trial++)
        {
            foreach (var i in Range(0, 100).TakeRandom(5, 100))
                timesChosen[i]++;
        }
        var avg = timesChosen.Average();
        var max = timesChosen.Max();
        var min = timesChosen.Min();
        var allowedDifference = expectedCount / 100;
        var countInRange = timesChosen.Count(i => i >= expectedCount - allowedDifference && i <= expectedCount + allowedDifference);
        Assert.IsTrue(countInRange >= 90, String.Format("Not enough were in range: {0}", countInRange));
    }

    private IEnumerable<int> Range(int low, int count)
    {
        for (var i = low; i < low + count; i++)
            yield return i;
    }

    private static void AssertBetween(int x, int low, int high, String message)
    {
        Assert.IsTrue(x > low, String.Format("Value {0} is less than lower limit of {1}. {2}", x, low, message));
        Assert.IsTrue(x < high, String.Format("Value {0} is more than upper limit of {1}. {2}", x, high, message));
    }

    private static void AssertBetween(double x, double low, double high, String message)
    {
        Assert.IsTrue(x > low, String.Format("Value {0} is less than lower limit of {1}. {2}", x, low, message));
        Assert.IsTrue(x < high, String.Format("Value {0} is more than upper limit of {1}. {2}", x, high, message));
    }
}

A: Here you have one implementation based on the Fisher-Yates Shuffle whose algorithm complexity is O(n), where n is the subset or sample size instead of the list size, as John Shedletsky pointed out.
public static IEnumerable<T> GetRandomSample<T>(this IList<T> list, int sampleSize)
{
    if (list == null) throw new ArgumentNullException("list");
    if (sampleSize > list.Count) throw new ArgumentException("sampleSize may not be greater than list count", "sampleSize");
    var indices = new Dictionary<int, int>();
    int index;
    var rnd = new Random();

    for (int i = 0; i < sampleSize; i++)
    {
        int j = rnd.Next(i, list.Count);
        if (!indices.TryGetValue(j, out index)) index = j;

        yield return list[index];

        if (!indices.TryGetValue(i, out index)) index = i;
        indices[j] = index;
    }
}

A: Extending from @ers's answer, if one is worried about possible different implementations of OrderBy, this should be safe:

// Instead of this
YourList.OrderBy(x => rnd.Next()).Take(5)

// Temporarily transform
YourList
    .Select(v => new { v, i = rnd.Next() }) // Associate a random index to each entry
    .OrderBy(x => x.i).Take(5)              // Sort by (at this point fixed) random index
    .Select(x => x.v);                      // Go back to enumerable of entry

A:

public static List<T> GetRandomElements<T>(this IEnumerable<T> list, int elementsCount)
{
    return list.OrderBy(arg => Guid.NewGuid()).Take(elementsCount).ToList();
}

A: The simple solution I use (probably not good for large lists): Copy the list into a temporary list, then in a loop randomly select an item from the temp list, put it in the selected items list, and remove it from the temp list (so it can't be reselected).
Example:

List<Object> temp = OriginalList.ToList();
List<Object> selectedItems = new List<Object>();
Random rnd = new Random();
Object o;
int i = 0;
while (i < NumberOfSelectedItems)
{
    o = temp[rnd.Next(temp.Count)];
    selectedItems.Add(o);
    temp.Remove(o);
    i++;
}

A: This is actually a harder problem than it sounds like, mainly because many mathematically-correct solutions will fail to actually allow you to hit all the possibilities (more on this below).
First, here are some easy-to-implement algorithms that are correct if you have a truly random number generator:
(0) Kyle's answer, which is O(n).
(1) Generate a list of n pairs [(0, rand), (1, rand), (2, rand), ...], sort them by the second coordinate, and use the first k (for you, k=5) indices to get your random subset. I think this is easy to implement, although it is O(n log n) time.
(2) Init an empty list s = [] that will grow to be the indices of k random elements. Choose a number r in {0, 1, 2, ..., n-1} at random, r = rand % n, and add this to s. Next take r = rand % (n-1) and stick it in s; add to r the number of elements less than it in s to avoid collisions. Next take r = rand % (n-2), and do the same thing, etc. until you have k distinct elements in s. This has worst-case running time O(k^2). So for k << n, this can be faster. If you keep s sorted and track which contiguous intervals it has, you can implement it in O(k log k), but it's more work.
@Kyle - you're right, on second thought I agree with your answer. I hastily read it at first, and mistakenly thought you were indicating to sequentially choose each element with fixed probability k/n, which would have been wrong - but your adaptive approach appears correct to me. Sorry about that.
Ok, and now for the kicker: asymptotically (for fixed k, n growing), there are n^k/k! choices of a k-element subset out of n elements [this is an approximation of (n choose k)]. If n is large, and k is not very small, then these numbers are huge. The best cycle length you can hope for in any standard 32-bit random number generator is 2^32 = 256^4.
So if we have a list of 1000 elements, and we want to choose 5 at random, there's no way a standard random number generator will hit all the possibilities. However, as long as you're ok with a choice that works fine for smaller sets, and always "looks" random, then these algorithms should be ok.
Addendum: After writing this, I realized that it's tricky to implement idea (2) correctly, so I wanted to clarify this answer. To get O(k log k) time, you need an array-like structure that supports O(log m) searches and inserts - a balanced binary tree can do this. Using such a structure to build up an array called s, here is some pseudopython:

# Returns a container s with k distinct random numbers from {0, 1, ..., n-1}
def ChooseRandomSubset(n, k):
    for i in range(k):
        r = UniformRandom(0, n-i)                 # May be 0, must be < n-i
        q = s.FirstIndexSuchThat( s[q] - q > r )  # This is the search.
        s.InsertInOrder(q ? r + q : r + len(s))   # Inserts right before q.
    return s

I suggest running through a few sample cases to see how this efficiently implements the above English explanation.
A: This is the best I could come up with on a first cut:

public List<String> getRandomItemsFromList(int returnCount, List<String> list)
{
    List<String> returnList = new List<String>();
    Dictionary<int, int> randoms = new Dictionary<int, int>();

    while (randoms.Count != returnCount)
    {
        //generate new random between one and total list count
        int randomInt = new Random().Next(list.Count);

        // store this in dictionary to ensure uniqueness
        try
        {
            randoms.Add(randomInt, randomInt);
        }
        catch (ArgumentException aex)
        {
            Console.Write(aex.Message);
        }

        //we can assume this element exists in the dictonary already
        //check for randoms length and then iterate through the original list
        //adding items we select via random to the return list
        if (randoms.Count == returnCount)
        {
            foreach (int key in randoms.Keys)
                returnList.Add(list[randoms[key]]);
            break; //break out of _while_ loop
        }
    }
    return returnList;
}

Using a list of randoms within a range of 1 - total list count and then simply pulling those items from the list seemed to be the best way, but using the Dictionary to ensure uniqueness is something I'm still mulling over. Also note I used a string list; replace as needed.
A: Using linq:

YourList.OrderBy(x => rnd.Next()).Take(5)

A: Based on Kyle's answer, here's my C# implementation.

/// <summary>
/// Picks a random selection of available game IDs
/// </summary>
private static List<int> GetRandomGameIDs(int count)
{
    var gameIDs = (int[])HttpContext.Current.Application["NonDeletedArcadeGameIDs"];
    var totalGameIDs = gameIDs.Count();
    if (count > totalGameIDs) count = totalGameIDs;

    var rnd = new Random();
    var leftToPick = count;
    var itemsLeft = totalGameIDs;
    var arrPickIndex = 0;
    var returnIDs = new List<int>();

    while (leftToPick > 0)
    {
        if (rnd.Next(0, itemsLeft) < leftToPick)
        {
            returnIDs.Add(gameIDs[arrPickIndex]);
            leftToPick--;
        }
        arrPickIndex++;
        itemsLeft--;
    }

    return returnIDs;
}

A: This method may be equivalent to Kyle's. Say your list is of size n and you want k elements.

Random rand = new Random();
for (int i = 0; k > 0; ++i)
{
    int r = rand.Next(0, n - i);
    if (r < k)
    {
        //include element i
        k--;
    }
}

Works like a charm :)
-Alex Gilbert
A: Here is a benchmark of three different methods:
* *The implementation of the accepted answer from Kyle.
* *An approach based on random index selection with HashSet duplication filtering, from drzaus.
* *A more academic approach posted by Jesús López, called Fisher–Yates shuffle.
The testing will consist of benchmarking the performance with multiple different list sizes and selection sizes. I also included a measurement of the standard deviation of these three methods, i.e. how well distributed the random selection appears to be.
In a nutshell, drzaus's simple solution seems to be the best overall, from these three. The selected answer is great and elegant, but it's not that efficient, given that the time complexity is based on the sample size, not the selection size. Consequently, if you select a small number of items from a long list, it will take orders of magnitude more time. Of course it still performs better than the solutions based on complete reordering.
Curiously enough, this O(n) time complexity issue is true even if you only touch the list when you actually return an item, like I do in my implementation. The only thing I can think of is that Random.Next() is pretty slow, and that performance benefits if you generate only one random number for each selected item.
And, also interestingly, the StdDev of Kyle's solution was significantly higher comparatively. I have no clue why; maybe the fault is in my implementation.
Sorry for the long code and output that will commence now; but I hope it's somewhat illuminative. Also, if you spot any issues in the tests or implementations, let me know and I'll fix it.

static void Main()
{
    BenchmarkRunner.Run<Benchmarks>();

    new Benchmarks() { ListSize = 100, SelectionSize = 10 }
        .BenchmarkStdDev();
}

[MemoryDiagnoser]
public class Benchmarks
{
    [Params(50, 500, 5000)]
    public int ListSize;

    [Params(5, 10, 25, 50)]
    public int SelectionSize;

    private Random _rnd;
    private List<int> _list;
    private int[] _hits;

    [GlobalSetup]
    public void Setup()
    {
        _rnd = new Random(12345);
        _list = Enumerable.Range(0, ListSize).ToList();
        _hits = new int[ListSize];
    }

    [Benchmark]
    public void Test_IterateSelect() => Random_IterateSelect(_list, SelectionSize).ToList();

    [Benchmark]
    public void Test_RandomIndices() => Random_RandomIdices(_list, SelectionSize).ToList();

    [Benchmark]
    public void Test_FisherYates() => Random_FisherYates(_list, SelectionSize).ToList();

    public void BenchmarkStdDev()
    {
        RunOnce(Random_IterateSelect, "IterateSelect");
        RunOnce(Random_RandomIdices, "RandomIndices");
        RunOnce(Random_FisherYates, "FisherYates");

        void RunOnce(Func<IEnumerable<int>, int, IEnumerable<int>> method, string methodName)
        {
            Setup();
            for (int i = 0; i < 1000000; i++)
            {
                var selected = method(_list, SelectionSize).ToList();
                Debug.Assert(selected.Count() == SelectionSize);
                foreach (var item in selected) _hits[item]++;
            }
            var stdDev = GetStdDev(_hits);
            Console.WriteLine($"StdDev of {methodName}: {stdDev:n} (% of average: {stdDev / (_hits.Average() / 100):n})");
        }

        double GetStdDev(IEnumerable<int> hits)
        {
            var average = hits.Average();
            return Math.Sqrt(hits.Average(v => Math.Pow(v - average, 2)));
        }
    }

    public IEnumerable<T> Random_IterateSelect<T>(IEnumerable<T> collection, int needed)
    {
        var count = collection.Count();
        for (int i = 0; i < count; i++)
        {
            if (_rnd.Next(count - i) < needed)
            {
                yield return collection.ElementAt(i);
                if (--needed == 0) yield break;
            }
        }
    }

    public IEnumerable<T> Random_RandomIdices<T>(IEnumerable<T> list, int needed)
    {
        var selectedItems = new HashSet<T>();
        var count = list.Count();
        while (needed > 0)
            if (selectedItems.Add(list.ElementAt(_rnd.Next(count))))
                needed--;
        return selectedItems;
    }

    public IEnumerable<T> Random_FisherYates<T>(IEnumerable<T> list, int sampleSize)
    {
        var count = list.Count();
        if (sampleSize > count) throw new ArgumentException("sampleSize may not be greater than list count", "sampleSize");
        var indices = new Dictionary<int, int>();
        int index;

        for (int i = 0; i < sampleSize; i++)
        {
            int j = _rnd.Next(i, count);
            if (!indices.TryGetValue(j, out index)) index = j;

            yield return list.ElementAt(index);

            if (!indices.TryGetValue(i, out index)) index = i;
            indices[j] = index;
        }
    }
}

Output:

| Method        | ListSize | Select | Mean        | Error     | StdDev    | Gen 0  | Allocated |
|-------------- |--------- |------- |------------:|----------:|----------:|-------:|----------:|
| IterateSelect | 50       | 5      | 711.5 ns    | 5.19 ns   | 4.85 ns   | 0.0305 | 144 B     |
| RandomIndices | 50       | 5      | 341.1 ns    | 4.48 ns   | 4.19 ns   | 0.0644 | 304 B     |
| FisherYates   | 50       | 5      | 573.5 ns    | 6.12 ns   | 5.72 ns   | 0.0944 | 447 B     |
| IterateSelect | 50       | 10     | 967.2 ns    | 4.64 ns   | 3.87 ns   | 0.0458 | 220 B     |
| RandomIndices | 50       | 10     | 709.9 ns    | 11.27 ns  | 9.99 ns   | 0.1307 | 621 B     |
| FisherYates   | 50       | 10     | 1,204.4 ns  | 10.63 ns  | 9.94 ns   | 0.1850 | 875 B     |
| IterateSelect | 50       | 25     | 1,358.5 ns  | 7.97 ns   | 6.65 ns   | 0.0763 | 361 B     |
| RandomIndices | 50       | 25     | 1,958.1 ns  | 15.69 ns  | 13.91 ns  | 0.2747 | 1298 B    |
| FisherYates   | 50       | 25     | 2,878.9 ns  | 31.42 ns  | 29.39 ns  | 0.3471 | 1653 B    |
| IterateSelect | 50       | 50     | 1,739.1 ns  | 15.86 ns  | 14.06 ns  | 0.1316 | 629 B     |
| RandomIndices | 50       | 50     | 8,906.1 ns  | 88.92 ns  | 74.25 ns  | 0.5951 | 2848 B    |
| FisherYates   | 50       | 50     | 4,899.9 ns  | 38.10 ns  | 33.78 ns  | 0.4349 | 2063 B    |
| IterateSelect | 500      | 5      | 4,775.3 ns  | 46.96 ns  | 41.63 ns  | 0.0305 | 144 B     |
| RandomIndices | 500      | 5      | 327.8 ns    | 2.82 ns   | 2.50 ns   | 0.0644 | 304 B     |
| FisherYates   | 500      | 5      | 558.5 ns    | 7.95 ns   | 7.44 ns   | 0.0944 | 449 B     |
| IterateSelect | 500      | 10     | 5,387.1 ns  | 44.57 ns  | 41.69 ns  | 0.0458 | 220 B     |
| RandomIndices | 500      | 10     | 648.0 ns    | 9.12 ns   | 8.54 ns   | 0.1307 | 621 B     |
| FisherYates   | 500      | 10     | 1,154.6 ns  | 13.66 ns  | 12.78 ns  | 0.1869 | 889 B     |
| IterateSelect | 500      | 25     | 6,442.3 ns  | 48.90 ns  | 40.83 ns  | 0.0763 | 361 B     |
| RandomIndices | 500      | 25     | 1,569.6 ns  | 15.79 ns  | 14.77 ns  | 0.2747 | 1298 B    |
| FisherYates   | 500      | 25     | 2,726.1 ns  | 25.32 ns  | 22.44 ns  | 0.3777 | 1795 B    |
| IterateSelect | 500      | 50     | 7,775.4 ns  | 35.47 ns  | 31.45 ns  | 0.1221 | 629 B     |
| RandomIndices | 500      | 50     | 2,976.9 ns  | 27.11 ns  | 24.03 ns  | 0.6027 | 2848 B    |
| FisherYates   | 500      | 50     | 5,383.2 ns  | 36.49 ns  | 32.35 ns  | 0.8163 | 3870 B    |
| IterateSelect | 5000     | 5      | 45,208.6 ns | 459.92 ns | 430.21 ns | -      | 144 B     |
| RandomIndices | 5000     | 5      | 328.7 ns    | 5.15 ns   | 4.81 ns   | 0.0644 | 304 B     |
| FisherYates   | 5000     | 5      | 556.1 ns    | 10.75 ns  | 10.05 ns  | 0.0944 | 449 B     |
| IterateSelect | 5000     | 10     | 49,253.9 ns | 420.26 ns | 393.11 ns | -      | 220 B     |
| RandomIndices | 5000     | 10     | 642.9 ns    | 4.95 ns   | 4.13 ns   | 0.1307 | 621 B     |
| FisherYates   | 5000     | 10     | 1,141.9 ns  | 12.81 ns  | 11.98 ns  | 0.1869 | 889 B     |
| IterateSelect | 5000     | 25     | 54,044.4 ns | 208.92 ns | 174.46 ns | 0.0610 | 361 B     |
| RandomIndices | 5000     | 25     | 1,480.5 ns  | 11.56 ns  | 10.81 ns  | 0.2747 | 1298 B    |
| FisherYates   | 5000     | 25     | 2,713.9 ns  | 27.31 ns  | 24.21 ns  | 0.3777 | 1795 B    |
| IterateSelect | 5000     | 50     | 54,418.2 ns | 329.62 ns | 308.32 ns | 0.1221 | 629 B     |
| RandomIndices | 5000     | 50     | 2,886.4 ns  | 36.53 ns  | 34.17 ns  | 0.6027 | 2848 B    |
| FisherYates   | 5000     | 50     | 5,347.2 ns  | 59.45 ns  | 55.61 ns  | 0.8163 | 3870 B    |

StdDev of IterateSelect: 671.88 (% of average: 0.67)
StdDev of RandomIndices: 296.07 (% of average: 0.30)
StdDev of FisherYates: 280.47 (% of average: 0.28)

A: I think the selected answer is correct and pretty sweet. I implemented it differently though, as I also wanted the result in random order.

static IEnumerable<SomeType> PickSomeInRandomOrder<SomeType>(
    IEnumerable<SomeType> someTypes,
    int maxCount)
{
    Random random = new Random(DateTime.Now.Millisecond);

    Dictionary<double, SomeType> randomSortTable = new Dictionary<double, SomeType>();

    foreach (SomeType someType in someTypes)
        randomSortTable[random.NextDouble()] = someType;

    return randomSortTable.OrderBy(KVP => KVP.Key).Take(maxCount).Select(KVP => KVP.Value);
}

A: I just ran into this problem, and some more google searching brought me to the problem of randomly shuffling a list: http://en.wikipedia.org/wiki/Fisher-Yates_shuffle
To completely randomly shuffle your list (in place) you do this:
To shuffle an array a of n elements (indices 0..n-1):

for i from n − 1 downto 1 do
    j ← random integer with 0 ≤ j ≤ i
    exchange a[j] and a[i]

If you only need the first 5 elements, then instead of running i all the way from n-1 down to 1, you only need to run it down to n-5 (i.e., n-k for k=5). Let's say you need k items. This becomes:

for (i = n − 1; i >= n-k; i--)
{
    j = random integer with 0 ≤ j ≤ i
    exchange a[j] and a[i]
}

Each item that is selected is swapped toward the end of the array, so the k elements selected are the last k elements of the array. This takes time O(k), where k is the number of randomly selected elements you need.
Further, if you don't want to modify your initial list, you can write down all your swaps in a temporary list, reverse that list, and apply them again, thus performing the inverse set of swaps and returning you your initial list without changing the O(k) running time.
Finally, for the real stickler, if (n == k), you should stop at 1, not n-k, as the randomly chosen integer will always be 0.
A: Iterate through, and for each element make the probability of selection = (number needed)/(number left).
So if you had 40 items, the first would have a 5/40 chance of being selected. If it is, the next has a 4/39 chance, otherwise it has a 5/39 chance. By the time you get to the end you will have your 5 items, and often you'll have all of them before that.
This technique is called selection sampling, a special case of Reservoir Sampling. It's similar in performance to shuffling the input, but of course allows the sample to be generated without modifying the original data.
A: You can use this, but the ordering will happen on the client side:

.AsEnumerable().OrderBy(n => Guid.NewGuid()).Take(5);

A: 12 years on and this question is still active; I didn't find an implementation of Kyle's solution I liked, so here it is:

public IEnumerable<T> TakeRandom<T>(IEnumerable<T> collection, int take)
{
    var random = new Random();
    var available = collection.Count();
    var needed = take;
    foreach (var item in collection)
    {
        if (random.Next(available) < needed)
        {
            needed--;
            yield return item;
            if (needed == 0)
            {
                break;
            }
        }
        available--;
    }
}

A: It is a lot harder than one would think. See the great article "Shuffling" from Jeff.
I did write a very short article on that subject including C# code: Return random subset of N elements of a given array A: public static IEnumerable<T> GetRandom<T>(IList<T> list, int count, Random random) { // Probably you should throw exception if count > list.Count count = Math.Min(list.Count, count); var selectedIndices = new SortedSet<int>(); // Random upper bound (exclusive) int randomMax = list.Count; while (selectedIndices.Count < count) { int randomIndex = random.Next(0, randomMax); // skip over already selected indices foreach (var selectedIndex in selectedIndices) if (selectedIndex <= randomIndex) ++randomIndex; else break; yield return list[randomIndex]; selectedIndices.Add(randomIndex); --randomMax; } } Memory: ~count Complexity: O(count^2) A: Goal: Select N number of items from collection source without duplication. I created an extension for any generic collection. Here's how I did it: public static class CollectionExtension { public static IList<TSource> RandomizeCollection<TSource>(this IList<TSource> source, int maxItems) { int randomCount = source.Count > maxItems ? maxItems : source.Count; int?[] randomizedIndices = new int?[randomCount]; Random random = new Random(); for (int i = 0; i < randomizedIndices.Length; i++) { int randomResult = -1; while (randomizedIndices.Contains((randomResult = random.Next(0, source.Count)))) { //0 -> since all list starts from index 0; source.Count -> maximum number of items that can be randomized //continue looping while the generated random number is already in the list of randomizedIndices } randomizedIndices[i] = randomResult; } IList<TSource> result = new List<TSource>(); foreach (int index in randomizedIndices) result.Add(source.ElementAt(index)); return result; } } A: Short and simple. Hope this helps someone! if (list.Count > maxListCount) { var rndList = new List<YourEntity>(); var r = new Random(); while (rndList.Count < maxListCount) { var addingElement = list[r.Next(list.Count)]; //element uniqueness checking //choose your case //if (rndList.Contains(addingElement)) //if (rndList.Any(p => p.Id == addingElement.Id)) continue; rndList.Add(addingElement); } return rndList; } A: public static IEnumerable<TItem> RandomSample<TItem>(this IReadOnlyList<TItem> items, int count) { if (count < 1 || count > items.Count) { throw new ArgumentOutOfRangeException(nameof(count)); } List<int> indexes = Enumerable.Range(0, items.Count).ToList(); int yieldedCount = 0; while (yieldedCount < count) { int i = RandomNumberGenerator.GetInt32(indexes.Count); int randomIndex = indexes[i]; yield return items[randomIndex]; // indexes.RemoveAt(i); // Avoid removing items from the middle of the list indexes[i] = indexes[indexes.Count - 1]; // Replace yielded index with the last one indexes.RemoveAt(indexes.Count - 1); yieldedCount++; } } A: I recently did this on my project using an idea similar to Tyler's point 1. I was loading a bunch of questions and selecting five at random. Sorting was achieved using an IComparer. All questions were loaded into a QuestionSorter list, which was then sorted using the List's Sort function, and the first k elements were selected.
private class QuestionSorter : IComparable<QuestionSorter> { public double SortingKey { get; set; } public Question QuestionObject { get; set; } public QuestionSorter(Question q) { this.SortingKey = RandomNumberGenerator.RandomDouble; this.QuestionObject = q; } public int CompareTo(QuestionSorter other) { if (this.SortingKey < other.SortingKey) { return -1; } else if (this.SortingKey > other.SortingKey) { return 1; } else { return 0; } } } Usage: List<QuestionSorter> unsortedQuestions = new List<QuestionSorter>(); // add the questions here unsortedQuestions.Sort(); // QuestionSorter implements IComparable<QuestionSorter>, so the default comparer works // select the first k elements A: Why not something like this: Dim ar As New ArrayList Dim numToGet As Integer = 5 'hard code just to test ar.Add("12") ar.Add("11") ar.Add("10") ar.Add("15") ar.Add("16") ar.Add("17") Dim randomListOfProductIds As New ArrayList Dim toAdd As String = "" For i = 0 To numToGet - 1 toAdd = ar(CInt((ar.Count - 1) * Rnd())) randomListOfProductIds.Add(toAdd) 'remove from id list ar.Remove(toAdd) Next 'sorry i'm lazy and have to write vb at work :( and didn't feel like converting to c# A: Here's my approach (full text here http://krkadev.blogspot.com/2010/08/random-numbers-without-repetition.html ). It should run in O(K) instead of O(N), where K is the number of wanted elements and N is the size of the list to choose from: public <T> List<T> take(List<T> source, int k) { int n = source.size(); if (k > n) { throw new IllegalStateException( "Can not take " + k + " elements from a list with " + n + " elements"); } List<T> result = new ArrayList<T>(k); Map<Integer,Integer> used = new HashMap<Integer,Integer>(); int metric = 0; for (int i = 0; i < k; i++) { int off = random.nextInt(n - i); while (true) { metric++; Integer redirect = used.put(off, n - i - 1); if (redirect == null) { break; } off = redirect; } result.add(source.get(off)); } assert metric <= 2*k; return result; } A: This isn't as elegant or efficient as the accepted solution, but it's quick to write up. First, permute the array randomly, then select the first K elements. In Python: import numpy as np N = 20 K = 5 idx = np.arange(N) np.random.shuffle(idx) print idx[:K] A: I would use an extension method. public static IEnumerable<T> TakeRandom<T>(this IEnumerable<T> elements, int countToTake) { var random = new Random(); var internalList = elements.ToList(); var selected = new List<T>(); for (var i = 0; i < countToTake; ++i) { var next = random.Next(0, internalList.Count - selected.Count); selected.Add(internalList[next]); internalList[next] = internalList[internalList.Count - selected.Count]; } return selected; } A: Using LINQ with large lists (when costly to touch each element) AND if you can live with the possibility of duplicates: new int[5].Select(o => (int)(rnd.NextDouble() * maxIndex)).Select(i => YourIEnum.ElementAt(i)) For my use case I had a list of 100,000 elements, and because they were being pulled from a DB, I roughly halved (or better) the time compared to running rnd over the whole list. Having a large list greatly reduces the odds of duplicates. A: I'd like to share my method. Reading other answers I was wondering if we really need to keep track of chosen items to uphold uniqueness of the results. Usually it slows down the algorithm because you need to repeat the draw if you happen to choose the same item again. So I came up with something different. If you don't care about modifying the input list you can shuffle the items in one go so that chosen items end up at the beginning of the list.
So in each iteration you choose an item and then you switch it to the front of the list. As a result you end up with random items at the start of the input list. The downside of this is that the input list order was modified, but you don't need to repeat the drawing, the results are unique. No need of any additional memory allocation etc. And it works really quick even for edge cases like selecting all items from the list at random. Here is the code: public IEnumerable<T> Random_Switch<T>(IList<T> list, int needed) { for (int i = 0; i < needed; i++) { var index = _rnd.Next(i, list.Count); var item = list[index]; list[index] = list[i]; list[i] = item; } return list.Take(needed); } I also did some benchmarks benefiting from @Leaky answer and here are the results: | Method | ListSize | SelectionSize | Mean | Error | StdDev | Median | Gen0 | Allocated | |------------------- |--------- |-------------- |------------:|------------:|------------:|------------:|-------:|----------:| | Test_IterateSelect | 50 | 5 | 662.2 ns | 13.19 ns | 27.54 ns | 660.9 ns | 0.0477 | 200 B | | Test_RandomIndices | 50 | 5 | 256.6 ns | 5.12 ns | 12.86 ns | 254.0 ns | 0.0992 | 416 B | | Test_FisherYates | 50 | 5 | 405.4 ns | 8.05 ns | 17.33 ns | 401.7 ns | 0.1407 | 590 B | | Test_RandomSwitch | 50 | 5 | 152.8 ns | 2.91 ns | 4.87 ns | 153.4 ns | 0.0305 | 128 B | | Test_IterateSelect | 50 | 10 | 853.8 ns | 17.07 ns | 29.44 ns | 853.9 ns | 0.0687 | 288 B | | Test_RandomIndices | 50 | 10 | 530.8 ns | 10.63 ns | 28.93 ns | 523.7 ns | 0.1812 | 760 B | | Test_FisherYates | 50 | 10 | 862.8 ns | 17.09 ns | 38.92 ns | 859.2 ns | 0.2527 | 1057 B | | Test_RandomSwitch | 50 | 10 | 267.4 ns | 5.28 ns | 13.81 ns | 266.4 ns | 0.0343 | 144 B | | Test_IterateSelect | 50 | 25 | 1,195.6 ns | 23.58 ns | 46.54 ns | 1,199.1 ns | 0.1049 | 440 B | | Test_RandomIndices | 50 | 25 | 1,455.8 ns | 28.81 ns | 58.20 ns | 1,444.0 ns | 0.3510 | 1472 B | | Test_FisherYates | 50 | 25 | 2,066.7 ns | 41.35 ns | 85.40 ns | 2,049.0 ns | 0.4463 | 1869 B | | Test_RandomSwitch | 50 | 25 | 610.0 ns | 11.90 ns | 20.83 ns | 610.5 ns | 0.0496 | 208 B | | Test_IterateSelect | 50 | 50 | 1,436.7 ns | 28.51 ns | 61.37 ns | 1,430.1 ns | 0.1717 | 720 B | | Test_RandomIndices | 50 | 50 | 6,478.1 ns | 122.70 ns | 247.86 ns | 6,488.7 ns | 0.7248 | 3048 B | | Test_FisherYates | 50 | 50 | 3,428.5 ns | 68.49 ns | 118.15 ns | 3,424.5 ns | 0.5455 | 2296 B | | Test_RandomSwitch | 50 | 50 | 1,186.8 ns | 23.38 ns | 48.81 ns | 1,179.4 ns | 0.0725 | 304 B | | Test_IterateSelect | 500 | 5 | 4,374.6 ns | 80.43 ns | 107.37 ns | 4,362.9 ns | 0.0458 | 200 B | | Test_RandomIndices | 500 | 5 | 252.3 ns | 5.05 ns | 13.21 ns | 251.3 ns | 0.0992 | 416 B | | Test_FisherYates | 500 | 5 | 398.0 ns | 7.97 ns | 18.48 ns | 399.3 ns | 0.1411 | 592 B | | Test_RandomSwitch | 500 | 5 | 155.4 ns | 3.10 ns | 7.24 ns | 155.0 ns | 0.0305 | 128 B | | Test_IterateSelect | 500 | 10 | 4,950.1 ns | 96.72 ns | 150.58 ns | 4,942.7 ns | 0.0687 | 288 B | | Test_RandomIndices | 500 | 10 | 490.0 ns | 9.70 ns | 20.66 ns | 490.6 ns | 0.1812 | 760 B | | Test_FisherYates | 500 | 10 | 805.2 ns | 15.70 ns | 20.96 ns | 808.2 ns | 0.2556 | 1072 B | | Test_RandomSwitch | 500 | 10 | 254.1 ns | 5.09 ns | 13.31 ns | 253.6 ns | 0.0343 | 144 B | | Test_IterateSelect | 500 | 25 | 5,785.1 ns | 115.19 ns | 201.74 ns | 5,800.2 ns | 0.0992 | 440 B | | Test_RandomIndices | 500 | 25 | 1,123.6 ns | 22.31 ns | 53.03 ns | 1,119.6 ns | 0.3510 | 1472 B | | Test_FisherYates | 500 | 25 | 1,959.1 ns | 38.82 ns | 91.51 ns | 1,971.1 ns | 0.4807 | 
2016 B | | Test_RandomSwitch | 500 | 25 | 601.1 ns | 11.83 ns | 23.63 ns | 599.8 ns | 0.0496 | 208 B | | Test_IterateSelect | 500 | 50 | 6,570.5 ns | 127.03 ns | 190.13 ns | 6,599.8 ns | 0.1678 | 720 B | | Test_RandomIndices | 500 | 50 | 2,199.6 ns | 43.23 ns | 73.41 ns | 2,198.6 ns | 0.7286 | 3048 B | | Test_FisherYates | 500 | 50 | 3,830.0 ns | 76.33 ns | 159.33 ns | 3,809.9 ns | 0.9842 | 4128 B | | Test_RandomSwitch | 500 | 50 | 1,150.7 ns | 22.60 ns | 34.52 ns | 1,156.7 ns | 0.0725 | 304 B | | Test_IterateSelect | 5000 | 5 | 42,833.1 ns | 779.35 ns | 1,463.80 ns | 42,758.9 ns | - | 200 B | | Test_RandomIndices | 5000 | 5 | 248.9 ns | 4.95 ns | 9.29 ns | 248.8 ns | 0.0992 | 416 B | | Test_FisherYates | 5000 | 5 | 388.9 ns | 7.79 ns | 17.90 ns | 387.0 ns | 0.1411 | 592 B | | Test_RandomSwitch | 5000 | 5 | 153.8 ns | 3.10 ns | 6.41 ns | 154.7 ns | 0.0305 | 128 B | | Test_IterateSelect | 5000 | 10 | 46,814.2 ns | 914.35 ns | 1,311.33 ns | 46,822.7 ns | 0.0610 | 288 B | | Test_RandomIndices | 5000 | 10 | 498.9 ns | 10.01 ns | 28.56 ns | 491.1 ns | 0.1812 | 760 B | | Test_FisherYates | 5000 | 10 | 800.1 ns | 14.44 ns | 29.83 ns | 796.3 ns | 0.2556 | 1072 B | | Test_RandomSwitch | 5000 | 10 | 271.6 ns | 5.45 ns | 15.63 ns | 269.2 ns | 0.0343 | 144 B | | Test_IterateSelect | 5000 | 25 | 50,900.4 ns | 1,000.71 ns | 1,951.81 ns | 51,068.5 ns | 0.0610 | 440 B | | Test_RandomIndices | 5000 | 25 | 1,112.7 ns | 20.06 ns | 30.63 ns | 1,114.6 ns | 0.3510 | 1472 B | | Test_FisherYates | 5000 | 25 | 1,965.9 ns | 38.82 ns | 62.68 ns | 1,953.2 ns | 0.4807 | 2016 B | | Test_RandomSwitch | 5000 | 25 | 610.7 ns | 12.23 ns | 20.76 ns | 613.6 ns | 0.0496 | 208 B | | Test_IterateSelect | 5000 | 50 | 52,062.6 ns | 1,031.59 ns | 1,694.93 ns | 51,882.6 ns | 0.1221 | 720 B | | Test_RandomIndices | 5000 | 50 | 2,203.7 ns | 43.90 ns | 87.67 ns | 2,197.9 ns | 0.7286 | 3048 B | | Test_FisherYates | 5000 | 50 | 3,729.2 ns | 73.08 ns | 124.10 ns | 3,701.8 ns | 0.9842 | 4128 B | | Test_RandomSwitch | 5000 | 50 | 1,185.1 ns | 23.29 ns | 39.54 ns | 1,186.5 ns | 0.0725 | 304 B | Also I guess if you really need to keep the input list unmodified you could store the indices that were switched and revert the order before returning from the function, but that of course would cause additional allocations. A: This will solve your issue var entries=new List<T>(); var selectedItems = new List<T>(); for (var i = 0; i !=10; i++) { var rdm = new Random().Next(entries.Count); while (selectedItems.Contains(entries[rdm])) rdm = new Random().Next(entries.Count); selectedItems.Add(entries[rdm]); } A: When N is very large, the normal method that randomly shuffles the N numbers and selects, say, first k numbers, can be prohibitive because of space complexity. The following algorithm requires only O(k) for both time and space complexities. 
http://arxiv.org/abs/1512.00501 import random def random_selection_indices(num_samples, N): modified_entries = {} seq = [] for n in xrange(num_samples): i = N - n - 1 j = random.randrange(i) # swap a[j] and a[i] a_j = modified_entries[j] if j in modified_entries else j a_i = modified_entries[i] if i in modified_entries else i if a_i != j: modified_entries[j] = a_i elif j in modified_entries: # no need to store the modified value if it is the same as index modified_entries.pop(j) if a_j != i: modified_entries[i] = a_j elif i in modified_entries: # no need to store the modified value if it is the same as index modified_entries.pop(i) seq.append(a_j) return seq A: public static IEnumerable<Element> GetRandomElements(this IList<Element> list, int n) { var count = list.Count(); if (count < n) { throw new Exception("n cannot be bigger than the list size."); } var indexes = new HashSet<int>(); while (indexes.Count < n) { indexes.Add(Random.Next(count)); } return indexes.Select(x => list[x]); } I use this as reference : Performance of Arrays vs. Lists The implementation is ok because the list is fast enough to get an element by id. "Random" is defined outside of the method scope. The HashSet ensures the uniqueness of each index. Limits: the algorithm works better if the list is big and n is small. Otherwise, due to collisions, the while loop can take a long time. In that case, using public static IEnumerable<Element> GetRandomElements(this IList<Element> list, int n) { return list.OrderBy(x => Random.Next()).Take(n); } can be a reasonable option
{ "language": "en", "url": "https://stackoverflow.com/questions/48087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "201" }
Q: Returning from a finally block in Java I was surprised recently to find that it's possible to have a return statement in a finally block in Java. It seems like lots of people think it's a bad thing to do as described in 'Don't return in a finally clause'. Scratching a little deeper, I also found 'Java's return doesn't always' which shows some pretty horrible examples of other types of flow control in finally blocks. So, my question is, can anyone give me an example where a return statement (or other flow control) in a finally block produces better / more readable code? A: The examples you provided are reason enough to not use flow-control from finally. Even if there's a contrived example where it's "better," consider the developer who has to maintain your code later and who might not be aware of the subtleties. That poor developer might even be you.... A: A simple Groovy Test: public class Instance { List<String> runningThreads = new ArrayList<String>() void test(boolean returnInFinally) { println "\ntest(returnInFinally: $returnInFinally)" println "--------------------------------------------------------------------------" println "before execute" String result = execute(returnInFinally, false) println "after execute -> result: " + result println "--------------------------------------------------------------------------" println "before execute" try { result = execute(returnInFinally, true) println "after execute -> result: " + result } catch (Exception ex) { println "execute threw exception: " + ex.getMessage() } println "--------------------------------------------------------------------------\n" } String execute(boolean returnInFinally, boolean throwError) { String thread = Thread.currentThread().getName() println "...execute(returnInFinally: $returnInFinally, throwError: $throwError) - thread: $thread" runningThreads.add(thread) try { if (throwError) { println "...error in execute, throw exception" throw new Exception("as you liked :-)") } println "...return 'OK' from execute" return "OK" } finally { println "...pass finally block" if (returnInFinally) return "return value from FINALLY ^^" // runningThreads.remove(thread) } } } Instance instance = new Instance() instance.test(false) instance.test(true) Output: test(returnInFinally: false) ----------------------------------------------------------------------------- before execute ...execute(returnInFinally: false, throwError: false) - thread: Thread-116 ...return 'OK' from execute ...pass finally block after execute -> result: OK ----------------------------------------------------------------------------- before execute ...execute(returnInFinally: false, throwError: true) - thread: Thread-116 ...error in execute, throw exception ...pass finally block execute threw exception: as you liked :-) ----------------------------------------------------------------------------- test(returnInFinally: true) ----------------------------------------------------------------------------- before execute ...execute(returnInFinally: true, throwError: false) - thread: Thread-116 ...return 'OK' from execute ...pass finally block after execute -> result: return value from FINALLY ^^ ----------------------------------------------------------------------------- before execute ...execute(returnInFinally: true, throwError: true) - thread: Thread-116 ...error in execute, throw exception ...pass finally block after execute -> result: return value from FINALLY ^^ ----------------------------------------------------------------------------- Question: One 
interesting point for me was to see how Groovy deals with implicit returns. In Groovy it is possible to "return" from a method simply leaving a value at the end (without return). What do you think happens, if you uncomment the runningThreads.remove(..) line in the finally statement - will this overwrite the regular return value ("OK") and cover the exception?! A: javac will warn of return in finally if you use the -Xlint:finally option. Originally javac emitted no warnings - if something is wrong with the code, it should fail to compile. Unfortunately backwards compatibility means that unanticipated ingenious foolishness cannot be prohibited. Exceptions can be thrown from finally blocks, but in that case the exhibited behaviour is almost certainly what you want. A: Returning from inside a finally block will cause exceptions to be lost. A return statement inside a finally block will cause any exception that might be thrown in the try or catch block to be discarded. According to the Java Language Specification: If execution of the try block completes abruptly for any other reason R, then the finally block is executed, and then there is a choice: If the finally block completes normally, then the try statement completes abruptly for reason R. If the finally block completes abruptly for reason S, then the try statement completes abruptly for reason S (and reason R is discarded). Note: As per JLS 14.17 - a return statement always completes abruptly. A: I had a REALLY hard time tracking down a bug years ago that was caused by this. The code was something like: Object problemMethod() { Object rtn = null; try { rtn = somethingThatThrewAnException(); } finally { doSomeCleanup(); return rtn; } } What happened is that the exception was thrown down in some other code. It was being caught and logged and rethrown within the somethingThatThrewAnException() method. But the exception wasn't being propagated up past problemMethod(). After a LONG time of looking at this we finally tracked it down to the return statement. The return statement in the finally block was basically stopping the exception that happened in the try block from propagating up even though it wasn't caught. Like others have said, while it is legal to return from a finally block according to the Java spec, it is a BAD thing and shouldn't be done. A: Adding control structures and returns to finally{} blocks is just another example of "just because you can" abuses which are scattered throughout virtually all development languages. Jason was right in suggesting it could easily become a maintenance nightmare - the arguments against early returns from functions apply more-so to this case of "late returns". Finally blocks exist for one purpose, to allow you to completely tidy up after yourself, no matter what happened in all the preceding code. Principally this is closing / releasing file pointers, database connections etc., though I could see it being stretched to say adding in bespoke auditing. Anything that affects the return of the function should lie in the try{} block. Even if you had a method whereby you checked an external state, did a time-consuming operation, then checked that state again in case it became invalid, you would still want the second check inside the try{} - if it sat inside finally{} and the long operation failed, you would then be checking that state a second time needlessly.
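As a standalone illustration of the exception-swallowing behaviour described in the answers above, here is a minimal sketch (the class and method names are made up for the example); running it prints the string from the finally block and the RuntimeException simply vanishes:

public class FinallyDemo {

    static String eatException() {
        try {
            throw new RuntimeException("this exception is silently discarded");
        } finally {
            // The return completes abruptly and replaces the pending
            // exception, exactly as the JLS passage quoted above says.
            return "finally wins";
        }
    }

    public static void main(String[] args) {
        // Prints "finally wins"; no stack trace is ever seen.
        System.out.println(eatException());
    }
}

Compiling with javac -Xlint:finally flags the return, which is a good reason to keep that warning switched on.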
{ "language": "en", "url": "https://stackoverflow.com/questions/48088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "187" }
Q: C++ deleting a pointer to a pointer So I have a pointer to an array of pointers. If I delete it like this: delete [] PointerToPointers; Will that delete all the pointed to pointers as well? If not, do I have to loop over all of the pointers and delete them as well, or is there an easier way to do it? My google-fu doesn't seem to give me any good answers to this question. (And yeah, I know I need to use a vector. This is one of those "catch up on C++" type assignments in school.) A: See boost pointer container for a container that does the automatic deletion of contained pointers for you, while maintaining a syntax very close to ordinary STL containers. A: Yes you have to loop over the pointers, deleting individually. Reason: What if other code had pointers to the objects in your array? The C++ compiler doesn't know if that's true or not, so you have to be explicit. For an "easier way," two suggestions: (1) Make a subroutine for this purpose so at least you won't have to write the code more than once. (2) Use the "smart pointer" design paradigm where you hold an array of objects with reference-counters, then the objects are deleted when the objects are no longer referenced by any code. A: Pointers are pretty much just memory references and not spiffy little self-cleaning .net objects. Creating proper destructors for each class will make the deletion a little cleaner than massive loops throughout the code. A: Let's take a (pseudocoded) real-world example. Imagine that you had a class like this: class Street { public: Street(); ~Street(); private: int HouseNumbers_[]; }; typedef Street* StreetSign; If you have an array of street signs, and you delete your array of streetsigns, that doesn't mean that you automatically delete the streets. They're still there, bricks and mortar, they just don't have those signs pointing to them any more. You have got rid of those specific instances of pointers to the streets. An array of pointers is (conceptually) a bit like an array of integers, it's an array of numbers representing the memory locations of various objects. It isn't the objects themselves. If you delete[] the array of pointers, all you have done is delete an array of integers. A: I agree with Jason Cohen though we can be a bit clearer on the reason for needing to delete your pointers with the loop. For every "new" or dynamic memory allocation there needs to be a "delete", a memory de-allocation. Sometimes the "delete" can be hidden, as with smart pointers, but it is still there. int main() { int *pI = new int; int *pArr = new int[10]; so far in the code we have allocated two chunks of dynamic memory. The first is just a single int; the second is an array of ints. delete pI; delete [] pArr; these delete statements clear the memory that was allocated by the "new"s int **ppArr = new int *[10]; for( int indx = 0; indx < 10; ++indx ) { ppArr[indx] = new int; } This bit of code is doing both of the previous allocations. First we are creating space for our int pointers in a dynamic array. We then need to loop through and allocate an int for each spot in the array. for( int indx = 0; indx < 10; ++indx ) { delete ppArr[indx]; } delete [] ppArr; Note the order that I allocated this memory and then that I de-allocated it in the reverse order. This is because if we were to do the delete [] ppArr; first we would lose the array that tells us what our other pointers are. That chunk of memory would be given back to the system and so can no longer be reliably read.
int a=0; int b=1; int c=2; ppArr = new int *[3]; ppArr[0] = &a; ppArr[1] = &b; ppArr[2] = &c; This I think should be mentioned as well. Just because you are working with pointers does not mean that the memory those pointers point to was dynamically allocated. That is to say just because you have a pointer doesn't mean it necessarily needs to be deleted. The array I created here is dynamically allocated, but the pointers point to local instances of ints. When we delete this, we only need to delete the array. delete [] ppArr; return 0; } In the end, dynamically allocated memory can be tricky; wrapping it up safely, for example in a smart pointer or by using STL containers rather than your own, can make your life much more pleasant. A: I think you're going to have to loop over them, I'm afraid. A: I don't know why this was answered so confusingly long. If you delete the array of pointers, you will free the memory used for an array of (essentially) ints. A pointer to an object is an integer containing the address. You deleted a bunch of addresses, but no objects. delete does not care about the content of a memory space, it calls a destructor(s) and marks the mem as free. It does not care that it just deleted a bunch of addresses of objects, it merely sees ints. That's why you have to cycle through the array first! and call delete on every element, then you can delete the storage of the array itself. Well, now my answer got somewhat long... .... strange... ;) Edit: Jason's answer is not wrong, it just fails to hit the spot. Neither the compiler nor anything else in c(++) cares about you deleting stuff that is elsewhere pointed to. You can just do it. Other program parts trying to use the deleted objects will segfault on you. But no one will hinder you. Neither will it be a problem to destroy an array of pointers to objects, when the objects are referenced elsewhere.
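As a sketch of the smart pointer / STL route suggested above, here is what the cleanup-free version looks like with std::unique_ptr (C++11, with std::make_unique from C++14, so well after this thread, but it is the idiomatic answer today):

#include <memory>
#include <vector>

int main() {
    // Each element owns the int it points to.
    std::vector<std::unique_ptr<int>> numbers;
    for (int i = 0; i < 10; ++i) {
        numbers.push_back(std::make_unique<int>(i));
    }

    // No loop of deletes needed: when 'numbers' goes out of scope,
    // every unique_ptr deletes its int, then the vector frees itself.
    return 0;
}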
{ "language": "en", "url": "https://stackoverflow.com/questions/48094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Free text search integrated with code coverage Is there any tool which will allow me to perform a free text search over a system's code, but only over the code which was actually executed during a particular invocation? To give a bit of background, when learning my way around a new system, I frequently find myself wanting to discover where some particular value came from, but searching the entire code base turns up far more matches than I can reasonably assess individually. For what it's worth, I've wanted this in Perl and Java at one time or another, but I'd love to know if any languages have a system supporting this feature. A: You can generally twist a code coverage tool's arm and get a report that shows the paths that have been executed during a given run. This report should show the code itself, with the first few columns marked up according to the coverage tool's particular notation on whether a given path was executed. You might be able to use this straight up, or you might have to preprocess it and either remove the code that was not executed, or add a new notation on each line that tells whether it was executed (most tools will only show path information at control points): So from a coverage tool you might get a report like this: T- if(sometest) { x somecode; } else { - someother_code; } The notation T- indicates that the if statement only ever evaluated to true, and so only the first part of the code executed. The latter notation 'x' indicates that this line was executed. You should be able to form a regex that matches only when the first column contains a T, F, or x so you can capture all the control statements executed and lines executed. Sometimes you'll only get coverage information at each control point, which then requires you to parse the C file and mark the executed lines yourself. Not as easy, but not impossible either. Still, this sounds like an interesting question where the solution is probably more work than it's worth... -Adam
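As a rough sketch of the preprocessing idea above, here is a small Python filter; the T/F/x markers follow the hypothetical report format shown in the answer, so a real tool such as gcov or Devel::Cover would need its own pattern:

import re
import sys

# Match lines whose marker column shows execution: 'x' for executed
# statements, 'T' or 'F' for branches that were actually evaluated.
executed = re.compile(r'^\s*[TFx]')

with open(sys.argv[1]) as report:
    for number, line in enumerate(report, start=1):
        if executed.match(line):
            print('%d: %s' % (number, line.rstrip()))

# Pipe the result through grep (or load it in an editor) and your
# free-text search is now restricted to code that actually ran.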
{ "language": "en", "url": "https://stackoverflow.com/questions/48110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: DHCP overwrites Cisco VPN resolv.conf on Linux I'm using an Ubuntu 8.04 (x86_64) machine to connect to my employer's Cisco VPN. (The client didn't compile out of the box, but I found patches to update the client to compile on kernels released in the last two years.) This all works great, until my DHCP client decides to renew its lease and updates /etc/resolv.conf, replacing the VPN-specific name servers with my general network servers. Is there a good way to prevent my DHCP client from updating /etc/resolv.conf while my VPN is active? A: If you are running without NetworkManager handling the connections, use the resolvconf package to act as an intermediary to programs tweaking /etc/resolv.conf: sudo apt-get install resolvconf If you are using NetworkManager it will handle this for you, so get rid of the resolvconf package: sudo apt-get remove resolvconf I found out about this when setting up vpnc on Ubuntu last week. A search for vpn resolv.conf on ubuntuforums.org has 250 results, many of which are very related! A: If you are using the Ubuntu default with NetworkManager, try removing the CiscoVPN client and use the NetworkManager vpnc plugin to connect to the Cisco VPN. This should avoid all problems, since NetworkManager then knows about your VPN connection. A: I would advise following the advice from @Sean, but if that fails for whatever reason, it should be possible to configure dhclient to not request DNS servers in /etc/dhcp3/dhclient.conf A: chattr +i /etc/resolv.conf should work. ( -i to undo ) But the better thing is to configure your dhclient.conf: https://calomel.org/dhclient.html Look at superseding domain-name-servers, and domain-name. Also look at "send hostname;" If it works at your work place, you will have a cool hostname for your PC and not some weird name that DHCP servers assign. A: vpnc seems to be doing the right thing for my employer's cisco concentrator. I jump on and off the vpn, and it seems to update everything smoothly. A: The DHCP client daemon can be told not to update resolv.conf with a command line switch. (-r I think, depending on the client) That's less dynamic, because you'd have to restart/reconfigure DHCP when you connect, but not too hard. Similarly, you could just stop the service, but you might lose your IP in the meantime, so I wouldn't really recommend that. Alternatively, you could run the dhcpclient from within a cron job, adding the appropriate process checks. A: This problem is much more noticeable on networks with low DHCP lease ages. There is a bug filed in Ubuntu's dhcp3 package launchpad: https://bugs.launchpad.net/ubuntu/+source/dhcp3/+bug/90681 Which includes this patch in the description: --- /sbin/dhclient-script.orig 2007-03-08 19:19:56.000000000 +0000 +++ /sbin/dhclient-script 2007-03-08 19:19:46.000000000 +0000 @@ -13,6 +13,10 @@ # The alias handling in here probably still sucks. -mdz make_resolv_conf() { + # don't overwrite resolv.conf at RENEW time, since a VPN/PPTP tunnel may + # have updated it with remote DNS servers + [ "$reason" = "RENEW" ] && return + if [ -n "$new_domain_name" -o -n "$new_domain_name_servers" ]; then # Find out whether we are going to mount / rw exec 9>&0 </etc/fstab This change to /sbin/dhclient-script stops DHCP client from overwriting /etc/resolv.conf when it renews its lease.
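To make the dhclient.conf suggestions above concrete, here is a sketch of the relevant stanza; the addresses and domain are placeholders for your VPN's name servers:

# /etc/dhcp3/dhclient.conf
# "supersede" pins a value regardless of what the DHCP server offers,
# so a lease renewal can no longer rewrite the VPN name servers.
supersede domain-name-servers 10.0.2.1, 10.0.2.2;
supersede domain-name "example.corp";
send host-name "mymachine";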
{ "language": "en", "url": "https://stackoverflow.com/questions/48115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Glade or no glade: What is the best way to use PyGtk? I've been learning python for a while now with some success. I even managed to create one or two (simple) programs using PyGtk + Glade. The thing is: I am not sure if the best way to use GTK with python is by building the interfaces using Glade. I was wondering if the more experienced ones among us (remember, I'm just a beginner) could point out the benefits and caveats of using Glade as opposed to creating everything in the code itself (assuming that learning the correct gtk bindings wouldn't exactly be a problem). A: Glade is very useful for creating interfaces; it means you can easily change the GUI without doing much coding. You'll find that if you want to do anything useful (e.g. build a treeview) you will have to get familiar with various parts of the GTK documentation - in practice finding a good tutorial/examples. A: I started out using glade, but soon moved to just doing everything in code. Glade is nice for simple things, and it's good when you're learning how GTK organizes the widgets (how things are packed, etc). Constructing everything in code, however, you have much more flexibility. Plus, you don't have the glade dependency. A: I usually start with Glade until I come to a point where it doesn't have the features I need, e.g. creating a wizard. As long as I'm using the standard widgets that Glade provides, there's really no reason to hand-code the GUI. The more comfortable I become with how Glade formats the code, the better my hand-coding becomes. Not to mention, it's real easy to use Glade to make the underlying framework so you don't have to worry about all the initializations. A: If you're writing a traditional GUI application which reuses a lot of standard components from GTK+ (buttons, labels, containers etc.) I'd personally go with Glade + Kiwi (a convenience framework for building GTK+ GUI applications). The single greatest advantage to using Glade is that it greatly reduces layout/packing code. Here's an extremely simple example which already shows the issues with manually laying out a GUI (without using any helper functions): container = gtk.HBox() label = gtk.Label(str="test") container.add(label) For more examples take a look here. Even if you're writing a complicated custom widget you can always create a placeholder in Glade and replace that after instantiation. It shouldn't be all too long now for the Glade team to release a new version of the designer (3.6.0). This new version will add support for GtkBuilder, which replaces libglade (the actual library that transforms the Glade XML files into a widget tree). The new Glade designer also once again adds support for defining catalogs (sets of widgets) in Python, so you can easily add your own custom widgets. A: First, start to put this in perspective. You will be using GTK. This is a huge C library built in 1993 using the best traditions of 1970s coding style. It was built to help implement the GIMP, a Photoshop competitor wanna-be with user interface blunders of legend. A typical gui field might have forty or more parameters, mostly repetitive, having getters and setters. There will be pain. The GTK itself manages a complete dynamic type system in C using GObject. This makes debugging a special joy that requires manually walking through arrays of pointers to methods full of generic argument lists with implicit inheritance.
You will also be jumping through Pango libraries when you least expect it, e.g., using a Pango constant for where in a label the ellipsis go when the page is small. Expect more pain. By now, you are probably vowing to wrap all your GTK interactions in a Model-View-Controller architecture specific to your application. This is good. Using Glade, or gtkBuilder, or Stetic, will help corral the huge coupling problem of forty parameters to a function. Glade provides a basic GUI builder to drag and drop components together. The parameters and inherited parameters are somewhat separated out. The output of Glade is a .glade XML file which you will then read in, attach your callbacks ("signal handlers") to identically named functions, and query or update the in-memory version of that XML to get widgets that you then use pyGTK to manipulate. Glade itself is creaky and not well maintained. Using pyGTK gives you annoyingly fine-grained control in order to build your GUI. This will be verbose, copy-and-paste code. Each attribute will be a separate function call. The attribute setter does not return anything, so chaining the calls is out of the question. Usually, your IDE will give only minimal help on what functions mean and you will be constantly referring to DevHelp or some other tool. One would almost expect GTK GUIs were meant to fail. A: I would say that it depends: if you find that using Glade you can build the apps you want or need to make then that's absolutely fine. If however you actually want to learn how GTK works or you have some non-standard UI requirements you will have to dig into GTK internals (which are not that complicated). Personally I'm usually about 5 minutes into a rich client when I need some feature or customization that is simply impossible through a designer such as Glade or Stetic. Perhaps it's just me. Nevertheless it is still useful for me to bootstrap window design using a graphical tool. My recommendation: if making rich clients using GTK is going to be a significant part of your job/hobby then learn GTK as well since you will need to write that code someday. P.S. I personally find Stetic to be superior to Glade for design work, if a little bit more unstable. A: I recommend using Glade for rapid development, but not for learning. Why? Because sometimes you will need to tune up some widgets to make them work the way you want, and if you don't really know/understand the properties and attributes of every widget then you will be in trouble. A: Use GtkBuilder instead of Glade, it's integrated into Gtk itself instead of a separate library. The main benefit of Glade is that it's much, much easier to create the interface. It's a bit more work to connect signal handlers, but I've never felt that matters much. A: For quick and simple screens I use Glade. But for anything that needs finer levels of control, I create custom classes for what I actually need (this is important, because it's too easy to get carried away with generalisations). With skinny, application-specific classes, I can rapidly change the look and feel application wide from a single place. Rather like using CSS to maintain consistency for web sites. A: Personally I would recommend coding it out instead of using Glade. I'm still learning python and pyGtk but I will say that writing out the UI by hand gave me a lot of insight on how things work under the hood. Once you have learned it, I'd say give Glade or other UI designers a try, but definitely learn how to do it the "hard" way first.
A: You may use glade-2 to design, and use glade2py.py to generate the pure pygtk code; it uses pygtkcompat to support gtk3
{ "language": "en", "url": "https://stackoverflow.com/questions/48123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: Generating (pseudo)random alpha-numeric strings How can I generate a (pseudo)random alpha-numeric string, something like: 'd79jd8c' in PHP? A: One line solution: echo substr( str_shuffle( str_repeat( 'abcdefghijklmnopqrstuvwxyz0123456789', 10 ) ), 0, 7 ); You can change the substr parameter in order to set a different length for your string. A: Use the ASCII table to pick a range of letters, where the: $range_start , $range_end is a value from the decimal column in the ASCII table. I find that this method is nicer compared to the method described where the range of characters is specifically defined within another string. // range is numbers (48) through capital and lower case letters (122) $range_start = 48; $range_end = 122; $random_string = ""; $random_string_length = 10; for ($i = 0; $i < $random_string_length; $i++) { $ascii_no = round( mt_rand( $range_start , $range_end ) ); // generates a number within the range // finds the character represented by $ascii_no and adds it to the random string // study **chr** function for a better understanding $random_string .= chr( $ascii_no ); } echo $random_string; Note that ASCII 48-122 also includes a handful of punctuation characters between the digit and letter blocks, so strictly alphanumeric output needs a filter or a narrower range. See More: * *chr function *mt_rand function A: I know it's an old post but I'd like to contribute with a class I've created based on Jeremy Ruten's answer and improved with suggestions in comments: class RandomString { private static $characters = 'abcdefghijklmnopqrstuvwxyz0123456789'; private static $string; private static $length = 8; //default random string length public static function generate($length = null) { if($length){ self::$length = $length; } self::$string = ''; // reset between calls so repeated calls don't accumulate $characters_length = strlen(self::$characters) - 1; for ($i = 0; $i < self::$length; $i++) { self::$string .= self::$characters[mt_rand(0, $characters_length)]; } return self::$string; } } A: Simple guys .... but remember each byte is random between 0 and 255 which for a random string will be fine. Also remember you'll have two characters to represent each byte. $str = bin2hex(random_bytes(32)); // 64 character string returned A: You can use the following code. It is similar to existing functions except that you can force special character count: function random_string() { // 8 characters: 7 lower-case alphabets and 1 digit $character_sets = [ ["count" => 7, "characters" => "abcdefghijklmnopqrstuvwxyz"], ["count" => 1, "characters" => "0123456789"] ]; $temp_array = array(); foreach ($character_sets as $character_set) { for ($i = 0; $i < $character_set["count"]; $i++) { $random = random_int(0, strlen($character_set["characters"]) - 1); $temp_array[] = $character_set["characters"][$random]; } } shuffle($temp_array); return implode("", $temp_array); } A: Maybe I missed something here, but here's a way using the uniqid() function. A: I have made the following quick function just to play around with the range() function. It just might help someone sometime.
function pseudostring($length = 50) { // Generate arrays with characters and numbers $lowerAlpha = range('a', 'z'); $upperAlpha = range('A', 'Z'); $numeric = range('0', '9'); // Merge the arrays $workArray = array_merge($numeric, array_merge($lowerAlpha, $upperAlpha)); $returnString = ""; // Add random characters from the created array to a string for ($i = 0; $i < $length; $i++) { $character = $workArray[rand(0, 61)]; $returnString .= $character; } return $returnString; } A: I like this function for the job function randomKey($length) { $pool = array_merge(range(0,9), range('a', 'z'),range('A', 'Z')); $key = ''; for($i=0; $i < $length; $i++) { $key .= $pool[mt_rand(0, count($pool) - 1)]; } return $key; } echo randomKey(20); A: First make a string with all your possible characters: $characters = 'abcdefghijklmnopqrstuvwxyz0123456789'; You could also use range() to do this more quickly. Then, in a loop, choose a random number and use it as the index to the $characters string to get a random character, and append it to your string: $string = ''; $max = strlen($characters) - 1; for ($i = 0; $i < $random_string_length; $i++) { $string .= $characters[mt_rand(0, $max)]; } $random_string_length is the length of the random string. A: Generate cryptographically strong, random (potentially) 8-character string using the openssl_random_pseudo_bytes function: echo bin2hex(openssl_random_pseudo_bytes(4)); Procedural way: function randomString(int $length): string { return bin2hex(openssl_random_pseudo_bytes($length)); } Update: PHP7 introduced the random_x() functions which should be even better. If you come from PHP 5.X, use the excellent paragonie/random_compat library which is a polyfill for random_bytes() and random_int() from PHP 7. function randomString($length) { return bin2hex(random_bytes($length)); } A: function generateRandomString($length = 10) { $characters = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'; $charactersLength = strlen($characters); $randomString = ''; for ($i = 0; $i < $length; $i++) { $randomString .= $characters[rand(0, $charactersLength - 1)]; } return $randomString; } echo generateRandomString(); A: If you want a very easy way to do this, you can lean on existing PHP functions. This is the code I use: substr( sha1( time() ), 0, 15 ) time() gives you the current time in seconds since epoch, sha1() hashes it to a string of 0-9a-f, and substr() lets you choose a length. You don't have to start at character 0, and whatever the difference is between the two numbers will be the length of the string. Note that this output is predictable, since it depends only on the clock, so don't use it where unpredictability matters. A: First list the desired characters $chars = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'; Use the str_shuffle($string) function. This function will provide you a randomly shuffled string. $alpha=substr(str_shuffle($chars), 0, 50); Here 50 is the length of the resulting string; because the characters are only shuffled, not drawn with replacement, no character repeats and the maximum length is the size of $chars. A: Jeremy's answer is great. If, like me, you're unsure of how to implement range(), you can see my version using range(). <?php $character_array = array_merge(range('a', 'z'), range(0, 9)); $string = ""; for($i = 0; $i < 6; $i++) { $string .= $character_array[rand(0, (count($character_array) - 1))]; } echo $string; ?> This does the exact same thing as Jeremy's but uses merged arrays where he uses a string, and uses count() where he uses strlen().
A: This is something I use: $cryptoStrong = true; // passed by reference below; set to whether a cryptographically strong algorithm was used $length = 16; // Any length you want $bytes = openssl_random_pseudo_bytes($length, $cryptoStrong); $randomString = bin2hex($bytes); You can see the Docs for openssl_random_pseudo_bytes here, and the Docs for bin2hex here A: 1 line: $FROM = 0; $TO = 'zzzz'; $code = base_convert(rand( $FROM ,base_convert( $TO , 36,10)),10,36); echo $code; A: The modern way to do that with type hints / random_int for real randomness function random_string(int $size): string { $characters = array_merge( range(0, 9), range('A', 'Z') ); $string = ''; $max = count($characters) - 1; for ($i = 0; $i < $size; $i++) { $string .= $characters[random_int(0, $max)]; } return $string; } A: public function randomString($length = 8) { $characters = implode([ 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', 'abcdefghijklmnopqrstuvwxyz', '0123456789', //'!@#$%^&*?' ]); $charactersLength = strlen($characters) - 1; $string = ''; while ($length) { $string .= $characters[mt_rand(0, $charactersLength)]; --$length; } return $string; }
{ "language": "en", "url": "https://stackoverflow.com/questions/48124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "87" }
Q: PHP + MySql + Stored Procedures, how do I get access an "out" value? Documentation is severely lacking on anything to do with stored procedures in mysql with PHP. I currently have a stored procedure that I call via PHP, how can I get the value of an out parameter? A: Here's an example of how to do this with mysql, mysqli, and pdo: http://www.joeyrivera.com/2009/using-mysql-stored-procedures-with-php-mysqlmysqlipdo/ A: it looks like it's answered in this post: http://forums.mysql.com/read.php?52,198596,198717#msg-198717 With mysqli PHP API: Assume sproc myproc( IN i int, OUT j int ): $mysqli = new mysqli( "HOST", "USR", "PWD", "DBNAME" ); $ivalue=1; $res = $mysqli->multi_query( "CALL myproc($ivalue,@x);SELECT @x" ); if( $res ) { $results = 0; do { if ($result = $mysqli->store_result()) { printf( "<b>Result #%u</b>:<br/>", ++$results ); while( $row = $result->fetch_row() ) { foreach( $row as $cell ) echo $cell, "&nbsp;"; } $result->close(); if( $mysqli->more_results() ) echo "<br/>"; } } while( $mysqli->next_result() ); } $mysqli->close();
{ "language": "en", "url": "https://stackoverflow.com/questions/48126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Monitoring CPU Core Usage on Terminal Servers I have Windows 2003 terminal servers, multi-core. I'm looking for a way to monitor individual CPU core usage on these servers. It is possible for an end-user to have a run-away process (e.g. Internet Explorer or Outlook). The core for that process may spike to near 100% leaving the other cores 'normal'. Thus, the overall CPU usage on the server is just the total of all the cores, so if 7 of the cores on an 8-core server are idle and the 8th is running at 100% then 1/8 = 12.5% usage. What utility can I use to monitor multiple servers? If the CPU usage for a core is "high" what would I use to determine the offending process and then how could I automatically kill that process if it was on the 'approved kill process' list? A product from http://www.packettrap.com/ called PT360 would be perfect except they use SNMP to get data and SNMP appears to only give total CPU usage, it's not broken out by an individual core. Take a look at their Dashboard option with the CPU gauge 'gadget'. That's exactly what I need if only it worked at the core level. Any ideas? A: Individual CPU usage is available through the standard Windows performance counters. You can monitor this in perfmon. However, it won't give you the result you are looking for. Unless a thread/process has been explicitly bound to a single CPU, a run-away process will not spike one core to 100% while all the others idle. The run-away process will bounce around between all the processors. I don't know why Windows schedules threads this way, presumably because there is no gain from forcing affinity and some loss due to having to handle interrupts on particular cores. You can see this easily enough just in Task Manager. Watch the individual CPU graphs when you have a single compute-bound process running. A: You can give Spotlight on Windows a try. You can graphically drill into all sorts of performance and load indicators. It's freeware. A: perfmon from Microsoft can monitor each individual CPU. perfmon also works remotely and you can monitor various aspects of Windows. I'm not sure if it helps to find run-away processes because the Windows scheduler does not always execute a process on the same CPU -> on your 8 CPU machine you will see 12.5% usage on all CPUs if one process runs away.
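For an ad hoc look at the per-core counters mentioned above, the typeperf tool that ships with Windows will print them from the command line (here sampling every 5 seconds; add -s <machine> to point it at a remote server):

typeperf "\Processor(*)\% Processor Time" -si 5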
{ "language": "en", "url": "https://stackoverflow.com/questions/48132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I allow incoming connections to a server inside of VirtualBox? I have a NAT configured to run when loading up my favorite Linux distribution in VirtualBox. This allows outgoing connections to work successfully. How do I allow incoming connections to this box, like, say, Web traffic? The IP address is 10.0.2.15. A ping request from my main box results in a timeout. A: VirtualBox (after version 1.3.8, anyway) will let you map incoming connections in the NAT configuration. There's an excellent tutorial on Aviran's Place that describes the steps to configure port mapping.
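For reference, the port mapping that tutorial describes boils down to a few VBoxManage commands of this shape (the VM name, the "guesthttp" rule name, and the ports are placeholders; "pcnet" matches the default AMD PCNet virtual NIC of that era and would differ for other adapters):

VBoxManage setextradata "MyLinuxVM" "VBoxInternal/Devices/pcnet/0/LUN#0/Config/guesthttp/Protocol" TCP
VBoxManage setextradata "MyLinuxVM" "VBoxInternal/Devices/pcnet/0/LUN#0/Config/guesthttp/GuestPort" 80
VBoxManage setextradata "MyLinuxVM" "VBoxInternal/Devices/pcnet/0/LUN#0/Config/guesthttp/HostPort" 8080

After restarting the VM, connections to the host's port 8080 reach the guest's port 80. Note also that VirtualBox's NAT does not answer pings from the host, so the timeout mentioned in the question is expected even once TCP forwarding works.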
{ "language": "en", "url": "https://stackoverflow.com/questions/48135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What are advantages of bytecode over native code? It seems like anything you can do with bytecode you can do just as easily and much faster in native code. In theory, you could even retain platform and language independence by distributing programs and libraries in bytecode then compiling to native code at installation, rather than JITing it. So in general, when would you want to execute bytecode instead of native? A: Bytecode creates an extra level of indirection. The advantages of this extra level of indirection are: * *Platform independence *Can create any number of programming languages (syntax) and have them compile down to the same bytecode. *Could easily create cross language converters *x86, x64, and IA64 no longer need to be compiled as separate binaries. Only the proper virtual machine needs to be installed. *Each OS simply needs to create a virtual machine and it will have support for the same program. *Just in time compilation allows you to update a program just by replacing a single patched source file. (Very beneficial for web pages) Some of the disadvantages: * *Performance *Easier to decompile A: Hank Shiffman from SGI said (a long time ago, but it's still true): There are three advantages of Java using byte code instead of going to the native code of the system: * *Portability: Each kind of computer has its unique instruction set. While some processors include the instructions for their predecessors, it's generally true that a program that runs on one kind of computer won't run on any other. Add in the services provided by the operating system, which each system describes in its own unique way, and you have a compatibility problem. In general, you can't write and compile a program for one kind of system and run it on any other without a lot of work. Java gets around this limitation by inserting its virtual machine between the application and the real environment (computer + operating system). If an application is compiled to Java byte code and that byte code is interpreted the same way in every environment then you can write a single program which will work on all the different platforms where Java is supported. (That's the theory, anyway. In practice there are always small incompatibilities lying in wait for the programmer.) *Security: One of Java's virtues is its integration into the Web. Load a web page that uses Java into your browser and the Java code is automatically downloaded and executed. But what if the code destroys files, whether through malice or sloppiness on the programmer's part? Java prevents downloaded applets from doing anything destructive by disallowing potentially dangerous operations. Before it allows the code to run it examines it for attempts to bypass security. It verifies that data is used consistently: code that manipulates a data item as an integer at one stage and then tries to use it as a pointer later will be caught and prevented from executing. (The Java language doesn't allow pointer arithmetic, so you can't write Java code to do what we just described. However, there is nothing to prevent someone from writing destructive byte code themselves using a hexadecimal editor or even building a Java byte code assembler.) It generally isn't possible to analyze a program's machine code before execution and determine whether it does anything bad. Tricks like writing self-modifying code mean that the evil operations may not even exist until later.
But Java byte code was designed for this kind of validation: it doesn't have the instructions a malicious programmer would use to hide their assault. *Size: In the microprocessor world RISC is generally preferable over CISC. It's better to have a small instruction set and use many fast instructions to do a job than to have many complex operations implemented as single instructions. RISC designs require fewer gates on the chip to implement their instructions, allowing for more room for pipelines and other techniques to make each instruction faster. In an interpreter, however, none of this matters. If you want to implement a single instruction for the switch statement with a variable length depending on the number of case clauses, there's no reason not to do so. In fact, a complex instruction set is an advantage for a web-based language: it means that the same program will be smaller (fewer instructions of greater complexity), which means less time to transfer across our speed-limited network. So when considering byte code vs native, consider which trade-offs you want to make between portability, security, size, and execution speed. If speed is the only important factor, go native. If any of the others are more important, go with bytecode. I'll also add that maintaining a series of OS and architecture-targeted compilations of the same code base for every release can become very tedious. It's a huge win to use the same Java bytecode on multiple platforms and have it "just work." A: All good answers, but my hot-button has been hit - performance. If the code being run spends all its time calling library/system routines - file operations, database operations, sending windows messages, then it doesn't matter very much if it's JITted, because most of the clock time is spent waiting for those lower-level operations to complete. However, if the code contains things we usually call "algorithms", that have to be fast and don't spend much time calling functions, and if those are used often enough to be a performance problem, then JIT is very important. A: I think you just answered your own question: platform independence. Platform-independent bytecode is produced and distributed to its target platform. When executed it's quickly compiled to native code either before execution begins, or simultaneously (Just In Time). The Java JVM and presumably the .NET runtimes operate on this principle. A: Here: http://slashdot.org/developers/02/01/31/013247.shtml Go see what the geeks of Slashdot have to say about it! Little dated, but very good comments! A: The performance of essentially any program will improve if it is compiled, executed with profiling, and the results fed back into the compiler for a second pass. The code paths which are actually used will be more aggressively optimized, loops unrolled to exactly the right degree, and the hot instruction paths arranged to maximize I$ hits. All good stuff, yet it is almost never done because it is annoying to go through so many steps to build a binary. This is the advantage of running the bytecode for a while before compiling it to native code: profiling information is automatically available. The result after Just-In-Time compilation is highly optimized native code for the specific data the program is processing. Being able to run the bytecode also enables more aggressive native optimization than a static compiler could safely use. 
For example, if one of the arguments to a function is noted to always be NULL, all handling for that argument can simply be omitted from the native code. There will be a brief validity check of the arguments in the function prologue; if that argument is not NULL, the VM falls back to the bytecode and starts profiling again. A: Ideally you would have portable bytecode that compiles Just In Time to native code. I think the reason bytecode interpreters exist without JIT is due primarily to the practical fact that native code compilation adds complexity to a virtual machine. It takes time to build, debug, and maintain that additional component. Not everyone has the time or resources to make that commitment. A secondary factor is safety. It's much easier to verify an interpreter won't crash than to guarantee the same for native code. Third is performance. It can often take more time to generate machine code than to interpret bytecode for small pieces of code that only run once. A: Portability and platform independence are probably the most notable advantages of bytecode over native code.
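To make the "extra level of indirection" from the first answer concrete, here is a toy illustration (the opcodes are invented for this sketch, not any real VM's instruction set). The dispatch loop is the indirection in miniature: the same int[] "program" runs unchanged wherever the interpreter runs, which is the portability argument in a nutshell.

/** A toy stack-based bytecode interpreter. */
public class ToyVM {
    static final int PUSH = 0, ADD = 1, PRINT = 2, HALT = 3;

    static void run(int[] code) {
        int[] stack = new int[16];
        int sp = 0, pc = 0;
        while (true) {
            switch (code[pc++]) {           // the dispatch: one indirection per instruction
                case PUSH:  stack[sp++] = code[pc++]; break;   // operand follows the opcode
                case ADD:   sp--; stack[sp - 1] += stack[sp]; break;
                case PRINT: System.out.println(stack[--sp]); break;
                case HALT:  return;
            }
        }
    }

    public static void main(String[] args) {
        run(new int[]{PUSH, 2, PUSH, 3, ADD, PRINT, HALT});    // prints 5
    }
}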
{ "language": "en", "url": "https://stackoverflow.com/questions/48144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: Tool for analyzing .Net app memory dumps Can somebody suggest a good free tool for analyzing .Net memory dumps other than Adplus/windbg/sos ? A: You can try out DebugDiag 1.1 A: I found MemoScope.Net - an excellent GUI for WinDbg and ClrMd. A: You can load sos and your memory dump into Visual Studio to at least insulate you from the 'interesting' UI that WinDbg presents. A: Take a look at SOS Assist, it provides a GUI around SOS. A: I fully recommend .Net Memory Profiler. Besides being a great live memory profiler for .Net applications, it can also load memory dumps and let you traverse the objects in the dump in a very intuitive and easy way. Opening a big dump (> 1 GB) can take a few hours though, but for us it's worth the wait. I don't know if they have a trial version, but if they do you should definitely give them a shot. A: You could take a look at sosnet, which is a small open-source WinForms application that wraps windbg/sos. https://bitbucket.org/grozeille/sosnet It's handy and straightforward to use. Please try it out, and contribute to it by submitting ideas / patches
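For reference, the baseline SOS workflow that these GUI tools wrap looks roughly like this (WinDbg commands; .loadby sos mscorwks is the .NET 2.0-era form, while on .NET 4+ it is .loadby sos clr; the address is a placeholder):

.loadby sos mscorwks            (load SOS from the loaded runtime's directory)
!dumpheap -stat                 (histogram of the managed heap, grouped by type)
!dumpheap -type System.String   (addresses of the instances of one type)
!do 0x01234567                  (dump a single object at that address)
!gcroot 0x01234567              (show why that object is still reachable)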
{ "language": "en", "url": "https://stackoverflow.com/questions/48148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Configure static routes on Windows There is a netsh and a route command on Windows. From their help text it looks like both can be used to configure static routes. When should you use one and not the other? Is IPv6 a distinguishing factor here? A: route is a very old and basic tool for displaying and modifying the entries in the local IP routing table while netsh is the newer, more robust command-line scripting utility that allows you to, either locally or remotely, manipulate the network configuration. netsh has a zillion more features than route; it can even save your current settings as a script that another instance of netsh can parse. Check out Using netsh to see the giant feature set and compare it to how basic and simple route is.
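To make the comparison concrete, here is the same persistent static route added both ways (the interface name is an assumption; list yours with netsh interface ipv4 show interfaces):

:: classic tool, persistent via -p
route -p add 10.0.0.0 mask 255.0.0.0 192.168.1.1

:: netsh equivalent
netsh interface ipv4 add route 10.0.0.0/8 "Local Area Connection" 192.168.1.1 store=persistent

:: netsh also has a parallel IPv6 context, e.g.
netsh interface ipv6 show route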
{ "language": "en", "url": "https://stackoverflow.com/questions/48157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Embedding a remote Python shell in an application You can embed the IPython shell inside of your application so that it launches the shell in the foreground. Is there a way to embed a telnet server in a python app so that you can telnet to a certain port and launch a remote IPython shell? Any tips for redirecting the input/output streams for IPython or how to hook it up to a telnet server library or recommendations for other libraries that could be used to implement this are much appreciated. A: Python includes a telnet client, but not a telnet server. You can implement a telnet server using Twisted. Here's an example. As for hooking these things together, that's up to you. A: Use Twisted Manhole. Docs are a bit lacking, but it's easy enough to set up a telnet-based remote server and it comes with a GTK-based GUI. * *Main Twisted site *twisted.manhole API docs A: I think you should base your server class on the SocketServer module's TCPServer class from the standard library. You'll need to write a RequestHandler to read and echo input, but a lot of the heavy lifting is already done for you. You can use the ThreadingMixIn to make the server multi-threaded very easily. A: Try using the XML-RPC support in the standard library (the SimpleXMLRPCServer module) instead.
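A hedged sketch of the SocketServer suggestion above: a plain-stdlib remote console (Python 2 era, to match the question), not IPython itself. One caveat is baked into the comments: print output from executed statements still goes to the server process's stdout unless you also swap sys.stdout.

import code
import SocketServer

class ReplHandler(SocketServer.StreamRequestHandler):
    """Serve one interactive console per telnet connection."""
    def handle(self):
        handler = self

        class RemoteConsole(code.InteractiveConsole):
            def write(self, data):             # banner and tracebacks
                handler.wfile.write(data)
            def raw_input(self, prompt=""):    # the >>> / ... prompts
                handler.wfile.write(prompt)
                line = handler.rfile.readline()
                if not line:
                    raise EOFError()
                return line.rstrip("\r\n")

        try:
            # Note: sys.stdout is NOT redirected here, so print statements
            # executed in the console appear on the server, not the client.
            RemoteConsole(locals={"server": self.server}).interact("Embedded Python shell.")
        except SystemExit:
            pass

class ReplServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
    allow_reuse_address = True

if __name__ == "__main__":
    # Unauthenticated! Bind to localhost only.
    ReplServer(("127.0.0.1", 4444), ReplHandler).serve_forever()

Then connect with: telnet 127.0.0.1 4444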
{ "language": "en", "url": "https://stackoverflow.com/questions/48176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Video Thumbnails in Java I want to generate a thumbnail preview of videos in Java. I'm mostly unfamiliar with JMF and video manipulation. * *Is there an easy way to do it? *What about codecs? Will I have to deal with them? *Is any video type supported? (including QuickTime) A: Well, since you're not stuck with JMF, have you considered Xuggler? Xuggler is a Java API that uses FFmpeg under the covers to do all video decoding and encoding. It's free and LGPL licensed. In fact, we have a tutorial that shows How to Make Thumbnails of an Existing File A: There seem to be a few examples out there that are far better than what I was going to send you. See http://krishnabhargav.blogspot.com/2008/02/processing-videos-in-java.html. I'd agree with Stu, however. If you can find a way to get what you want using some command-line tools (and run them using Commons-Exec), you might have a better overall solution than depending on what is essentially the Sanskrit of Java extensions. A: Are you sure that JMF is right for you? Unfortunately, it is not in particularly good shape. Unless you are already committed to JMF, you very well may want to investigate alternatives. Wikipedia has a decent overview at en.wikipedia.org/wiki/Java_Media_Framework Many JMF developers have complained that it supports few codecs and formats in modern use. Its all-Java version, for example, cannot play MPEG-2, MPEG-4, Windows Media, RealMedia, most QuickTime movies, Flash content newer than Flash 2, and needs a plug-in to play the ubiquitous MP3 format. While the performance packs offer the ability to use the native platform's media library, they're only offered for Linux, Solaris and Windows. Furthermore, Windows-based JMF developers can unwittingly think JMF provides support for more formats than it does, and be surprised when their application is unable to play those formats on other platforms. Another knock against JMF is Sun's seeming abandonment of it. The API has not been touched since 1999, and the last news item on JMF's home page was posted in November 2004. While JMF is built for extensibility, there are few such third-party extensions. Furthermore, editing functionality in JMF is effectively non-existent, which makes a wide range of potential applications impractical. A: My own server-side app shells out to FFmpeg to do the encoding. I'm 98.42% sure FFmpeg does snapshots, too. (It is an all singing, all dancing beast of a program. The command line options alone could fill a book.) Check it out: ffmpeg.mplayerhq.hu (A small Java shell-out sketch appears after the last answer below.) A: There is a relatively newer option called JThumbnail that you can find here: https://github.com/makbn/JThumbnail JThumbnail is a Java library for creating thumbnails of common types of file including .doc, .docx, .pdf, .mp4, etc.; see the full list
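A hedged sketch of the FFmpeg shell-out approach referenced above: grab one frame at the 5-second mark as a PNG. The file names are made up, ffmpeg must be installed separately, and inheritIO() requires Java 7+.

import java.io.IOException;

public class ThumbnailGrabber {
    public static void grab(String video, String thumb)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
                "ffmpeg",
                "-ss", "5",        // seek to the 5-second mark
                "-i", video,       // input file, e.g. "movie.mp4"
                "-vframes", "1",   // emit a single frame
                "-y",              // overwrite the output if it exists
                thumb)             // output file, e.g. "thumb.png"
            .inheritIO()
            .start();
        if (p.waitFor() != 0) {
            throw new IOException("ffmpeg exited with an error");
        }
    }
}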
{ "language": "en", "url": "https://stackoverflow.com/questions/48179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How do I find out which process is listening on a TCP or UDP port on Windows? How do I find out which process is listening on a TCP or UDP port on Windows? A: Follow these tools: From cmd: C:\> netstat -anob with Administrator privileges. Process Explorer Process Dump Port Monitor All from sysinternals.com. If you just want to know the processes running and the threads under each process, I recommend learning about wmic. It is a wonderful command-line tool, which gives you much more than you might expect. Example: c:\> wmic process list brief /every:5 The above command will show a list of all processes in brief every 5 seconds. To know more, you can just go with the /? command of Windows, for example, c:\> wmic /? c:\> wmic process /? c:\> wmic process list /? And so on and so forth. :) A: To find the PID using port 8000 netstat -aon | findstr '8000' To kill that process in Windows taskkill /pid <pid> /f where <pid> is the process ID which you get from the first command A: * *Open a command prompt window (as Administrator) From "Start\Search box" Enter "cmd" then right-click on "cmd.exe" and select "Run as Administrator" *Enter the following text then hit Enter. netstat -abno -a Displays all connections and listening ports. -b Displays the executable involved in creating each connection or listening port. In some cases well-known executables host multiple independent components, and in these cases the sequence of components involved in creating the connection or listening port is displayed. In this case the executable name is in [] at the bottom, on top is the component it called, and so forth until TCP/IP was reached. Note that this option can be time-consuming and will fail unless you have sufficient permissions. -n Displays addresses and port numbers in numerical form. -o Displays the owning process ID associated with each connection. *Find the Port that you are listening on under "Local Address" *Look at the process name directly under that. NOTE: To find the process under Task Manager * *Note the PID (process identifier) next to the port you are looking at. *Open Windows Task Manager. *Select the Processes tab. *Look for the PID you noted when you did the netstat in step 1. * *If you don’t see a PID column, click on View / Select Columns. Select PID. *Make sure “Show processes from all users” is selected. A: Use: netstat -a -o This shows the PID of the process running on a particular port. Note the process ID, then go to Task Manager's Services or Details tab and end the process that has the same PID. Thus you can kill a process running on a particular port in Windows. A: You can also check the reserved ports with the command below. Hyper-V reserves some ports, for instance. netsh int ipv4 show excludedportrange protocol=tcp A: Get PID and Image Name Use only one command: for /f "tokens=5" %a in ('netstat -aon ^| findstr 9000') do tasklist /FI "PID eq %a" where 9000 should be replaced by your port number. 
The output will contain something like this: Image Name PID Session Name Session# Mem Usage ========================= ======== ================ =========== ============ java.exe 5312 Services 0 130,768 K Explanation: * *it iterates through every line from the output of the following command: netstat -aon | findstr 9000 *from every line, the PID (%a - the name is not important here) is extracted (PID is the 5th element in that line) and passed to the following command tasklist /FI "PID eq 5312" If you want to skip the header and the return of the command prompt, you can use: echo off & (for /f "tokens=5" %a in ('netstat -aon ^| findstr 9000') do tasklist /NH /FI "PID eq %a") & echo on Output: java.exe 5312 Services 0 130,768 K A: First we find the process ID of the particular task that we need to eliminate in order to free the port: Type netstat -n -a -o After executing this command in the Windows command line prompt (cmd), select the PID, which I think is in the last column. Suppose this is 3312. Now type taskkill /F /PID 3312 You can now cross-check by typing the netstat command. NOTE: sometimes Windows doesn’t allow you to run this command directly in CMD, so first you need to follow these steps: From the start menu -> command prompt (right-click on command prompt, and run as administrator) A: Using PowerShell... ...this would be your friend (replace 8080 with your port number): netstat -abno | Select-String -Context 0,1 -Pattern 8080 Sample output > TCP 0.0.0.0:8080 0.0.0.0:0 LISTENING 2920 [tnslsnr.exe] > TCP [::]:8080 [::]:0 LISTENING 2920 [tnslsnr.exe] So in this example tnslsnr.exe (OracleXE database) is listening on port 8080. Quick explanation * *Select-String is used to filter the lengthy output of netstat for the relevant lines. *-Pattern tests each line against a regular expression. *-Context 0,1 will output 0 leading lines and 1 trailing line for each pattern match. A: Programmatically, you need stuff from iphlpapi.h, for example GetTcpTable2(). Structures like MIB_TCP6ROW2 contain the owner PID. (A small C sketch of this approach appears after the last answer below.) A: With PowerShell 5 on Windows 10 or Windows Server 2016, run the Get-NetTCPConnection cmdlet. I guess that it should also work on older Windows versions. The default output of Get-NetTCPConnection does not include the process ID for some reason, and it is a bit confusing. However, you could always get it by formatting the output. The property you are looking for is OwningProcess. 
* *If you want to find out the ID of the process that is listening on port 443, run this command: PS C:\> Get-NetTCPConnection -LocalPort 443 | Format-List LocalAddress : :: LocalPort : 443 RemoteAddress : :: RemotePort : 0 State : Listen AppliedSetting : OwningProcess : 4572 CreationTime : 02.11.2016 21:55:43 OffloadState : InHost *Format the output to a table with the properties you look for: PS C:\> Get-NetTCPConnection -LocalPort 443 | Format-Table -Property LocalAddress, LocalPort, State, OwningProcess LocalAddress LocalPort State OwningProcess ------------ --------- ----- ------------- :: 443 Listen 4572 0.0.0.0 443 Listen 4572 *If you want to find out the name of the process, run this command: PS C:\> Get-Process -Id (Get-NetTCPConnection -LocalPort 443).OwningProcess Handles NPM(K) PM(K) WS(K) CPU(s) Id SI ProcessName ------- ------ ----- ----- ------ -- -- ----------- 143 15 3448 11024 4572 0 VisualSVNServer A: To get a list of all the owning process IDs associated with each connection: netstat -ao |find /i "listening" If you want to kill a process, take its ID and use this command so that the port becomes free: taskkill /F /PID <pid> A: It is very simple to find the PID listening on a given port in Windows. The following are the steps: * *Go to run → type cmd → press Enter. *Write the following command... netstat -aon | findstr [port number] (Note: Don't include square brackets.) *Press Enter... *Then cmd will give you the detail of the service running on that port along with the PID. *Open Task Manager and hit the Services tab and match the PID with that of the cmd, and that's it. A: PowerShell TCP Get-Process -Id (Get-NetTCPConnection -LocalPort YourPortNumberHere).OwningProcess UDP Get-Process -Id (Get-NetUDPEndpoint -LocalPort YourPortNumberHere).OwningProcess cmd netstat -a -b (Add -n to stop it trying to resolve hostnames, which will make it a lot faster.) Note Dane's recommendation for TCPView. It looks very useful! -a Displays all connections and listening ports. -b Displays the executable involved in creating each connection or listening port. In some cases well-known executables host multiple independent components, and in these cases the sequence of components involved in creating the connection or listening port is displayed. In this case the executable name is in [] at the bottom, on top is the component it called, and so forth until TCP/IP was reached. Note that this option can be time-consuming and will fail unless you have sufficient permissions. -n Displays addresses and port numbers in numerical form. -o Displays the owning process ID associated with each connection. A: For Windows: netstat -aon | find /i "listening" A: For Windows, if you want to find stuff listening or connected to port 1234, execute the following at the cmd prompt: netstat -na | find "1234" A: Use TCPView if you want a GUI for this. It's the old Sysinternals application that Microsoft bought out. A: There's a native GUI for Windows: * *Start menu → All Programs → Accessories → System Tools → Resource Monitor *or run resmon.exe, *or from TaskManager → Performance tab. A: netstat -aof | findstr :8080 (Change 8080 for any port) A: To find out which ports a specific process (PID) is using: netstat -ano | findstr 1234 Where 1234 is the PID of your process. [Go to Task Manager → Services/Processes tab to find out the PID of your application.] A: The -b switch mentioned in most answers requires you to have administrative privileges on the machine. 
You don't really need elevated rights to get the process name! Find the PID of the process running on the port number (e.g., 8080) netstat -ano | findStr "8080" Find the process name by PID tasklist /fi "pid eq 2216" A: In case someone needs an equivalent for macOS like I did, here it is: lsof -i tcp:8080 After you get the PID of the process, you can kill it with: kill -9 <PID> A: Just open a command shell and type (say your port is 123456): netstat -a -n -o | find "123456" You will see everything you need. The headers are: Proto Local Address Foreign Address State PID TCP 0.0.0.0:37 0.0.0.0:0 LISTENING 1111 This is as mentioned here. A: Use the batch script below, which takes a process name as an argument and gives the netstat output for the process. @echo off set procName=%1 for /f "tokens=2 delims=," %%F in ('tasklist /nh /fi "imagename eq %1" /fo csv') do call :Foo %%~F goto End :Foo set z=%1 echo netstat for : "%procName%" which had pid "%1" echo ---------------------------------------------------------------------- netstat -ano |findstr %z% goto :eof :End A: Based on the answers with info and kill, for me it is useful to combine them into one command. You can run this from cmd to get information about the process that is listening on a given port (example 8080): for /f "tokens=3 delims=LISTENING" %i in ('netstat -ano ^| findStr "8080" ^| findStr "["') do @tasklist /nh /fi "pid eq %i" Or if you want to kill it: for /f "tokens=3 delims=LISTENING" %i in ('netstat -ano ^| findStr "8080" ^| findStr "["') do @Taskkill /F /PID %i You can also put those commands into a bat file (they will be slightly different - replace %i with %%i): File portInfo.bat for /f "tokens=3 delims=LISTENING" %%i in ( 'netstat -ano ^| findStr "%1" ^| findStr "["' ) do @tasklist /nh /fi "pid eq %%i" File portKill.bat for /f "tokens=3 delims=LISTENING" %%i in ( 'netstat -ano ^| findStr "%1" ^| findStr "["' ) do @Taskkill /F /PID %%i Then from cmd you can do this: portInfo.bat 8080 or portKill.bat 8080 A: If you'd like to use a GUI tool to do this there's Sysinternals' TCPView. A: * *Open the command prompt - start → Run → cmd, or start menu → All Programs → Accessories → Command Prompt. *Type netstat -aon | findstr '[port_number]' Replace the [port_number] with the actual port number that you want to check and hit Enter. *If the port is being used by any application, then that application’s detail will be shown. The number, which is shown at the last column of the list, is the PID (process ID) of that application. Make note of this. *Type tasklist | findstr '[PID]' Replace the [PID] with the number from the above step and hit Enter. *You’ll be shown the application name that is using your port number. A: Netstat: * *-a displays all connections and listening ports *-b displays executables *-n stops resolving hostnames (numerical form) *-o owning process netstat -bano | findstr "7002" netstat -ano > ano.txt The Currports tool helps to search and filter A: Type in the command: netstat -aon | findstr :DESIRED_PORT_NUMBER For example, if I want to find port 80: netstat -aon | findstr :80 This answer was originally posted to this question. A: netstat -ao and netstat -ab tell you the application, but if you're not a system administrator you'll get "The requested operation requires elevation". It's not ideal, but if you use Sysinternals' Process Explorer you can go to specific processes' properties and look at the TCP tab to see if they're using the port you're interested in. 
It is a bit of a needle-in-a-haystack thing, but maybe it'll help someone... A: PowerShell If you want to have a good overview, you can use this: Get-NetTCPConnection -State Listen | Select-Object -Property *, ` @{'Name' = 'ProcessName';'Expression'={(Get-Process -Id $_.OwningProcess).Name}} ` | select ProcessName,LocalAddress,LocalPort Then you get a table like this: ProcessName LocalAddress LocalPort ----------- ------------ --------- services :: 49755 jhi_service ::1 49673 svchost :: 135 services 0.0.0.0 49755 spoolsv 0.0.0.0 49672 For UDP, it is: Get-NetUDPEndpoint | Select-Object -Property *, ` @{'Name' = 'ProcessName';'Expression'={(Get-Process -Id $_.OwningProcess).Name}} ` | select ProcessName,LocalAddress,LocalPort A: You can get more information if you run the following command: netstat -aon | find /i "listening" |find "port" Using the 'Find' command allows you to filter the results. find /i "listening" will display only ports that are 'Listening'. Note, you need the /i to ignore case, otherwise you would type find "LISTENING". | find "port" will limit the results to only those containing the specific port number. Note that it will also match results that have the port number anywhere in the string. A: I recommend CurrPorts from NirSoft. CurrPorts can filter the displayed results. TCPView doesn't have this feature. Note: You can right click a process's socket connection and select "Close Selected TCP Connections" (You can also do this in TCPView). This often fixes connectivity issues I have with Outlook and Lync after I switch VPNs. With CurrPorts, you can also close connections from the command line with the "/close" parameter. A: Using Windows' default shell (PowerShell) and without external applications For those using PowerShell, try Get-NetworkStatistics (a community function you need to load first, not a built-in cmdlet): > Get-NetworkStatistics | where Localport -eq 8000 ComputerName : DESKTOP-JL59SC6 Protocol : TCP LocalAddress : 0.0.0.0 LocalPort : 8000 RemoteAddress : 0.0.0.0 RemotePort : 0 State : LISTENING ProcessName : node PID : 11552 A: A single-line solution that helps me is this one. Just substitute 3000 with your port: $P = Get-Process -Id (Get-NetTCPConnection -LocalPort 3000).OwningProcess; Stop-Process $P.Id Edit: Changed kill to Stop-Process for more PowerShell-like language
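A hedged sketch of the iphlpapi.h route mentioned in one of the answers above: GetTcpTable2 fills a MIB_TCPTABLE2 whose rows carry the owning PID (IPv4 shown; the MIB_TCP6ROW2 variant is analogous). Error handling is trimmed; link with iphlpapi.lib and ws2_32.lib:

#include <winsock2.h>
#include <iphlpapi.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    ULONG size = 0;
    GetTcpTable2(NULL, &size, TRUE);           /* first call just reports the needed size */
    MIB_TCPTABLE2 *table = malloc(size);
    if (table && GetTcpTable2(table, &size, TRUE) == NO_ERROR) {
        for (ULONG i = 0; i < table->dwNumEntries; i++) {
            MIB_TCPROW2 *row = &table->table[i];
            if (row->dwState == MIB_TCP_STATE_LISTEN)
                printf("port %5u -> pid %lu\n",
                       ntohs((u_short)row->dwLocalPort),  /* port is stored in network byte order */
                       row->dwOwningPid);
        }
    }
    free(table);
    return 0;
}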
{ "language": "en", "url": "https://stackoverflow.com/questions/48198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3138" }
Q: Linux distros for Java Development Simply, are there any Java Developer specific Linux distros? A: A real Sun geek would chime in here about the virtues of using Solaris as a Java development platform, but I am much more ambivalent. Developing with Java is about the same on any linux distro; you are going to wind up having to install the JDK and tools of your choosing (Eclipse, Sun Studio, Tomcat, etc) so you may as well choose a distro on other criteria... perhaps how comfortable you are with it, how easy package management is, and if the look & feel suit your development habits are all big factors. So, to answer your question more directly, a Java developer would do well with any major linux distro that they are comfortable with using in general. If you want some Java goodness out of the box, Fedora 9 and Ubuntu 8.04 have OpenJDK (and NetBeans) according to a recent announcement. A: I am very heavy into Java development and I personally use Ubuntu, so I agree with Sean on this one. The package manager allows you to easily install the various SDKs (the SUN one, or even the upcoming OpenJDK 7). Regards, Arjen A: I have used Ubuntu 8.04 and Fedora 9 with success. For Ubuntu, the community forums were very helpful and if I remember correctly one of the repositories provided apt packages for Sun's Java6 distribution. On Fedora 9, the Sun rpms work alright. In either case, alternatives/galternatives is your friend to make sure that you point "java" and "javac" at the Sun install. I've been using Netbeans 6.1 and Eclipse 3.4 both on Fedora 9_x64 with no problems. A: Just be careful with your distro's java installation. Most install gcj by default. For whatever reason, typing "java" into bash on most linux distros will not invoke a Sun JVM without some futzing. Usually, there needs to be a bunch of soft-linking from /usr/local/bin -> $JDK_HOME/bin/* to get things working as I typically expect them (see the update-alternatives sketch after the last answer below). A: I think the motive for this question is focused on the convenience of setup: Is there any distro that has Eclipse and the full Sun Java package (JRE, JDK, and DOCS) already "baked in" so that a manual install process (and deinstall of OpenJDK) is not required? Having an "out-of-the-box" standardized environment for a development team is a huge time saver. If you don't already have access to a Java-experienced Linux SysAdmin to guide you through the process of rolling your own automated install, learning enough to do it yourself is definitely frustrating. Few Developers enjoy spending their time wrenching around with OS internals to get tools like Glassfish, Derby, Groovy, Grails, GWT, etc. all working together. They prefer to go directly to writing code and inventing stuff inside a personal sandbox that exploits a pre-existing ecosystem of built-in services... On the deployment side, having a common Linux install that requires no system-level configuration for end-users except for installing their favorite Java applications' .JAR file would be another big win. There's definitely a market for someone to provide this, but most folks are simply gritting their teeth and doing it for themselves. A: Don't listen to any of these noobs suggesting one distro over another. Java is Java and just about all distros can install java as such: [package manager command to install] jdk If the question was about creating RPM's, then obviously RH/CentOS/Fedora would be desirable over deb distros, source distros, or whatever other format you love. 
However, due to the nature of Java, a specific distro to use is only relevant if the OP can't formulate their own opinion and must follow whatever other people are doing. To reiterate: there is no Java distro; use whatever will have you hit the ground running. // begin hypocritical personal recommendation ... that being said ... I personally use Archlinux. Archlinux works on rolling releases so it is more likely to have a more recent JDK version than the "sudo apt-get dist-upgrade && sleep 6 months" distros of the world. // end hypocritical personal recommendation Also, I am fully prepared to get downvoted, but please, leave me above 50 so I can still comment, thanks! A: Either SUSE or RH, both have official support. http://www.java.com/en/download/help/5000010500.xml A: I have never heard of a Java-developer-specific Linux distro. If you need a Linux distro for work purposes (not for personal home use) then the choice of distro is not really affected by the fact that you need to install a JDK, but other factors: * *how quickly can it be installed? *how easy is it to maintain (updates etc)? *how fully-featured is it out-of-the-box? *how well supported is it? (commercial support if you need it, otherwise how good is community support?) My suggestions for work-purposes: Ubuntu and Suse have been good for me. I have no experience with the others mentioned (eg: Fedora). Basically, get a distro that "just plain works". Everything you need (JDK, IDE, etc) will almost certainly be easily installed from there. A: Solaris :) On a serious note, there is no Linux distro dedicated to Java, so it would be about the same. OpenSolaris on the other hand (in my very humble experience) would be a bit faster, and you would have the bonus of DTrace as a tool. (Not that you can't find similar tools in Linux, but DTrace should be somewhat more advanced). A: I had a pleasant experience with Mandriva Power Pack 2008. Select something like development->"java tools" and everything is installed for you. Everything being Sun JRE, JDK, and Eclipse. Solaris did install a 64bit kernel by default though..... A: Latest Ubuntu version. It is easy enough and has packaged Sun Java, Eclipse, NetBeans, GlassFish, Tomcat and other Java-development-related software, so you have no worries installing and configuring it from scratch. A: You can choose any of the available distros because there is no Linux distro specifically for Java development. Personally I have worked on RHEL 5, Fedora 9, and Mandriva with considerable success. Working on Java is the same on any Linux distribution after the installation of the JDK, Tomcat, Eclipse, etc. A: As Nick Stinemates mentioned, Gentoo is an excellent distro for developing Java. It is one of the few distros that I know of that has a very active Java maintainer group and almost everything that people use regularly is already packaged. Be warned, Gentoo is not a drop-dead simple distro to use like Ubuntu -- you have to understand a bit about how the OS works -- but it does provide an excellent development environment. A: The distro which is most developer friendly, in my opinion, is Gentoo. Since you compile everything from scratch, you choose exactly what makes up your system. Java can be installed very easily, so you could potentially just have a window environment and Java installed (aside from the standard tool chain.) A: For a start: most -if not all- linux distributions allow you to "easily" install (that is: using the distribution's package manager) JDKs and JREs. 
The choice essentially is more about what aspect of the distribution is most relevant to your personal taste. Personally, I've come to value overall distribution stability (as in: upgrades to the base system are more or less guaranteed-not-to-hose-my-workstation-one-day-before-delivery-date) more, which made me stick with Debian for the past few years. The price to pay for that is either "sudo apt-get dist-upgrade && sleep 6 months" as theman_on_vista points out, or just installing the relevant stuff yourself in /opt. After all, installing some JDKs, Maven or Ant, and Eclipse|NetBeans is easy enough (hell, there's even documentation somewhere, I'm sure :) ) A: The Oracle JVM from their website is going to have the same speed on Debian, Arch Linux, and Slackware (and probably their derivatives). Your best bet is to tweak the JVM arguments for the web servers/IDEs you might be using. Remember that Java for x64 architectures will consume more memory due to larger native pointers, so you would do well to tweak your heap size accordingly. Especially if you don't want your server to stop with an OutOfMemoryError exception. A: If you use Ubuntu or another Debian-family distro, you can try this command: sudo apt-get install default-jdk default-jre A: While not really a distribution, there is a virtual machine available for Tomcat. It could be adapted pretty easily for other Java-based deployments (available for several virtualization technologies). There are also several Java App Server VMs available from VMWare (VMWare only, of course).
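A hedged sketch of the fix for the "typing java invokes gcj" problem described above, using Debian/Ubuntu's update-alternatives (the JDK path is an assumption; point it at wherever your JDK actually lives):

sudo update-alternatives --install /usr/bin/java java /opt/jdk1.6.0/bin/java 100
sudo update-alternatives --install /usr/bin/javac javac /opt/jdk1.6.0/bin/javac 100
sudo update-alternatives --config java    # interactively pick which "java" wins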
{ "language": "en", "url": "https://stackoverflow.com/questions/48203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: jQuery & Objects, trying to make a lightweight widget Trying to make a make generic select "control" that I can dynamically add elements to, but I am having trouble getting functions to work right. This is what I started with. $select = $("<select></select>"); $select.addOption = function(value,text){ $(this).append($("<option></option>").val(value).text(text)); }; This worked fine alone but anytime $select is .clone(true)'ed the addOption() function is lost. This is my object approach but still the function does not work. function $selectX() { return $("<select></select>"); } $selectX.prototype.addOption() = function(value,text){ $(this).append($("<option></option>").val(value).text(text)); }; Hack solution is to add the function manually after creation: $nameSelect= new $selectX; $nameSelect.addOption = function(value,text){ $(this).append($("<option></option>").val(value).text(text)); }; Am I barking up the wrong tree? A: To add new method to jQuery You need to use jQuery.fn.methodName attribute, so in this case it will be: jQuery.fn.addOption = function (value, text) { jQuery(this).append(jQuery('<option></option>').val(value).text(text)); }; But keep in mind that this addOption will be accessible from result of any $() call.
{ "language": "en", "url": "https://stackoverflow.com/questions/48215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How to use webclient in a secure site? I need to automate a process involving a website that is using a login form. I need to capture some data in the pages following the login page. I know how to screen-scrape normal pages, but not those behind a secure site. * *Can this be done with the .NET WebClient class? * *How would I automatically log in? *How would I stay logged in for the other pages? A: One way would be through automating a browser -- you mentioned WebClient, so I'm guessing you might be referring to WebClient in .NET. Two main points: * *There's nothing special about https related to WebClient - it just works *Cookies are typically used to carry authentication -- you'll need to capture and replay them (a C# sketch of a cookie-aware WebClient appears after the last answer below) Here are the steps I'd follow: * *GET the login form, capture the cookie in the response. *Using XPath and HtmlAgilityPack, find the "input type=hidden" field names and values. *POST to the login form's action with user name, password, and hidden field values in the request body. Include the cookie in the request headers. Again, capture the cookie in the response. *GET the pages you want, again, with the cookie in the request headers. On step 2, I mention a somewhat complicated method for automating the login. Usually, you can post with username and password directly to the known login form action without getting the initial form or relaying the hidden fields. Some sites have form validation (different from field validation) on their forms which makes this method not work. HtmlAgilityPack is a .NET library that allows you to turn ill-formed HTML into an XmlDocument so you can XPath over it. Quite useful. Finally, you may run into a situation where the form relies on client script to alter the form values before submitting. You may need to simulate this behavior. Using a tool to view the HTTP traffic for this type of work is extremely helpful - I recommend ieHttpHeaders, Fiddler, or FireBug (net tab). A: You can easily simulate user input. You can submit a form on the web page from your program by sending a POST/GET request to the website. A typical login form looks like: <form name="loginForm" method="post" Action="target_page.html"> <input type="Text" name="Username"> <input type="Password" name="Password"> </form> You can send a POST request to the website providing values for the Username & Password fields. What happens after you send your request largely depends on the website; usually you will be redirected to some page. Your authorization info will be stored in the session/cookie. So if your scraping client can maintain a web session and understands cookies, you will be able to access protected pages. It's not clear from your question what language/framework you're going to use. For example, there is a framework for screen scraping (including login functionality) written in Perl - WWW::Mechanize Note that you may face some problems if the site you're trying to log in to uses JavaScript or some kind of CAPTCHA. A: Can you please clarify? Is the WebClient class you speak of the one in HTTPUnit/Java? If so, your session should be saved automatically. A: It isn't clear from your question which WebClient class (or language) you are referring to. 
If you have a Java runtime you can use the Apache HttpClient class; here's an example I wrote using Groovy that accesses the delicious API over SSL: def client = new HttpClient() def credentials = new UsernamePasswordCredentials( "username", "password" ) def authScope = new AuthScope("api.del.icio.us", 443, AuthScope.ANY_REALM) client.getState().setCredentials( authScope, credentials ) def url = "https://api.del.icio.us/v1/posts/get" def method = new PostMethod( url ) method.addParameter( "tag", tag ) client.executeMethod( method )
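And here is a hedged C# sketch of the cookie capture/replay flow from the first answer. WebClient has no cookie support of its own, so the usual trick is to attach a CookieContainer to each underlying HttpWebRequest; the URLs and field names below are made up for illustration:

using System;
using System.Collections.Specialized;
using System.Net;

class CookieAwareWebClient : WebClient
{
    public CookieContainer Cookies { get; } = new CookieContainer();

    protected override WebRequest GetWebRequest(Uri address)
    {
        var request = base.GetWebRequest(address);
        if (request is HttpWebRequest http)
            http.CookieContainer = Cookies;  // capture + replay cookies automatically
        return request;
    }
}

class Program
{
    static void Main()
    {
        var client = new CookieAwareWebClient();

        // 1. POST the credentials; the auth cookie lands in Cookies.
        var form = new NameValueCollection
        {
            ["Username"] = "user",
            ["Password"] = "secret"
        };
        client.UploadValues("https://example.com/login", form);

        // 2. Subsequent requests send the cookie automatically.
        Console.WriteLine(client.DownloadString("https://example.com/members/data"));
    }
}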
{ "language": "en", "url": "https://stackoverflow.com/questions/48224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Play button in browser I want to put songs on a web page and have a little play button, like you can see on Last.fm or Pandora. There can be multiple songs listed on the site, and if you start playing a different song with one already playing, it will pause the first track and begin playing the one you just clicked on. I think they use Flash for this, and I could probably implement it in a few hours, but is there already code I could use for this? Maybe just a flash swf file that you stick hidden on a web page with a basic Javascript API that I can use to stream mp3 files? Also, what about WMA or AAC files? Is there a universal solution that will play these 3 file types? http://musicplayer.sourceforge.net/ A: There are many flash mp3 players that you can use that do this. Usually, you just have to edit a text file to point at the mp3s you want to have available. Here is the first one that showed up on a google search for flash mp3 player: http://www.flashmp3player.org/demo.html A: This is fairly simple: if you want to embed WMP, you can use all the controls via JavaScript. There is a great MSDN section on it, but I can't seem to find it now. Edit: I found this on MSDN; it contains the properties that an embedded WMP will accept. Then all you have to do is call the methods via JavaScript. <OBJECT id="VIDEO" width="320" height="240" style="position:absolute; left:0;top:0;" CLASSID="CLSID:6BF52A52-394A-11d3-B153-00C04F79FAA6" type="application/x-oleobject"> <PARAM NAME="URL" VALUE="your file or url"> <PARAM NAME="SendPlayStateChangeEvents" VALUE="True"> <PARAM NAME="AutoStart" VALUE="True"> <PARAM name="uiMode" value="none"> <PARAM name="PlayCount" value="9999"> </OBJECT> Then for the javascript <script type="text/javascript"> obj = document.getElementById("VIDEO"); //Where VIDEO is the id of the object above. obj.URL="filename"; //You can use this to both start and change the current file. obj.controls.stop(); //Will stop obj.controls.pause(); //Pause </script> Somewhere around here I have code to even control the volume. A while ago I built a custom (looking) player for a client purely in HTML and JavaScript. A: Something I bookmarked long ago, but never got to test so far: http://www.schillmania.com/projects/soundmanager2/ A: I second superjoe30's suggestion: I had great success with musicplayer. The only (slight) negative is that it's a little older project and not as well skinnable as some of the alternatives (although you have the full source code, so - given some time - you can make it look exactly as you need it to).
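As a side note, the "pause whatever else is playing" behavior itself is only a few lines of JavaScript if you can rely on the HTML5 audio element (an option that postdates this question and needs no Flash; MP3 and AAC are widely supported, WMA generally is not). The markup convention here is an assumption:

// Assumes buttons like: <button class="play" data-src="song1.mp3">Play</button>
var current = null;

document.addEventListener("click", function (e) {
    if (!e.target.classList.contains("play")) return;
    if (current) current.pause();             // pause whatever was playing
    current = new Audio(e.target.dataset.src);
    current.play();
});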
{ "language": "en", "url": "https://stackoverflow.com/questions/48225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I help port Google Chrome to Linux? I really enjoy Chrome, and the sheer exercise of helping a port would boost my knowledge-base. Where do I start? What are the fundamental similarities and differences between the code which will operate under Windows and Linux? What skills and software do I need? Note: The official website is Visual Studio oriented! Netbeans or Eclipse are my only options. I will not pay Microsoft to help an Open Source project. A: Read this article on Chrome and Open Source on Linux: http://arstechnica.com/journals/linux.ars/2008/09/02/google-unveils-chrome-source-code-and-linux-port The Google V8 JavaScript Engine is also open source and available here if you want to contribute; http://code.google.com/p/v8/ If you want to contribute to Chromium, here are the instructions: http://dev.chromium.org/developers/contributing-code Chromium is an open-source browser project that aims to build a safer, faster, and more stable way for all Internet users to experience the web. This site contains design documents, architecture overviews, testing information, and more to help you learn to build and work with the Chromium source code. Here is how you can get started: http://dev.chromium.org/developers/how-tos/getting-started EDIT: Two more questions were added to the original question. Building on Linux requires the following software: * *Subversion >= 1.4 *pkg-config >= 0.20 *Python >= 2.4 *Perl >= 5.x *gcc/g++ >= 4.2 *bison >= 2.3 *flex >= 2.5.34 *gperf >= 3.0.3 *libnss3-dev >= 3.12 On Ubuntu 8.04, you can fetch all of the above as follows: $ sudo apt-get install subversion pkg-config python perl g++ bison flex gperf libnss3-dev Note: There is no working Chromium-based browser on Linux. Although many Chromium submodules build under Linux and a few unit tests pass, all that runs is a command-line "all tests pass" executable. A: EDIT: (2/6/10) A Beta version of Chrome has been released for Linux. Although it is labeled beta, it works great on my Ubuntu box. You can download it from Google: http://www.google.com/chrome?platform=linux EDIT: (5/31/09) Since I answered this question, there have been more new developments in Chrome (actually "Chromium") for Linux: An alpha build has been released. This means it's not fully functional. If you use Ubuntu, you're in luck: add the following lines to your /etc/apt/sources.list deb http://ppa.launchpad.net/chromium-daily/ppa/ubuntu jaunty main deb-src http://ppa.launchpad.net/chromium-daily/ppa/ubuntu jaunty main Then, at the command line: aptitude update aptitude install chromium-browser Don't forget to s/jaunty/yourUbuntuVersion/ if necessary. Also, you can s/aptitude/apt-get/, if you insist. And.... Yes, it works. I'm typing this in my freshly installed Chromium browser right now! The build is hosted by launchpad, and gave me some security warnings upon install, which I promptly ignored. Here's the website: https://launchpad.net/~chromium-daily/+archive/ppa The original answer: Linux Build Instructions
{ "language": "en", "url": "https://stackoverflow.com/questions/48235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Getting the ID of the element that fired an event Is there any way to get the ID of the element that fires an event? I'm thinking something like: $(document).ready(function() { $("a").click(function() { var test = caller.id; alert(test.val()); }); }); <script type="text/javascript" src="starterkit/jquery.js"></script> <form class="item" id="aaa"> <input class="title"></input> </form> <form class="item" id="bbb"> <input class="title"></input> </form> Except of course that the var test should contain the id "aaa", if the event is fired from the first form, and "bbb", if the event is fired from the second form. A: You can try to use: $('*').live('click', function() { console.log(this.id); return false; }); A: You can use the .on event with delegation: $("table").on("click", "tr", function() { var id=$(this).attr('id'); alert("ID:"+id); }); A: In the case of delegated event handlers, where you might have something like this: <ul> <li data-id="1"> <span>Item 1</span> </li> <li data-id="2"> <span>Item 2</span> </li> <li data-id="3"> <span>Item 3</span> </li> <li data-id="4"> <span>Item 4</span> </li> <li data-id="5"> <span>Item 5</span> </li> </ul> and your JS code like so: $(document).ready(function() { $('ul').on('click', 'li', function(event) { var $target = $(event.target), itemId = $target.data('id'); //do something with itemId }); }); You'll more than likely find that itemId is undefined, as the content of the LI is wrapped in a <span>, which means the <span> will probably be the event target. You can get around this with a small check, like so: $(document).ready(function() { $('ul').on('click', 'li', function(event) { var $target = $(event.target).is('li') ? $(event.target) : $(event.target).closest('li'), itemId = $target.data('id'); //do something with itemId }); }); Or, if you prefer to maximize readability (and also avoid unnecessary repetition of jQuery wrapping calls): $(document).ready(function() { $('ul').on('click', 'li', function(event) { var $target = $(event.target), itemId; $target = $target.is('li') ? $target : $target.closest('li'); itemId = $target.data('id'); //do something with itemId }); }); When using event delegation, the .is() method is invaluable for verifying that your event target (among other things) is actually what you need it to be. Use .closest(selector) to search up the DOM tree, and use .find(selector) (generally coupled with .first(), as in .find(selector).first()) to search down it. You don't need to use .first() when using .closest(), as it only returns the first matching ancestor element, while .find() returns all matching descendants. A: You can use (this) to reference the object that fired the function. 'this' is a DOM element when you are inside of a callback function (in the context of jQuery), for example, being called by the click, each, bind, etc. methods. Here is where you can learn more: http://remysharp.com/2007/04/12/jquerys-this-demystified/ A: This works on a higher z-index than the event parameter mentioned in above answers: $("#mydiv li").click(function(){ ClickedElement = this.id; alert(ClickedElement); }); This way you will always get the id of the (in this example li) element. Also when clicking on a child element of the parent. 
A: $(".classobj").click(function(e){ console.log(e.currentTarget.id); }) A: var buttons = document.getElementsByTagName('button'); var buttonsLength = buttons.length; for (var i = 0; i < buttonsLength; i++){ buttons[i].addEventListener('click', clickResponse, false); }; function clickResponse(){ // do something based on button selection here... alert(this.id); } Working JSFiddle here. A: Just use the this reference $(this).attr("id") or $(this).prop("id") A: I generate a table dynamically out a database, receive the data in JSON and put it into a table. Every table row got a unique ID, which is needed for further actions, so, if the DOM is altered you need a different approach: $("table").delegate("tr", "click", function() { var id=$(this).attr('id'); alert("ID:"+id); }); A: this.element.attr("id") works fine in IE8. A: Pure JS is simpler aaa.onclick = handler; bbb.onclick = handler; function handler() { var test = this.id; console.log(test) } aaa.onclick = handler; bbb.onclick = handler; function handler() { var test = this.id; console.log(test) } <form class="item" id="aaa"> <input class="title"/> </form> <form class="item" id="bbb"> <input class="title"/> </form> A: Element which fired event we have in event property event.currentTarget We get DOM node object on which was set event handler. Most nested node which started bubbling process we have in event.target Event object is always first attribute of event handler, example: document.querySelector("someSelector").addEventListener(function(event){ console.log(event.target); console.log(event.currentTarget); }); More about event delegation You can read in http://maciejsikora.com/standard-events-vs-event-delegation/ A: Both of these work, jQuery(this).attr("id"); and alert(this.id); A: For reference, try this! It works! jQuery("classNameofDiv").click(function() { var contentPanelId = jQuery(this).attr("id"); alert(contentPanelId); }); A: In jQuery event.target always refers to the element that triggered the event, where event is the parameter passed to the function. http://api.jquery.com/category/events/event-object/ $(document).ready(function() { $("a").click(function(event) { alert(event.target.id); }); }); Note also that this will also work, but that it is not a jQuery object, so if you wish to use a jQuery function on it then you must refer to it as $(this), e.g.: $(document).ready(function() { $("a").click(function(event) { // this.append wouldn't work $(this).append(" Clicked"); }); }); A: The source element as a jQuery object should be obtained via var $el = $(event.target); This gets you the source of the click, rather than the element that the click function was assigned too. Can be useful when the click event is on a parent object EG.a click event on a table row, and you need the cell that was clicked $("tr").click(function(event){ var $td = $(event.target); }); A: this works with most types of elements: $('selector').on('click',function(e){ log(e.currentTarget.id); }); A: Though it is mentioned in other posts, I wanted to spell this out: $(event.target).id is undefined $(event.target)[0].id gives the id attribute. event.target.id also gives the id attribute. this.id gives the id attribute. and $(this).id is undefined. The differences, of course, is between jQuery objects and DOM objects. "id" is a DOM property so you have to be on the DOM element object to use it. 
(It tripped me up, so it probably tripped up someone else) A: For all events, not limited to just jQuery, you can use var target = event.target || event.srcElement; var id = target.id Where event.target fails, it falls back on event.srcElement for IE. To clarify, the above code does not require jQuery but also works with jQuery. A: You can use the function to get the id and the value of the changed item (in my example, I've used a Select tag): $('select').change( function() { var val = this.value; var id = jQuery(this).attr("id"); console.log("value changed" + String(val)+String(id)); } ); A: I'm working with jQuery Autocomplete I tried looking for an event as described above, but when the request function fires it doesn't seem to be available. I used this.element.attr("id") to get the element's ID instead, and it seems to work fine. A: In the case of Angular 7.x you can get the native element and its id or properties. myClickHandler($event) { this.selectedElement = <Element>$event.target; console.log(this.selectedElement.id) this.selectedElement.classList.remove('some-class'); } html: <div class="list-item" (click)="myClickHandler($event)">...</div> A: There are plenty of ways to do this and examples already, but if you need to take it a step further and need to prevent the enter key on forms, and yet still need it on a multi-line textarea, it gets more complicated. The following will solve the problem. <script> $(document).ready(function() { $(window).keydown(function(event){ if(event.keyCode == 13) { //There are 2 textarea forms that need the enter key to work. if((event.target.id=="CommentsForOnAir") || (event.target.id=="CommentsForOnline")) { // Prevent the form from triggering, but allowing multi-line to still work. } else { event.preventDefault(); return false; } } }); }); </script> <textarea class="form-control" rows="10" cols="50" id="CommentsForOnline" name="CommentsForOnline" type="text" size="60" maxlength="2000"></textarea> It could probably be simplified more, but you get the concept. A: Simply you can use either: $(this).attr("id"); Or $(event.target).attr("id"); But $(this).attr("id") will return the ID of the element to which the Event Listener is attached. Whereas when we use $(event.target).attr("id") this will return the ID of the element that was clicked. For example in a <div> if we have a <p> element then if we click on 'div' $(event.target).attr("id") will return the ID of <div>, if we click on 'p' then $(event.target).attr("id") will return the ID of <p>. So use it as per your need.
{ "language": "en", "url": "https://stackoverflow.com/questions/48239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1083" }
Q: How can I upsert a bunch of ActiveRecord objects and relationships in Rails? I am working with an API that provides bus arrival data. For every request, I get back (among other things) a list of which routes serve the stop in question. For example, if the list includes results for bus routes #1, 2, and 5, then I know that those serve this stop. I have a many-to-many relationship set up between Route and Stop, and I want to dynamically check and update these associations on every request. There is no "master list" of which routes serve which stops, so this seems like the best way to get this data. I believe that the way I'm doing it now is very inefficient: # routes is an array of [number, destination] that I build while iterating over the data routes.uniq.each do |route| number = route[0] destination = route[1] r = Route.find_by_number_and_destination(number, destination) if !r r = Route.new :number => number, :destination => destination r.save end # I have to check if it already exists because I can't find a way # to create a uniqueness constraint on the join table with 2 foreign keys r.stops << stop unless r.stops.include? stop end Basically, I have to do 2 things for every route I find: 1) Create it if it doesn't already exist, 2) Add a relationship to the current stop if it doesn't already exist. Is there a better way to do this, for example by getting a bunch of the data in memory and doing some of the processing on the app server side, in order to avoid the multitude of database calls I'm currently doing? A: If I get it right, you (should) have 2 models. A Route model, and a Stop model. Here's how I would define these models: class Route < ActiveRecord::Base has_and_belongs_to_many :stops belongs_to :stop, :foreign_key => 'destination_id' end class Stop < ActiveRecord::Base has_and_belongs_to_many :routes end And here's how I would set up my tables: create_table :routes do |t| t.integer :destination_id # Any other information you want to store about routes end create_table :stops do |t| # Any other information you want to store about stops end create_table :routes_stops, :primary_key => [:route_id, :stop_id] do |t| t.integer :route_id t.integer :stop_id end Finally, here's the code I'd use: # First, find all the relevant routes, just for caching. Route.find(numbers) r = Route.find(number) r.destination_id = destination r.stops << stop This should use only a few SQL queries. A: Try this gem: https://github.com/seamusabshere/upsert Docs say it's 80% faster than find_or_create_by A: There's likely a good way to clean up the stops call, but this cleans it up quite a bit, assuming I'm picturing properly how routes is structured. routes.uniq.each do |number, destination| r = Route.find_or_create_by_number_and_destination(number, destination) r.stops << stop unless r.stops.include? stop end
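Two small additions, hedged against the question's setup. First, the "uniqueness constraint on the join table with 2 foreign keys" the question couldn't find is just a composite unique index; second, a rewrite of the loop in modern Rails syntax (find_or_create_by in this keyword form postdates the question's Rails version, so treat it as a sketch):

# migration: enforce uniqueness on the join table
add_index :routes_stops, [:route_id, :stop_id], unique: true

# the loop, wrapped in one transaction to cut commit overhead
ActiveRecord::Base.transaction do
  routes.uniq.each do |number, destination|
    route = Route.find_or_create_by(number: number, destination: destination)
    route.stops << stop unless route.stops.exists?(stop.id)
  end
end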
{ "language": "en", "url": "https://stackoverflow.com/questions/48240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Is there a way to embed a browser in Java? Is there a way to embed a browser in Java? more specifically, is there a library that can emulate a browser? A: You could use SWT for your GUI. Its Browser control allows you to embed IE, Mozilla or Safari (depending on the platform you're running in) with little pain. A: Since JavaFX 2.0 you can now use WebView A: By far the most robust embeddable browser I am familiar with is the one in SWT. In fact, it is so flexible that the JavaDoc hover you can see in Eclipse is actually a browser, and the JavaDoc view actually supports things like animation! The only risk with using SWT is that there are different versions of the SWT library for different platforms. I'm not sure if there is a single jar you could include to cover everyone. A: Take a look at https://xhtmlrenderer.dev.java.net/ A: JxBrowser has not been mentioned yet. It embeds either Mozilla Firefox (Gecko), Apple Safari (WebKit) or Internet Explorer. Programmer's Guide A: You could also try the JWebBrowser from DJ Native Swing: http://djproject.sourceforge.net/ns A: I believe JWebPane is going to be the official way to embed a browser into a Java app. It's based on the open-sourced engine WebKit, which is used in Apple's Safari and Google's Chrome browsers. See this blog for details. A: I have successfully opened a browser from Java using SWT. You can find code examples of how to use SWT to open a Browser window. It's very easy to do. A: You can embed a browser in a Swing/AWT GUI using the JDIC API. I don't see any mention of OS X, so it may not be of use to you. A: You may try this: https://jdic.dev.java.net/ (source: java.net) Or this: http://lobobrowser.org/java-browser.jsp (source: lobobrowser.org) A: You can try Webrenderer or Ice Browser A: If you need a pure Java solution then you can try JWebEngine. It renders HTML 4 very well. You can use it in an applet, Java Web Start, and on any platform. It is very simple to use. A: You could try a JEditorPane; it doesn't interpret advanced HTML, nor JavaScript, nor advanced CSS, but you can write that part yourself, in what is called the EditorKit. That is the class the JEditorPane consults for how it has to display its content. I know it's possible, because I tried and failed (:P), but it could be outdated or deprecated by now, I don't know. A: Maybe Chromium Embedded Framework is an option for you. Specific to Java there is javacef for SWT: https://github.com/wjywbs/javacef java-cef for AWT: https://bitbucket.org/chromiumembedded/java-cef A: If you look at the Minecraft launcher (the old one), look through LoginForm or LauncherFrame, you may be able to find out that method. There is a tutorial by kippykip on youtube on how to decompile and edit it: here
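Since several answers recommend the SWT Browser control, here is the canonical minimal example (the platform-specific SWT jar must be on the classpath, as noted above):

import org.eclipse.swt.SWT;
import org.eclipse.swt.browser.Browser;
import org.eclipse.swt.layout.FillLayout;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;

public class BrowserDemo {
    public static void main(String[] args) {
        Display display = new Display();
        Shell shell = new Shell(display);
        shell.setLayout(new FillLayout());

        Browser browser = new Browser(shell, SWT.NONE);  // wraps the platform's native engine
        browser.setUrl("https://stackoverflow.com");

        shell.open();
        while (!shell.isDisposed()) {
            if (!display.readAndDispatch()) display.sleep();  // standard SWT event loop
        }
        display.dispose();
    }
}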
{ "language": "en", "url": "https://stackoverflow.com/questions/48249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "75" }
Q: Free JSP plugin for Eclipse? I was looking out for a free plugin for developing/debugging JSP pages in Eclipse. Any suggestions? A: The Eclipse Web Tools Platform Project includes a JSP debugger. I have only ever needed to use it with Tomcat so I cannot say how well it works with other servlet containers. A: BEA seems to have a free one, the BEA JSP plugin - I haven't used it, so I'm not sure how good it is. Oracle now owns BEA, and they have this plugin which might do a similar job. A: The former BEA Workshop is now Oracle Workshop. It is the best JSP editor with WYSIWYG support and it is free. It is not specific to WebLogic. Basic JSP editing is server-neutral anyway. However, it supports launching and debugging on many servers. You can read my blog post about it.
{ "language": "en", "url": "https://stackoverflow.com/questions/48250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Compile a PHP script in Linux I know PHP scripts don't actually compile until they are run. However, say I want to create a small simple program and compile it to a binary without requiring the PHP binary. How could I do this? I've seen a few IDEs out there that would do this, but either they are all for Windows or the Linux versions don't actually build properly. What I would like is something like py2exe that does it in the script itself. A: Check out phc: the PHP compiler. If you just want to run it like a script, you may not need to compile it per se, but just run it via the command line. Read running PHP via the command line. A: There is this: http://www.bambalam.se/bamcompile/ but that compiles to Windows bytecode. There are a few others, but all I have seen will compile for Windows only. A few more: http://www.nusphere.com/products/phpdock.htm Edit: I almost forgot: if you're looking to make it work on Linux without regard for Windows, you can just add #!/usr/bin/php to the top of the script and you should be able to run it from the command line. Don't forget to chmod +x the file first. A: Have a look at Facebook's Hiphop-PHP. It's able to convert PHP code into C++ then compile it with g++. Apparently, they've even gotten it to successfully compile entire WordPress installations.
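A minimal sketch of the shebang approach from the last answer (hypothetical file name hello.php; adjust the interpreter path to wherever your php binary lives):
#!/usr/bin/php
<?php
// a trivial command-line PHP script; no compilation step needed
echo "Hello from the command line\n";
Make it executable with chmod +x hello.php and run it as ./hello.php.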
{ "language": "en", "url": "https://stackoverflow.com/questions/48253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: SharePoint Infrastructure Upgrade - whoops I applied the MOSS infrastructure upgrade w/o applying the WSS one before it -- uh, help! A: I believe that is a supported, but unrecommended configuration. You should be able to get help from Microsoft :) A: Quoting: Infrastructure Update for Microsoft Office Servers (KB951297) Other Relevant Updates It is strongly recommended that you install the Infrastructure Update for Windows SharePoint Services 3.0 (KB951695) before installing this update on any of the Office Servers listed in the system requirements section above. Therefore, not applying the WSS Infrastructure Update first seems to be unrecommended, but not unsupported. A: I am assuming that you have also run the Configuration wizard after you applied this and brought your system online? If you have not, you are in a much better position, as you can apply the WSS upgrade, then run the wizard, and you should be fine. If you have run through the wizard - and brought the system back online - it's not the end of the world. What you will want to do is go back and follow the steps to upgrade your system just as if you had not done anything. The infrastructure update makes some significant changes and improvements to portal search - so once you start trying to configure that, you'll see some errors in crawling etc. - as the indexer (which has been updated) tries to crawl content (which has not). Apply the WSS bits, then reapply the bits for MOSS, then run the Config wizard and bring everything back. You should be okay at that point. Obviously, before you do anything, back up all systems and take them offline. Hope this helps. A: Sounds like time for a full restore. The MOSS upgrade steps did explicitly ask for a restore, didn't they? A: The TechNet article Install the Infrastructure Update for Microsoft Office Servers (Office SharePoint Server 2007) has a discussion on this in the community content section. Someone commented that the WSS update must be run first. There is no suggestion for what to do if you don't or what the consequences are.
{ "language": "en", "url": "https://stackoverflow.com/questions/48257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Setting DataGridView.DefaultCellStyle.NullValue to null at design time raises error when adding rows at runtime In Visual Studio 2008 * *add a new DataGridView to a form *Edit Columns *Add a new DataGridViewImageColumn *Open the CellStyle Builder of this column (DefaultCellStyle property) *Change the NullValue from System.Drawing.Bitmap to null *Try to add a new Row to the DataGridView at runtime (dataGridView1.Rows.Add();) *You get this error: System.FormatException: Formatted value of the cell has a wrong type. If you change the NullValue back to System.Drawing.Bitmap (as it was) you still get the same error when adding a row. If you set the NullValue at runtime instead of design time you don't get any error. (dataGridView1.Columns[0].DefaultCellStyle.NullValue = null;) Could you tell me why that is? A: This may well be a bug in the designer; if you take a look around at the .designer.cs file (maybe doing a diff from before and after you set NullValue to null) you should be able to see the code it generates. A: Kronoz is right. After setting it at design time it adds this to the .designer.cs: dataGridViewCellStyle1.NullValue = "null"; If I modify "null" to null then it works fine. I checked the DataGridViewCellStyle.NullValue set_NullValue(Object) and get_NullValue with Reflector and I think that a string value shouldn't raise any error here. Anyway, be careful with this, and if you want to set it at design time then don't forget to modify the .designer.cs. A: Change the NullValue from System.Drawing.Bitmap to null When you enter 'null' into the field for NullValue in the Designer, you are specifying the string value "null". The only way to set NullValue to a non-string value is to set it programmatically or by modifying the designer code yourself. A: Checkbox can't have a String value. Don't set any default value in the IDE properties dialog. I had "Empty" written into the RowsDefaultCellStyle.Format property and that caused the error. It was self-inflicted. As a fix I was trying to set the checkbox state to unchecked but I just needed to delete the string value. A: I found that it's better if you just delete the item from the designer altogether, from the Format area and the default null value area. Then it sets it back to the real null. I'm going to try to set it in the init section away from the designer-generated crap.
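Pulling the workaround together, a minimal sketch (hypothetical form named Form1, reusing the dataGridView1 from the question): leave NullValue untouched in the designer and assign a real null after InitializeComponent, which sidesteps the serialized "null" string entirely.
public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
        // the designer would serialize the string "null"; this assigns a true null
        dataGridView1.Columns[0].DefaultCellStyle.NullValue = null;
        dataGridView1.Rows.Add(); // no longer throws FormatException
    }
}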
{ "language": "en", "url": "https://stackoverflow.com/questions/48271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to print CSS-applied background images with WebBrowser control I am using the WebBrowser control in WinForms and discovered now that background images which I apply with CSS are not included in the printouts. Is there a way to make the WebBrowser print the background of the displayed document too? Edit: Since I wanted to do this programmatically, I opted for this solution: using Microsoft.Win32; ... // open the key writable (second argument), otherwise SetValue throws UnauthorizedAccessException RegistryKey regKey = Registry.CurrentUser.OpenSubKey(@"Software\Microsoft\Internet Explorer\Main", true); // Get the current setting so that we can revert it after the print job var defaultValue = regKey.GetValue("Print_Background"); regKey.SetValue("Print_Background", "yes"); try { /* Do the printing */ } finally { /* Revert the registry key to the original value, even if printing fails */ regKey.SetValue("Print_Background", defaultValue); } Another way to handle this might be to just read the value, and notify the user to adjust this himself before printing. I have to agree that tweaking the registry like this is not a good practice, so I am open to any suggestions. Thanks for all your feedback A: Another registry key would be : HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\PageSetup\Print_Background HKEY_LOCAL_MACHINE\Software\Microsoft\Internet Explorer\PageSetup\Print_Background A: If you're going to go and change an important system setting, make sure to first read the current setting and restore it when you are done. I consider this very bad practice in the first place, but if you must do it then be kind. Registry.LocalMachine Also, try changing LocalUser instead of LocalMachine - that way if your app crashes (and it will), then you'll only have confounded the user, not everyone who uses the machine. A: The corresponding HKCU key for this setting is: HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main\Print_Background A: By default, the browser does not print background images at all. In Firefox * File > Page Setup > Check Off "Print Background" * File > Print Preview In IE * Tools > Internet Options > Advanced > Printing * Check Off "Print Background Images and Colors" In Opera * File > Print Options > Check Off "Print Page Background" * File > Print Preview (You may have to scroll down/up to see it refresh) A: var sh = new ActiveXObject("WScript.Shell"); var key = "HKEY_CURRENT_USER\\Software\\Microsoft\\Internet Explorer\\Main\\Print_Background"; var defaultValue = sh.RegRead(key); sh.RegWrite(key,"yes","REG_SZ"); document.frames['detailFrame'].focus(); document.frames['detailFrame'].print(); sh.RegWrite(key,defaultValue,"REG_SZ"); return false;
{ "language": "en", "url": "https://stackoverflow.com/questions/48278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Unexpected behaviour of Process.MainWindowHandle I've been trying to understand Process.MainWindowHandle. According to MSDN: "The main window is the window that is created when the process is started. After initialization, other windows may be opened, including the Modal and TopLevel windows, but the first window associated with the process remains the main window." (Emphasis added) But while debugging I noticed that MainWindowHandle seemed to change value... which I wasn't expecting, especially after consulting the documentation above. To confirm the behaviour I created a standalone WinForms app with a timer to check the MainWindowHandle of the "DEVENV" (Visual Studio) process every 100ms. Here's the interesting part of this test app... IntPtr oldHWnd = IntPtr.Zero; void GetMainwindowHandle() { Process[] processes = Process.GetProcessesByName("DEVENV"); if (processes.Length!=1) return; IntPtr newHWnd = processes[0].MainWindowHandle; if (newHWnd != oldHWnd) { oldHWnd = newHWnd; textBox1.AppendText(processes[0].MainWindowHandle.ToString("X")+"\r\n"); } } private void timer1Tick(object sender, EventArgs e) { GetMainwindowHandle(); } You can see the value of MainWindowHandle changing when you (for example) click on a drop-down menu inside VS. Perhaps I've misunderstood the documentation. Can anyone shed light? A: Actually Process.MainWindowHandle is a handle to the top-most window; it's not really the "Main Window Handle" A: @edg, I guess it's an error in MSDN. You can clearly see in Reflector that the "main window" check in .NET looks like: private bool IsMainWindow(IntPtr handle) { return (!(NativeMethods.GetWindow(new HandleRef(this, handle), 4) != IntPtr.Zero) && NativeMethods.IsWindowVisible(new HandleRef(this, handle))); } When .NET code enumerates windows, it's pretty obvious that the first visible window (i.e. top-level window) will match this criterion.
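One detail worth knowing if you hold on to a single Process instance instead of calling GetProcessesByName on every tick (a sketch): Process caches MainWindowHandle along with other process information, so you must call Refresh() before re-reading it.
Process[] processes = Process.GetProcessesByName("DEVENV");
if (processes.Length == 1)
{
    Process p = processes[0];
    p.Refresh();                      // discard the cached process information
    IntPtr hWnd = p.MainWindowHandle; // now re-evaluated against the live window list
}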
{ "language": "en", "url": "https://stackoverflow.com/questions/48288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Running a regular background event in Java web app In podcast #15, Jeff mentioned he twittered about how to run a regular event in the background as if it was a normal function - unfortunately I can't seem to find that through Twitter. Now I need to do a similar thing and am going to throw the question to the masses. My current plan is when the first user (probably me) enters the site it starts a background thread that waits until the allotted time (hourly on the hour) and then kicks off the event, blocking the others (I am a Windows programmer by trade so I think in terms of events and WaitOnMultipleObjects) until it completes. How did Jeff do it in ASP.NET and is his method applicable to the Java web-app world? A: As mentioned, Quartz is one standard solution. If you don't care about clustering or persistence of background tasks across restarts, you can use the built-in ThreadPool support (in Java 5, 6). If you use a ScheduledExecutorService you can put Runnables into the background thread pool that wait a specific amount of time before executing. If you do care about clustering and/or persistence, you can use JMS queues for asynchronous execution, though you will still need some way of delaying background tasks (you can use Quartz or the ScheduledExecutorService to do this). A: Jeff's mechanism was to create some sort of cached object which ASP.NET would automatically recreate at some sort of interval - it seemed to be an ASP.NET-specific solution, so it probably won't help you (or me) much in the Java world. See https://stackoverflow.fogbugz.com/default.asp?W13117 Atwood: Well, I originally asked on Twitter, because I just wanted something light weight. I really didn't want to like write a windows service. I felt like that was out of band code. Plus the code that actually does the work is a web page in fact, because to me that is a logical unit of work on a website is a web page. So, it really is like we are calling back into the web site, it's just like another request in the website, so I viewed it as something that should stay inline, and the little approach that we came up that was recommended to me on Twitter was to essentially to add something to the application cache with a fixed expiration, then you have a call back so when that expires it calls a certain function which does the work then you add it back in to the cache with the same expiration. So, it's a little bit, maybe "ghetto" is the right word. My approach has always been to have the OS (i.e. Cron or the Windows task scheduler) load a specific URL at some interval, and then set up a page at that URL to check its queue, and perform whatever tasks were required, but I'd be interested to hear if there's a better way. From the transcript, it looks like FogBugz uses the Windows service loading a URL approach also. Spolsky: So we have this special page called heartbeat.asp. And that page, whenever you hit it, and anybody can hit it at anytime: doesn't hurt. But when that page runs it checks a queue of waiting tasks to see if there's anything that needs to be done. And if there's anything that needs to be done, it does one thing and then looks in that queue again and if there's anything else to be done it returns a plus, and the entire web page that it returns is just a single character with a plus in it. And if there's nothing else to be done, the queue is now empty, it returns a minus. 
So, anybody can call this and hit it as many times, you can load up heartbeat.asp in your web browser you hit Ctrl-R Ctrl-R Ctrl-R Ctrl-R until you start getting minuses instead of pluses. And when you've done that FogBugz will have completed all of its maintenance work that it needs to do. So that's the first part, and the second part is a very, very simple Windows service which runs, and its whole job is to call heartbeat.asp and if it gets a plus, call it again soon, and if it gets a minus call it again, but not for a while. So basically there's this Windows service that's always running, that has a very, very, very simple task of just hitting a URL, and looking to see if it gets a plus or a minus and, and then scheduling when it runs again based on whether it got a plus or a minus. And obviously you can do any kind of variation you want on this theme, like for example, uh you could actually, instead of returning just a plus or minus you could say "Okay call me back in 60 seconds" or "Call me back right away I have more work to be done." And that's how it works... so that maintenance service it just runs, you know, it's like, you know, a half page of code that runs that maintenance service, and it never has to change, and it doesn't have any of the logic in there, it just contains the tickling that causes these web pages to get called with a certain guaranteed frequency. And inside that web page at heartbeat.asp there's code that maintains a queue of tasks that need to be done and looks at how much time has elapsed and does, you know, late-night maintenance and every seven days delete all the older messages that have been marked as spam and all kinds of just maintenance background tasks. And uh, that's how that does that. A: We use jtcron for our scheduled background tasks. It works well, and if you understand cron it should make sense to you. A: I think developing a custom solution for running background tasks isn't always worth it, so I recommend using the Quartz Scheduler in Java. In your situation (needing to run background tasks in a web application) you could use the ServletContextListener included in the distribution to initialize the engine at the startup of your web container. After that you have a number of possibilities to start (trigger) your background tasks (jobs), e.g. you can use Calendars or cron-like expressions. In your situation most probably you should settle on SimpleTrigger, which lets you run jobs at fixed, regular intervals. The jobs themselves can be described easily in Quartz as well; however, you haven't provided any details about what you need to run, so I can't provide a suggestion in that area. A: Here is how they do it on StackOverflow.com: https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/
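For the ScheduledExecutorService option mentioned above, a minimal sketch (Java 5+; in a real web app you would kick this off from a ServletContextListener's contextInitialized rather than main):
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HourlyMaintenance {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        Runnable task = new Runnable() {
            public void run() {
                System.out.println("running hourly maintenance");
            }
        };
        // first run in one hour, then every hour after that
        scheduler.scheduleAtFixedRate(task, 1, 1, TimeUnit.HOURS);
    }
}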
{ "language": "en", "url": "https://stackoverflow.com/questions/48293", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: C++ UI resources Now that I know C++ I want to get into desktop applications that have a UI instead of Command Prompt stuff. Where should I start, and what are some good online resources? A: wxWidgets is a cross platform GUI library for C++ (and other languages). The main site should have enough pointers to resources to get going. You might also want to check out this question/answer here on Stack Overflow if you are specifically thinking of Windows A: If cross platform support is important then I would second the suggestion to look at Qt. It supports Windows, Linux and the Mac. For free software it is free (there is a GPL version on Unix but not for Windows) but for commercial software it is not particularly cheap. There are now several books on programming with Qt. It does come with a large number of extra libraries for networking, parsing XML etc. It also has integration with Visual Studio on Windows. One downside with Qt is that there are not as many add-on libraries as with some other GUI frameworks. It will depend on the type of applications that you wish to write whether this is important to you or not. A: I use Codegear's C++ Builder. Its C++ language support is not 100% but it more than makes up for it by having a great two-way RAD IDE and the ability to use a huge library of existing Delphi components. A: How about Qt? It's cross-platform and it's used in a lot of commercial software. A: On Linux and maybe Windows, you can use Gtk+ with Glade. Gtk+ is the GUI toolkit. Glade is a drag-and-drop GUI editor. If you came from Windows or Java and thought GUI programming is hard, this stuff is easy. A: If marketability is a concern, then C++/CLI with WinForms and WPF, which really translates to "just learn WinForms and WPF, regardless of what specific language you use". CodeProject has a ton of WinForms/WPF samples/tutorials to get you started. A: The Fox GUI Toolkit Really decent tried-and-true toolkit with a very nice event system. I've used the Ruby port, and my Windows apps had a very native look and feel. A: It might lack some features, but FLTK is an incredibly simple cross-platform GUI library. A: If you are using Windows the traditional place to start is Petzold. There is a nice simple framework here which will help you on the way without abstracting too much away. A: Get Visual Studio Express, and start with an MFC "Dialog Based" application. All the window toolkits mentioned are good, but MFC will look the best on a resume!
{ "language": "en", "url": "https://stackoverflow.com/questions/48299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Fundamental Data Structures in C# I would like to know how people implement the following data structures in C# without using the base class library implementations:- * *Linked List *Hash Table *Binary Search Tree *Red-Black Tree *B-Tree *Binomial Heap *Fibonacci Heap and any other fundamental data structures people can think of! I am curious as I want to improve my understanding of these data structures and it'd be nice to see C# versions rather than the typical C examples out there on the internet! A: There's a series of MSDN articles on this subject. However, I haven't really read the text myself. I believe that the collections framework by .NET has a broken interface and cannot be extended very well. There's also C5, a library that I am investigating right now. For the reason mentioned above, I've had the project to implement my own collections library for .NET but I've stopped this project after the first benchmark revealed that even a straightforward, non-thread-safe generic Vector implementation is slower than the native List<T>. Since I've taken care not to produce any inefficient IL code, this means that .NET is simply not suited (yet) for writing on-par replacements for intrinsic data structures, and that the .NET framework has to use some behind-the-scenes knowledge to optimize the built-in collection classes. A: Here is a generic Binary Search Tree. The only thing I didn't do was implement IEnumerable<T> so you could traverse the tree using an enumerator. However that should be fairly straightforward. Special thanks to Scott Mitchell for his BST article, I used it as a reference on the delete method. The Node Class: class BSTNode<T> where T : IComparable<T> { private BSTNode<T> _left = null; private BSTNode<T> _right = null; private T _value = default(T); public T Value { get { return this._value; } set { this._value = value; } } public BSTNode<T> Left { get { return _left; } set { this._left = value; } } public BSTNode<T> Right { get { return _right; } set { this._right = value; } } } And the actual Tree class: class BinarySearchTree<T> where T : IComparable<T> { private BSTNode<T> _root = null; private int _count = 0; public virtual void Clear() { _root = null; _count = 0; } public virtual int Count { get { return _count; } } public virtual void Add(T value) { BSTNode<T> newNode = new BSTNode<T>(); int compareResult = 0; newNode.Value = value; if (_root == null) { this._count++; _root = newNode; } else { BSTNode<T> current = _root; BSTNode<T> parent = null; while (current != null) { compareResult = current.Value.CompareTo(newNode.Value); if (compareResult > 0) { parent = current; current = current.Left; } else if (compareResult < 0) { parent = current; current = current.Right; } else { // Node already exists throw new ArgumentException("Duplicate nodes are not allowed."); } } this._count++; compareResult = parent.Value.CompareTo(newNode.Value); if (compareResult > 0) { parent.Left = newNode; } else { parent.Right = newNode; } } } public virtual BSTNode<T> FindByValue(T value) { BSTNode<T> current = this._root; if (current == null) return null; // Tree is empty. else { while (current != null) { int result = current.Value.CompareTo(value); if (result == 0) { // Found the correct node. 
return current; } else if (result > 0) { current = current.Left; } else { current = current.Right; } } return null; } } public virtual void Delete(T value) { BSTNode<T> current = this._root; BSTNode<T> parent = null; int result = 0; // test current for null before dereferencing it, which guards against an empty tree and a missing value while (current != null && (result = current.Value.CompareTo(value)) != 0) { if (result > 0) { parent = current; current = current.Left; } else { parent = current; current = current.Right; } } if (current == null) throw new ArgumentException("Cannot find item to delete."); if (current.Right == null) { if (parent == null) this._root = current.Left; else { result = parent.Value.CompareTo(current.Value); if (result > 0) { parent.Left = current.Left; } else if (result < 0) { parent.Right = current.Left; } } } else if (current.Right.Left == null) { if (parent == null) this._root = current.Right; else { result = parent.Value.CompareTo(current.Value); if (result > 0) { parent.Left = current.Right; } else if (result < 0) { parent.Right = current.Right; } } } else { BSTNode<T> furthestLeft = current.Right.Left; BSTNode<T> furthestLeftParent = current.Right; while (furthestLeft.Left != null) { furthestLeftParent = furthestLeft; furthestLeft = furthestLeft.Left; } furthestLeftParent.Left = furthestLeft.Right; furthestLeft.Left = current.Left; furthestLeft.Right = current.Right; if (parent != null) { result = parent.Value.CompareTo(current.Value); if (result > 0) { parent.Left = furthestLeft; } else if (result < 0) { parent.Right = furthestLeft; } } else { this._root = furthestLeft; } } this._count--; } } } A: I would recommend two resources for the data structures you mention: First, there is the .NET Framework Source Code (information can be found on ScottGu's blog here). Another useful resource is Wintellect's Power Collections found on CodePlex here. Hope this helps! A: NGenerics "A class library providing generic data structures and algorithms not implemented in the standard .NET framework." A: Check out Rotor 2 or use Reflector to see how Microsoft did it! Also, you can check the Microsoft reference source
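A quick usage sketch for the tree above:
BinarySearchTree<int> tree = new BinarySearchTree<int>();
tree.Add(8);
tree.Add(3);
tree.Add(10);
BSTNode<int> node = tree.FindByValue(3); // returns the node holding 3
tree.Delete(3);
Console.WriteLine(tree.Count); // prints 2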
{ "language": "en", "url": "https://stackoverflow.com/questions/48307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Best Ruby on Rails social networking framework I'm planning on creating a social networking + MP3 lecture downloading / browsing / commenting / discovery website using Ruby on Rails. Partially for fun and also as a means to learn some Ruby on Rails. I'm looking for a social networking framework that I can use as a basis for my site. I don't want to re-invent the wheel. Searching the web I found three such frameworks. Which of these three would you recommend using and why? http://portal.insoshi.com/ http://www.communityengine.org/ http://lovdbyless.com/ A: I've not worked with these but am aware of this comparison: "Unlike Insoshi and Lovd By Less, which are full social networking Rails applications, Community Engine is a plugin that can add social networking features to existing Rails applications" from http://www.rubyinside.com/community-engine-rails-plugin-that-adds-social-networking-to-your-app-901.html A: It depends what your priorities are. If you really want to learn RoR, do it all from scratch. Seriously. Roll your own. It's the best way to learn, far better than hacking through someone else's code. If you do that, sometimes you'll be learning Rails, but sometimes you'll just be learning that specific social network framework. And you won't know which is which... The type of site you're suggesting sounds perfect for a Rails project. If you get stuck, then go browse the repositories of these frameworks. Who cares if you're reinventing the wheel? It's your site, your vision, your rules. If you just want a site up and running, then I would pick Insoshi or LovdbyLess simply because they're out of the box apps so you'll have to do less to get running. I suggest trying to install them both, and introducing yourself in the Google Groups. That'll give you a good indication of whether you're going to get along. A: Regarding RailsSpace, that's a very nicely built Rails 1.2 application, and I think it was updated for compatibility with Rails 2.x. There's even a terrific book that was written about the RailsSpace application (or rather, RailsSpace and the book were written together). But, RailsSpace became Insoshi, when the authors were so inspired by the amount of interest in a social networking site built in Rails. So while RailsSpace might be an interesting learning exercise, it's dead in terms of development. All of the authors' efforts (for more than a year now, I think) have been going into Insoshi instead, so that's where you should be looking. A: Another option for anyone who wants to create a social site without having to build it from scratch is the EngineY framework. EngineY is a social networking framework written in Ruby and Rails. It provides a lot of popular social networking features such as activity streams, groups, photos, message boards, status updates, events, blogs, wall posts, integrated twitter feeds, and more. EngineY is also under active development with new features being added all the time. You can read more about EngineY and download it from: http://www.enginey.com A: Use Rails 3 and roll your own. Don't copy and paste code though, look through the source and try to understand the reasoning or motive behind certain design decisions, only then will you learn. A: Just a quick update, EngineY now supports Rails 2.3.5 and just released this weekend is support for themes. This goes along with existing features including groups, blogs, photos, REST API, status updates, Facebook Connect, forums, private messages, user profiles, activity feeds, wall posts, and more... 
Check it out at http://www.enginey.com or on GitHub at http://github.com/timothyf/enginey A: Update: Insoshi's license has changed to the MIT license, which means you're basically free to do with it as you please. But still, review the license for any code you are considering before you get too invested in it. Something to keep in mind when deciding is the license for the code. Insoshi is licensed under the GNU Affero General Public License, http://insoshi.com/license. This means that you have to distribute the source code to your Insoshi-based web application to anyone who uses that web application. You might not want to do that, in which case you'll need to pay Insoshi a license fee (they dual license, like MySQL). LovdByLess is distributed under an MIT license, http://github.com/stevenbristol/lovd-by-less/tree/master/LICENSE. This means you can use the source code however you want to. A: One other positive to Community Engine is that it is using Engines, which is an advanced type of plugin that is becoming a part of Rails in 2.3. So what you learn from using Community Engine (and therefore Engines) will be useful going forward. A: I'm currently testing both LovdByLess and Insoshi. I was able to install and get Insoshi up and running fairly quickly, whereas LovdByLess is giving me a harder time. If you're in novice mode, I suggest getting the book from Head First. http://www.headfirstlabs.com/books/hfrails/ It is probably one of the better books out there for beginners, at least in my opinion, because I went through a few that were just way too confusing.
{ "language": "en", "url": "https://stackoverflow.com/questions/48320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Best practices for integrating third-party modules into your app We have a few projects that involve building an application that is composed of maybe 50% custom functionality, but then pulls in, say, a wiki, a forum, and other components that are "wheels" that have already been invented that we do not wish to re-write from scratch. These third-party applications usually have their own databases, themes, and authentication systems. Getting things like single sign-on, a common theme, or tagging/searching across entities in multiple sub-apps to work are pretty challenging problems, in my experience. What are some of the best practices for this kind of integration project? Our approach, so far, has been to try and pick our components carefully, choosing ones that have a clearly defined API, preferably via HTTP (like REST or SOAP), though that isn't always possible (we haven't found a decent forum that works that way). Are there suggestions folks can give to anyone trying to do this, as I suspect many of us are more and more frequently these days? A: If you are going with open source libraries, pick ones with a good license. I have found out the hard way (when trying to OEM an application) that many companies shy away from licenses like LGPL. I won't go into the details on why but they prefer Apache, BSD or MIT style licenses. Pick tools that have been around for a while. Check out the community and make sure it is active. See what other people are using and use those tools. Pick technologies that work well together. I've put together an application that uses ORM and Web Services. Spring Framework + Apache CXF + JPA for the ORM created a nice technology stack. All of the tools I use easily tie together in Spring, making it easy to use them together. The last thing you would want to do is pick tools that you have to write a bunch of code just to use together. Pick technologies that are based on standards. That way if the library or tool dies, you can easily switch to another that uses the same standard. A: Make sure that the interface between your application and the third-party application or library is such that you can replace it easily with something else just in case. In some cases the third-party software may just be an implementation of a standard API (Java does this a lot with JDBC, JMS, JNDI, ...). In other cases this means wrapping the third-party library in some API that you come up with. Of course there are times to throw that idea out the window and have things tightly integrated with the third-party software. Just be sure that you REALLY want to bind your application to that third party. Once you go down this road it's REALLY hard to go back and change your mind. A: Donald Knuth said that even better than reusable code is modifiable code, so if there is no API, you should look for an open source app that is written well and is therefore possible to customize. As for databases and login systems and other programming parts (I don't see how e.g. theming could benefit), you can also try, depending on circumstances, wrapping stuff so that the module believes it's on its own, but actually talks to your code.
It would be impossible for me to get the functionality of these controls if I tried to write them myself. This approach allows me to focus on the custom functionality of my application without having to worry about how I will access the database or how to make an editable treelist that looks pretty. Sure, I don't have a fully configured and working forum, but I know that I'll be using a SQL database for my app and I won't have to try and get different data storage components to work together. I don't have a wiki but I know how to use the DevExpress UI components, and formatting and validating data entry is a breeze with Peter Blum's controls. In other words, learn tools (and of course choose them carefully) that will speed development of all of your projects; then you can focus on the portions of your application that have to be customized. I'm not too concerned if it's open source or not as long as source code is available. If it's open source I donate to the project. If it's a commercial component I will pay a fair price. In any case, the tools help to make programming fun and the results have data integrity and are great looking. If I develop a wiki or forum I know that I can get them to work together seamlessly. Finally, all of the tools I have mentioned have been around for a long time and are written by outstanding developers with great reputations.
{ "language": "en", "url": "https://stackoverflow.com/questions/48322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Am I allowed to run a javascript runtime (like v8) on the iPhone? According to this discussion, the iPhone agreement says that it doesn't allow "loading of plugins or running interpreted code that has been downloaded". Technically, I would like to download scripts from our server (embedded in a proprietary protocol). Does this mean I wouldn't be allowed to run a runtime like v8 in an iPhone app? This is probably more of a legal question. A: I think your interpretation is correct - you would not be allowed to download and execute JavaScript code in v8. If there were some way to run the code in an interpreter already on the iPhone (i.e. the JavaScript engine in MobileSafari) then that would be permitted, I think. A: This is partially a technical question too. V8 as currently implemented won't run on the iPhone. No JIT-based VM will. A: Well, I embedded Lua into my application already and am programming most of the logic in Lua, then downloading it to my iPhone for fast iteration, but this is only intended during development. Once I ship, the scripts will be placed in the source and compiled into byte-code shipped along with the app just like any other resource. I'd say this applies to V8 as well. A: I concur. My reading is also that DOWNLOADED scripts are not allowed. Pre-installed and user-written scripts are fine. But it is a fine distinction and IANAL etc etc.
{ "language": "en", "url": "https://stackoverflow.com/questions/48338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Cannot handle FaultException I have a WCF service that does an operation, and in this operation there could be a fault. I have stated that there could be a fault in my service contract. Here is the code below: public void Foo() { try { DoSomething(); // throws FaultException<FooFault> } catch (FaultException) { throw; } catch (Exception ex) { myProject.Exception.Throw<FooFault>(ex); } } In the service contract: [FaultContract(typeof(FooFault))] void Foo(); When a FaultException was thrown by the DoSomething() method while I was running the application, the exception was first caught at the "catch (Exception ex)" line and the debugger broke there. Then when I pressed F5 again, it did what it normally has to. I wonder why that break happens? And could it be a problem once published? A: Are you consuming the WCF service from Silverlight? If so, a special configuration is needed to make the service return an HTTP 200 code instead of 500 in case of error. The details are here: http://msdn.microsoft.com/en-us/library/dd470096%28VS.96%29.aspx A: Actually your exception is caught but you fail to notice it since Visual Studio highlights the next line, not the line throwing the exception. Replace throw; with some other lines and see them in action. A: Take a closer look at the caught exception. Was it FaultException<FooFault> or FaultException? There are 2 versions of the FaultException class: generic and non-generic A: @yapiskan, C# is a strongly typed language: Foo<X> != Foo. So if you need to catch some exception, provide the exact type in the catch clause. You can learn more about exception handling by reading this MSDN article. A: The problem is that exceptions are checked in the order they are declared. Try putting the Exception catch block first and you will see that the compiler complains: other catch blocks will NEVER be evaluated. The following code is generally what .NET is doing in your case: // Begin try DoSomething(); // throws FaultException<FooFault> // End try if (exceptionOccurred) { if (exception is FaultException) // FE catch block. { throw; // Goto Exit } if (exception is Exception) // EX catch block { throw new FaultException<FooFault>(); // Goto Exit } } // Exit As you can see, your FaultException never re-enters the try-catch-finally (i.e. try-catch-finally is not recursive in nature). Try this instead: try { try { DoSomething(); // throws FaultException<FooFault> } catch (Exception ex) { if (ex is FaultException<FooFault>) throw; else myProject.Exception.Throw<FooFault>(ex); } } catch (FaultException) { throw; } HTH.
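The catch-ordering point in isolation, as a sketch: catch the closed generic type first, then the non-generic base; anything unexpected gets wrapped so the client still receives the declared fault (the FaultException<T>(T detail, string reason) constructor overload is assumed here).
try
{
    DoSomething();
}
catch (FaultException<FooFault>)
{
    throw; // already the declared fault; let it propagate untouched
}
catch (FaultException)
{
    throw; // some other WCF fault; don't re-wrap it
}
catch (Exception ex)
{
    // wrap unexpected failures in the declared fault type
    throw new FaultException<FooFault>(new FooFault(), ex.Message);
}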
{ "language": "en", "url": "https://stackoverflow.com/questions/48340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Useful browser plugins for OpenID authentication? I've read https://stackoverflow.com/questions/41354/is-the-stackoverflow-login-situation-bearable and must agree to a certain point that OpenID (for me) makes it more difficult to log in. Not a show stopper, but I'm used to opening the front page of the site, there's a small login form, Firefox's password manager already filled in the correct values, submit, done. One click. Here - and it's currently the only site with OpenID I use - the password/form manager doesn't even fill in my "login id". I often close all browser windows and all cookies are erased - and I would like to keep it this way. Are there any Firefox plugins you would recommend that make the login process easier? Maybe something that checks my status at myOpenId and performs the login if necessary. Edit: Unfortunately RichQ is right and I can't use Seatbelt. And Sxipper ...not quite what I had in mind ;) Anyway, both solutions would take away some of the "pain", so upvotes for both of you. I've also tried the SSL certificate. But that only adds more steps. Hopefully I did something wrong and some of those steps can be eliminated: * *Click "login" at stackoverflow *Click on the "select provider" Button. *Click on MyOpenId *Enter Username *Click "Login" (Sxipper could reduce the previous 4 steps to a single mouseclick) *MyOpenId login page is loaded *Click "Sign in with an SSL certificate" *Choose Certificate (grrr) *Click "Login" (GRRR) *Back to stackoverflow, finally. What I really would like is: * *Click "login" at stackoverflow *My (only) LoginId is filled in *Click "Login" *If necessary the certificate is chosen automagically, SSL login performed *Back to stackoverflow without any further user interaction. That would be more or less what I'm used to - and I'm a creature of habit :) A: VeriSign (ick)'s SeatBelt plugin: https://pip.verisignlabs.com/seatbelt.do Ideally, the plugin would allow a higher level of authentication. I know something like this was planned for the OLPC. A: You could try Sxipper. It provides intelligent automatic form-fill, including auto-login. From the Sxipper FAQ: How does Sxipper support OpenID? Sxipper remembers your OpenIDs and presents an overlay. You choose the one you want to use and log in with one click. Sxipper also helps protect you against phishing.
{ "language": "en", "url": "https://stackoverflow.com/questions/48344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Can't Re-bind a socket to an existing IP/Port Combination Greetings, I'm trying to find a way to 'unbind' a socket from a particular IP/Port combination. My pseudocode looks like this: ClassA a = new ClassA(); //(class A instantiates socket and binds it to 127.0.0.1:4567) //do something //...much later, a has been garbage-collected away. ClassA aa = new ClassA(); //crash here. At this point, .Net informs me that I've already got a socket bound to 127.0.0.1:4567, which is technically true. But no matter what code I put in ClassA's destructor, or no matter what functions I call on the socket (I've tried .Close() and .Disconnect(true)), the socket remains proudly bound to 127.0.0.1:4567. What do I do to be able to 'un-bind' the socket? EDIT: I'm not relying solely on garbage collection (though I tried that approach as well). I tried calling a.Close() or a.Disconnect() and only then instantiating aa; this doesn't solve the problem. EDIT: I've also tried implementing IDisposable, but the code never got there without my calling the method (which was the equivalent of earlier attempts, as the method would simply try .Close and .Disconnect). Let me try calling .Dispose directly and get back to you. EDIT (lots of edits, apologies): Implementing IDisposable and calling a.Dispose() from where 'a' loses scope doesn't work - my Dispose implementation still has to call either .Close or .Disconnect(true) (or .Shutdown(Both)) but none of those unbind the socket. Any help would be appreciated! A: (this is what finally got everything to work for me) Make sure EVERY socket that the socket in A connects to has socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true); set upon being initiated. A: You can't rely on an object being garbage collected in C# (I assume you're using C#, based on tagging) if it holds resources, like being bound to a network resource as in your example, or holding some other kind of stream; a file stream would be a common example. You have to make sure to release the resources that the object is holding, so that it can be garbage collected; otherwise it won't be garbage collected, but will remain living somewhere in memory. Your pseudocode example doesn't show that you're releasing the resources; you just state that the object gets (should get) garbage collected. A: The garbage collector doesn't guarantee you that the socket will ever be closed. For a complete example read this MSDN example. The main point is to actually call Socket.Close() as soon as possible. For example, ClassA could implement IDisposable and use it like this: using (ClassA a = new ClassA()) { // code goes here } // 'a' is now disposed and the socket is closed A: The garbage collector runs the finalizer of the object at some indeterminate time. You could implement the IDisposable interface and call the Dispose() method before the object loses scope - or let the using statement do that for you. 
See http://msdn.microsoft.com/en-us/library/system.idisposable.aspx and http://msdn.microsoft.com/en-us/library/yh598w02.aspx Edit: works fine for me: using System; using System.Net.Sockets; using System.Net; namespace ConsoleApplication1 { class Program { class ClassA : IDisposable { protected Socket s; public ClassA() { s = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); s.Bind(new IPEndPoint(IPAddress.Any, 5678)); } public void Dispose() { s.Close(); } } static void Main(string[] args) { using (ClassA a = new ClassA()) { } using (ClassA b = new ClassA()) { } } } } A: The best solution is to retry binding the socket a few times (2-3). If the first attempt fails, I have found that retrying will properly (and permanently) close the original socket. HTH, _NT
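Combining the two fixes from this thread into one sketch: set ReuseAddress before Bind, and release the socket deterministically with using instead of waiting for the garbage collector.
using (Socket s = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp))
{
    // allow re-binding to an address/port still held by a lingering socket
    s.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
    s.Bind(new IPEndPoint(IPAddress.Loopback, 4567));
    // ... use the socket ...
} // Dispose closes the socket and frees 127.0.0.1:4567 immediately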
{ "language": "en", "url": "https://stackoverflow.com/questions/48356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is tagging organizationally superior to discrete subforums? I am interested in choosing a good structure for an online message board-type application. I will use SO as an example, as I think it's an example that we are all familiar with, but my question is more general; it is about how to achieve the right balance between organization and flexibility in online message boards. The questions page is a load of random stuff. It moves quickly (some might say, too quickly) and contains a huge number of questions that I'm not interested in. The idea, I imagine, is that we can use tags to find questions that we're interested in. However, I'm not sure that this works: you can't use tags negatively. I'm not interested in PHP or Perl or web development. I want to exclude such posts. But with the tags, I can't. Although discrete subforums are in a sense less flexible, as they generally force you to pick a category even if a question might fit into two (if SO had, say, areas for "Web Development", "Games Development", "Computer Science", "Systems Programming", "Databases", etc. then sure, some people might want to post about developing web-based games, for example) is it worth sacrificing some of that flexibility in order to make it easier to find the content that you are interested in, and hide the content that you are not interested in? Is there any way with a pure tagging system to achieve the greater ease of use that subforums provide? A: The real problem with subforums comes when you guess wrong about which topics have enough interest to get their own subforums. While some topics end up with their own vibrant subcommunities, others end up as empty ghettos, with little activity or feeling of community. Topics that might flourish as occasional subjects in a larger forum end up fragmented among many subforums, none of which has the critical mass of people necessary to have an active, vibrant community. A: Though I think that tagging is superior to grouping, people tend to think hierarchically. In general it depends on the target group for the forum. Maybe you can go with a mixture: use tagging and later use tag groups to order the posts. Delicious uses this, for example, and I find it rather helpful. A: If you're worried about the divide between specific forums and open tag-based systems, like Stack Overflow, consider making a query system that allows you to do somewhat more complex queries than just the AND operator, like here on Stack Overflow. I cannot make a query here that will give me all questions in .NET, SQL or C#, combined, and that is the biggest irritation I have with the tags. With such a query system, you can create virtual forums at least. Other than that, I don't really have a good opinion. I like both, and I haven't yet decided which one is best. A: While it's currently the case that you can't use tags to hide content, it shouldn't be impossible. Using SO as an example again, there's no reason that a system similar to the ignore function on a forum couldn't be made for the tag system. By adding a right-click context menu or a small "X" link somewhere in the tag display, tags could be marked as ignored. 
This would also allow the current tag feature to function: seeing everything (minus your ignore list), or clicking a tag to see only questions with that tag. Ignored tags could be managed in your profile if you should later develop an interest in PHP or INTERCAL that you lacked before. The real question is that of performance. In my head it's as simple as replacing a SELECT [stuff] WHERE Tag = 'buffer-overflow' with SELECT [stuff] WHERE Tag NOT IN ('php','offtopic','funny-hat-friday') but I've not put together any DB-backed sites that get absolutely pounded on by thousands of people.
{ "language": "en", "url": "https://stackoverflow.com/questions/48365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Sprint velocity calculations Need some advice on working out the team velocity for a sprint. Our team normally consists of about 4 developers and 2 testers. The scrum master insists that every team member should contribute equally to the velocity calculation, i.e. we should not distinguish between developers and testers when working out how much we can do in a sprint. This is correct according to Scrum, but here's the problem. Despite suggestions to the contrary, testers never help with non-test tasks and developers never help with non-dev tasks, so we are not cross-functional team members at all. Also, despite various suggestions, testers normally spend the first few days of each sprint waiting for something to test. The end result is that typically we take on far more dev work than we actually have capacity for in the sprint. For example, the developers might contribute 20 days to the velocity calculation and the testers 10 days. If you add up the tasks after sprint planning though, dev tasks add up to 25 days and test tasks add up to 5 days. How do you guys deal with this sort of situation? A: We struggle with this issue too. Here is what we do. When we add up capacity and tasks, we add them up together and separately. That way we know that we have not exceeded the total time for each group. (I know that is not truly Scrum, but we have QA folks that don't program and so, to maximize our resources, they end up testing and we (the developers) end up deving.) The second thing we do is we really focus on working in slices. We try to pick tasks to get done first that can go to the QA folks fast. The trick to this is that you have to focus on getting the least testable amount done and moved to the testers. If you try to get a whole "feature" done then you are missing the point. While they wait for us they usually put together test plans. It is still a work in progress for us, but that is how we try to do it. A: Since Agile development is about transparency and accountability, it sounds like the testers should have assigned tasks that account for their velocity. Even if that means they have a task for surfing the web waiting for testing (though I would think they would be better served developing test plans for the dev team's tasks). This will show the inefficiencies in your organization, which isn't popular, but that is what Agile is all about. The bad part of that is that your testers may be penalized for something that is an organizational issue. The company I worked for had two separate (dev and QA) teams with two different iteration cycles. The QA cycle was offset by a week. That unfortunately led to complexity when it came to task acceptance, since a product wasn't really ready for release until the end of the QA's iteration. That isn't a properly integrated team, but neither is yours from the sound of it. Unfortunately the QA team never really followed Scrum practices (no real planning, stand-up, or retrospective) so I can't really tell if that is a good solution or not. A: FogBugz uses EBS (Evidence Based Scheduling) to create a probability curve of when you will ship a given project based on existing performance data and estimates. I guess you could do the same thing with this, just you would need to enter for the testers: "Browsing Internet waiting for developers (1 week)" A: This might be slightly off what you were asking, but here it goes: I really don't like using velocity as a measure of how much work to do in the next sprint/iteration. To me velocity is more of a tool for projections. 
The team lead/project manager/scrum master can look at the average velocity of the last few iterations and have a fairly good trend line to project the end of the project. The team should be building iterations by commitment as a team. Keep picking stories until the iteration has a good amount of work that the team will commit to complete. It's your responsibility as a team to make sure you aren't slacking by picking too few and not over-committing by picking too many. Different skill levels and specialties work themselves out as the team commits to the iteration. Under this model, everything balances out. The team has a reasonable workload to accomplish and the project manager has a commitment for completion. A: Make the testers pair-program as passive peers. If they have nothing to test, at least they can watch out for bugs in the field. When they have something to test, in the second part of the week, they move to the functionality/"user story compliance" level of testing. This way, you have both groups productive, and basically the testers "comb" the code as it goes on. A: Sounds to me like your system is working, just not as well as you'd like. Is this a paid project? If it is, you could make pay a meritocracy. Pay people based on how much of the work they get done. This would encourage cross-discipline work. Although, it might also encourage people to work on pieces that weren't theirs to begin with, or internal sabotage. Obviously, you'd have to be on the lookout for people trying to game the system, but it might work. Surely testers wouldn't want to earn half of what devs do. A: First an answer for velocity, then my personal insight about testers in a non-cross-functional Scrum team and the early days of every sprint. I see an inconsistency there. If the team is not cross-functional, you distinguish testers and developers. In this case you must also distinguish them in the velocity calculation. If the team is not cross-functional, testers don't really increase your velocity. Your velocity will be at most what the developers can implement, but no more than what the testers can test (if everything must be tested). Talk to your scrum master, otherwise there will always be problems with velocity and estimation. Now as for testers and the early days of a sprint. I work as a tester in a non-cross-functional team with 5 devs, so this answer may be a bit personal. You could solve this in two ways: a) change the work organization by adding a separate test sprint, or b) change the way testers work. In a) you create a separate testing sprint. It can happen in parallel to the dev sprint (just shifted those few days) or you can make it happen once every two or three dev sprints. I have heard about these solutions but I have never worked this way. In b) you must ask testers to review their approach to testing activities. Maybe it depends on the practices and tools you use, or the process you follow, but how can they have nothing to do in these early days? As I mentioned before, I work as a tester with 5 developers in a non-cross-functional team. If I waited to start my work until the developer finished his task, I would never test all the features in a given sprint. Unless your testers perform only exploratory testing, they should have things to do before a feature is released to the test environment. There are some activities that can be done (or must be done) before the tester gets the feature/code into his hands. 
The following is what I do before features are released to the test environment:
- go through the requirements for the features to be implemented
- design test scripts (high-level design)
- prepare draft test cases
- go through possible test data (if the change being implemented manipulates data in the system, you need to make a snapshot of this data to compare it later with what the feature does to it)
- wrap up everything in test suites
- communicate with the developer as the feature is being developed - this way you get a better understanding of the implemented solution (instead of asking when his mind is already on another feature)
- make any necessary changes to the test cases as the feature evolves
Then when the feature is complete you:
- flesh out the test cases with any details not known to you earlier (trivial things: a button name can change, or an additional step appears in a wizard)
- perform the tests
- raise issues.
Actually I find myself spending more time on the first part (designing tests, and preparing test scripts in the appropriate tool) than actually performing those tests. If testers do all they can right away instead of waiting for code to be released to the test environment, it should help with this initial gap and it will minimize the risk of testers not finishing their activities before the end of the sprint. Of course there will always be less for testers to do in the beginning and more at the end, but you can try to minimize this difference. And even when the above still leaves them lots of time to waste at the beginning, you can give them tasks that don't involve coding: some configuration, some maintenance, documentation updates, and so on.
A: The solution is never black and white, as each sprint may contain stories that require testing and others that don't. There is no problem in Agile with apportioning a tester, for example, for 50% of their time during one sprint and 20% in the next sprint. There is no sense in trying to apportion a tester 100% of their time to a sprint and trying to justify it. Time management is the key.
A: Testers and developers estimate story points together. The velocity of a sprint is always a combined effort. QA / testers cannot have their own separate velocity calculations. That is fundamentally wrong. If you have 3 devs and 2 testers and you include the testers' capacity and relate it to your output, then the productivity will always appear low. Testers take part in test case design, defect management and testing, which is not directly attributed to development. You can have effort tracked against each of these testing tasks but cannot assign velocity points.
{ "language": "en", "url": "https://stackoverflow.com/questions/48386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Eclipse spelling engine does not exist I'm using Eclipse 3.4 (Ganymede) with CDT 5 on Windows. When the integrated spell checker doesn't know some word, it proposes (among others) the option to add the word to a user dictionary. If the user dictionary doesn't exist yet, the spell checker then offers to help configure it and shows the "General/Editors/Text Editors/Spelling" preference pane. This preference pane however states that "The selected spelling engine does not exist", but has no control to add or install an engine. How can I put a spelling engine in existence? Update: What solved my problem was to also install the JDT. This solution was brought up on 2008-09-07 and was accepted, but is now missing.
A: The CDT version of Ganymede apparently shipped improperly configured. After playing around for a while, I have come up with the following steps that fix the problem.
*Export your Eclipse preferences (File > Export > General > Preferences).
*Open the exported file in a text editor.
*Find the line that says /instance/org.eclipse.ui.editors/spellingEngine=org.eclipse.jdt.internal.ui.text.spelling.DefaultSpellingEngine
*Change it to /instance/org.eclipse.ui.editors/spellingEngine=org.eclipse.cdt.internal.ui.text.spelling.CSpellingEngine
*Save the preferences file.
*Import the preferences back into Eclipse (File > Import > General > Preferences).
You should now be able to access the Spelling configuration page as seen above. Note: if you want to add a custom dictionary, Eclipse must be able to access and open the file (i.e. it must exist - an empty file will work)
A: Are you using the C/C++ Development Tools exclusively? The Spellcheck functionality is dependent upon the Java Development Tools being installed also. The spelling engine is scheduled to be pushed down from JDT to the Platform, so you can get rid of the Java-related bloat soon enough. :)
A: Just a word of warning: If you follow the advice to replace the preference as above, it will affect spell checking if you also use Java. I think all I needed to do was change the "Select spelling engine to use" to the C++ engine (near the top of the preference settings on the preference page General->Editors->Text Editors->Spelling).
{ "language": "en", "url": "https://stackoverflow.com/questions/48390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: .Net 3.5 silent installer? Is there a redistributable .Net 3.5 installation package that is a silent installer? Or alternatively, is there a switch that can be passed to the main redistributable .Net 3.5 installer to make it silent?
A: dotnetfx35setup.exe /q /norestart
See the .NET deployment guide at: http://msdn.microsoft.com/en-us/library/cc160716.aspx
A: For Windows 10 you need to do the following:
DISM /Online /Enable-Feature /FeatureName:NetFx3 /All /LimitAccess /Source:"Path\To\microsoft-windows-netfx3-ondemand-package"
You can find those packages under sources\sxs of a Windows DVD
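If you are scripting the rollout, it can also help to check the installer's exit code afterwards; a minimal batch sketch (0 and 3010 are the standard Windows installer success codes, but verify the meanings for your package):
@echo off
rem Run the .NET 3.5 redistributable silently, then inspect the result.
dotnetfx35setup.exe /q /norestart
if %ERRORLEVEL% EQU 0 echo Installed successfully.
if %ERRORLEVEL% EQU 3010 echo Installed successfully; reboot required.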
{ "language": "en", "url": "https://stackoverflow.com/questions/48397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What are the best resources to get started with Eclipse plugin development? I'm interested in writing Eclipse plugins; where do I start? What resources have helped you? I'm looking for: 1. Tutorials 2. Sites devoted to plugin development 3. Books
A: I have done quite a bit with an RCP application that made use of multiple plug-ins. This book helped me tremendously on all fronts: RCP framework and plug-in development: http://www.amazon.com/Eclipse-Rich-Client-Platform-Applications/dp/0321334612 The book walks you through the development of an IM chat client using RCP and plug-in development. Also the Eclipse site and IBM have some pretty good tutorials; here is one: http://www.ibm.com/developerworks/library/os-ecplug/
A: You can find a good step-by-step detailed tutorial here: http://www.eclipsepluginsite.com/ Other tutorials: http://www.ibm.com/developerworks/opensource/library/os-eclipse-snippet/index.html?ca=dgr-lnxw16RichEclipse http://www.vogella.de/articles/EclipsePlugIn/article.html A decent book that I've used is "Eclipse: Building Commercial-Quality Plug-Ins".
A: The RCP book mentioned above is great. Also there are some older online articles on the eclipse site starting with http://www.eclipse.org/articles/Article-RCP-1/tutorial1.html. Unfortunately they are a bit out of date.
A: Eclipse's own Help contains a Platform Plug-in Developer Guide. It is suggested reading on IBM's site. I'm trying to build a plugin myself. After reading a little bit of the "Eclipse Plug-ins" book I missed a more tutorial-style approach. Vogella's tutorial is quite good. After doing it, I started reading some Eclipse code (as described by Vogella in his tutorial). And now I have found Eclipse's Help resource.
A: Here are all the books available for developing Eclipse plugins: http://www.eclipseplugincentral.com/books-index-req-view_subcat-sid-4.html
A: As Eclipse RCP is also based on plug-ins this might also help: Eclipse RCP Introduction
{ "language": "en", "url": "https://stackoverflow.com/questions/48425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How could I graphically display the memory layout from a .map file? My gcc build toolchain produces a .map file. How do I display the memory map graphically?
A: I've written a C# program to display the information in a map file along with information not usually present in the map file (like static symbols, provided you can use binutils). The code is available here. In short, it parses the map file and also uses BINUTILS (if available) to gather more information. To run it you need to download the code and run the project under Visual Studio, browse to the map file path and click Analyze. Note: only works for GCC/LD map files. Screenshot: (omitted)
A: Here's the beginnings of a script in Python. It loads the map file into a list of Sections and Symbols (first half). It then renders the map using HTML (or do whatever you want with the sections and symbols lists). You can control the script by modifying these lines:
with open('t.map') as f:
colors = ['9C9F84', 'A97D5D', 'F7DCB4', '5C755E']
total_height = 32.0
map2html.py
from __future__ import with_statement
import re

class Section:
    def __init__(self, address, size, segment, section):
        self.address = address
        self.size = size
        self.segment = segment
        self.section = section
    def __str__(self):
        return self.section+""

class Symbol:
    def __init__(self, address, size, file, name):
        self.address = address
        self.size = size
        self.file = file
        self.name = name
    def __str__(self):
        return self.name

#===============================
# Load the Sections and Symbols
#
sections = []
symbols = []
with open('t.map') as f:
    in_sections = True
    for line in f:
        m = re.search('^([0-9A-Fx]+)\s+([0-9A-Fx]+)\s+((\[[ 0-9]+\])|\w+)\s+(.*?)\s*$', line)
        if m:
            if in_sections:
                sections.append(Section(eval(m.group(1)), eval(m.group(2)), m.group(3), m.group(5)))
            else:
                symbols.append(Symbol(eval(m.group(1)), eval(m.group(2)), m.group(3), m.group(5)))
        else:
            if len(sections) > 0:
                in_sections = False

#===============================
# Generate the HTML File
#
colors = ['9C9F84', 'A97D5D', 'F7DCB4', '5C755E']
total_height = 32.0
segments = set()
for s in sections:
    segments.add(s.segment)
segment_colors = dict()
i = 0
for s in segments:
    segment_colors[s] = colors[i % len(colors)]
    i += 1
total_size = 0
for s in symbols:
    total_size += s.size
sections.sort(lambda a,b: a.address - b.address)
symbols.sort(lambda a,b: a.address - b.address)

def section_from_address(addr):
    for s in sections:
        if addr >= s.address and addr < (s.address + s.size):
            return s
    return None

print "<html><head>"
print " <style>a { color: black; text-decoration: none; font-family:monospace }</style>"
print "<body>"
print "<table cellspacing='1px'>"
for sym in symbols:
    section = section_from_address(sym.address)
    height = (total_height/total_size) * sym.size
    font_size = 1.0 if height > 1.0 else height
    print "<tr style='background-color:#%s;height:%gem;line-height:%gem;font-size:%gem'><td style='overflow:hidden'>" % \
        (segment_colors[section.segment], height, height, font_size)
    print "<a href='#%s'>%s</a>" % (sym.name, sym.name)
    print "</td></tr>"
print "</table>"
print "</body></html>"
And here's a bad rendering of the HTML it outputs: (screenshot omitted)
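In case anyone also needs to produce the .map file in the first place: the GNU linker can emit one through gcc (the -Wl,-Map flag is standard binutils ld), and the script above reads 't.map' from the working directory and writes HTML to stdout, so a run might look like this (file names are just examples):
# ask the linker for a map file, then render it
gcc main.c -o app -Wl,-Map,t.map
python map2html.py > map.html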
{ "language": "en", "url": "https://stackoverflow.com/questions/48426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "65" }
Q: Pros & cons between LINQ and traditional collection based approaches Being relatively new to the .net game, I was wondering, has anyone had any experience of the pros / cons between the use of LINQ and what could be considered more traditional methods working with lists / collections? For a specific example of a project I'm working on: a list of unique id / name pairs is being retrieved from a remote web-service.
*this list will change infrequently (once per day),
*will be read-only from the point of view of the application where it is being used
*will be stored at the application level for all requests to access
Given those points, I plan to store the returned values at the application level in a singleton class. My initial approach was to iterate through the list returned from the remote service and store it in a NameValueCollection in a singleton class, with methods to retrieve from the collection based on an id:
sugarsoap soapService = new sugarsoap();
branch_summary[] branchList = soapService.getBranches();
foreach (branch_summary aBranch in branchList)
{
    branchNameList.Add(aBranch.id, aBranch.name);
}
The alternative using LINQ is to simply add a method that works on the list directly once it has been retrieved:
public string branchName (string branchId)
{
    //branchList populated in the constructor
    branch_summary bs = from b in branchList where b.id == branchId select b;
    return branch_summary.name;
}
Is either better than the other - is there a third way? I'm open to all answers, for both approaches and both in terms of solutions that offer elegance, and those which benefit performance.
A: I don't think the LINQ you wrote would compile; it'd have to be
public string branchName (string branchId)
{
    //branchList populated in the constructor
    branch_summary bs = (from b in branchList where b.id == branchId select b).FirstOrDefault();
    return bs == null ? null : bs.name;
}
note the .FirstOrDefault() I'd rather use LINQ for the reason that it can be used in other places, for writing more complex filters on your data. I also think it's easier to read than the NameValueCollection alternative. that's my $0.02
A: In general, your simple one-line for/foreach loop will be faster than using Linq. Also, Linq doesn't [always] offer significant readability improvements in this case. Here is the general rule I code by: If the algorithm is simple enough to write and maintain without Linq, and you don't need delayed evaluation, and Linq doesn't offer sufficient maintainability improvements, then don't use it. However, there are times where Linq immensely improves the readability and correctness of your code, as shown in two examples I posted here and here.
A: I'm not sure a singleton class is absolutely necessary; do you absolutely need global access at all times? Is the list large? I assume you will have a refresh method on the singleton class for when the properties need to change and also that you have some way of notifying the singleton to update when the list changes. Both solutions are viable. I think LINQ will populate the collection faster in the constructor (but not noticeably faster). Traditional collection based approaches are fine. Personally, I would choose the LINQ version if only because it is new tech and I like to use it. Assuming your deployment environment has .NET 3.5... Do you have a method on your webservice for getting branches by Id? That would be the third option if the branch info is needed infrequently.
A: Shortened and workified: public string BranchName(string branchId) { var bs = branchList.FirstOrDefault(b => b.Id == branchId); return bs == null ? null : bs.Name; }
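Another "third way", if lookups vastly outnumber the once-a-day refresh: build a Dictionary once when the list is fetched, so each lookup is a hash probe instead of a linear scan. A sketch reusing the question's types (ToDictionary needs using System.Linq, and it assumes the ids are unique, as the question states):
// built once, in the constructor / refresh code
Dictionary<string, string> branchNames = branchList.ToDictionary(b => b.id, b => b.name);

public string BranchName(string branchId)
{
    string name;
    return branchNames.TryGetValue(branchId, out name) ? name : null;
}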
{ "language": "en", "url": "https://stackoverflow.com/questions/48432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How Much Time Should be Allotted for Testing & Bug Fixing Every time I have to estimate time for a project (or review someone else's estimate), time is allotted for testing/bug fixing that will be done between the alpha and production releases. I know very well that estimating so far into the future regarding a problem-set of unknown size is not a good recipe for a successful estimate. However for a variety of reasons, a defined number of hours invariably gets assigned at the outset to this segment of work. And the farther off this initial estimate is from the real, final value, the more grief those involved with the debugging will have to take later on when they go "over" the estimate. So my question is: what is the best strategy you have seen with regards to making estimates like this? A flat percentage of the overall dev estimate? Set number of hours (with the expectation that it will go up)? Something else? Something else to consider: how would you answer this differently if the client is responsible for testing (as opposed to internal QA) and you have to assign an amount of time for responding to the bugs that they may or may not find (so you need to figure out time estimates for bug fixing but not for testing)
A: It really depends on a lot of factors. To mention but a few: the development methodology you are using, the amount of testing resource you have, the number of developers available at this stage in the project (many project managers will move people onto something new at the end). As Rob Rolnick says, 1:1 is a good rule of thumb; however, in cases where a specification is bad the client may push for "bugs" which are actually badly specified features. I was recently involved in a project which used many releases but more time was spent on bug fixing than actual development due to the terrible specification. Ensure a good specification/design and your testing/bug fixing time will be reduced because it will be easier for testers to see what and how to test and any clients will have less leeway to push for extra features.
A: Maybe I just write buggy code, but I like having a 1:1 ratio between devs and testers. I don't wait until alpha to test, but rather do it throughout the whole project. The logic? Depending on your release schedule, there could be a good deal of time between when development starts and when your alpha, beta, and ship dates are. Furthermore, the earlier you catch bugs, the easier (and cheaper) they are to fix. A good tester, who finds bugs soon after each check-in, is invaluable. (Or, better yet, before a check-in from a PR or DPK) Simply put, I am still extremely familiar with my code, so most bug fixes become super simple. With this approach, I tend to leave roughly 15% of my dev time to bug fixing. At least when I do estimates. So in a 16-week run I'd leave around 2-3 weeks.
A: Only a good amount of accumulated statistics from previous projects can help you to give precise estimates. If you have a well-defined set of requirements, you can make a rough calculation of how many use cases you have. As I said, you need to have some statistics for your team. You need to know the average bugs-per-LOC number to estimate the total bug count. If you don't have such numbers for your team, you can use industry average numbers. After you have estimated the LOC (number of use cases * NLOC) and the average bugs-per-line, you can give a more or less accurate estimate of the time required to release the project.
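As a rough sketch of that arithmetic (every number below is made up; substitute your team's own statistics):
use_cases = 40
loc_per_use_case = 250      # your historical NLOC per use case
bugs_per_kloc = 15          # team or industry average defect density
fix_hours_per_bug = 3

est_loc = use_cases * loc_per_use_case               # 10000 LOC
est_bugs = est_loc * bugs_per_kloc / 1000.0          # 150 bugs
print(est_bugs * fix_hours_per_bug, "hours of bug-fixing")  # 450.0 hours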
From my practical experience, time spent on bug-fixing is equal to or greater than (in 99% of cases :) ) the time spent on the original implementation.
A: From the testing Bible: Testing Computer Software p. 31: "Testing [...] accounts for 45% of initial development of a product." A good rule of thumb is thus to allocate about half of your total effort to testing during initial development.
A: Use a language with Design-by-Contract or "Code-contracts" (preconditions, check assertions, post-conditions, class-invariants, etc) to get "testing" as close to your classes and class features (methods and properties) as possible. Then use TDD to test your code with its contracts. Use as much self-built code-generation as you possibly can. Generated code is proven, predictable, easier to debug, and easier/faster to fix than all-hand-coded code. Why write what you can generate? However, do not use OPG (other-peoples-generators)! Code YOU generate is code you control and know. You can expect the ratio to invert over the course of your project--that is--you will write lots of hand-code and contracts at the start (1:1) of your project. As you see patterns, teach a code generator YOU WRITE to generate the code for you and reuse it. The more you generate, the less you design, write, debug, and test. By the end of the project, you will find that your equation has inverted: you're writing less of your core-code, and your focus shifts to your "leaf-code" (last-mile) or specialized (vs generalized and generated) code. Finally--get a code analyzer. A good, automated code analysis rule system and engine will save you oodles of time finding "stupid-bugs" because there are well-known gotchas in how people write code in particular languages. In Eiffel, we now have Eiffel Inspector, where we not only use the 90+ rules coming with it, but are learning to write our own rules for our own discovered "gotchas". Such analyzers not only save you in terms of bugs, but enhance your design--even GREEN programmers "get it" rather quickly and stop making rookie mistakes earlier and learn faster! The rule of thumb for rewriting existing systems is this: "If it took 10 years to write, it will take 10 years to re-write." In our case, using Eiffel, Design-by-Contract, Code Analysis, and Code Generation, we have re-written a 14 year system in 4 years and will fully deliver in 4 1/2. The new system is about 4x to 5x more complex than the old system, so this is saying a lot!
{ "language": "en", "url": "https://stackoverflow.com/questions/48439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Rule of thumb for choosing an implementation of a Java Collection? Anyone have a good rule of thumb for choosing between different implementations of Java Collection interfaces like List, Map, or Set? For example, generally why or in what cases would I prefer to use a Vector or an ArrayList, a Hashtable or a HashMap?
A: About your first question... List, Map and Set serve different purposes. I suggest reading about the Java Collections Framework at http://java.sun.com/docs/books/tutorial/collections/interfaces/index.html. To be a bit more concrete:
*use List if you need an array-like data structure and you need to iterate over the elements
*use Map if you need something like a dictionary
*use a Set if you only need to decide if something belongs to the set or not.
About your second question... The main difference between Vector and ArrayList is that the former is synchronized, the latter is not synchronized. You can read more about synchronization in Java Concurrency in Practice. The difference between Hashtable (note that the T is not a capital letter) and HashMap is similar, the former is synchronized, the latter is not synchronized. I would say that there is no rule of thumb for preferring one implementation or another, it really depends on your needs.
A: For non-sorted the best choice, more than nine times out of ten, will be: ArrayList, HashMap, HashSet. Vector and Hashtable are synchronised and therefore might be a bit slower. It's rare that you would want synchronised implementations, and when you do their interfaces are not sufficiently rich for their synchronisation to be useful. In the case of Map, ConcurrentMap adds extra operations to make the interface useful. ConcurrentHashMap is a good implementation of ConcurrentMap. LinkedList is almost never a good idea. Even if you are doing a lot of insertions and removals, if you are using an index to indicate position then that requires iterating through the list to find the correct node. ArrayList is almost always faster. For Map and Set, the hash variants will be faster than tree/sorted. Hash algorithms tend to have O(1) performance, whereas trees will be O(log n).
A: I'll assume you know the difference between a List, Set and Map from the above answers. Why you would choose between their implementing classes is another thing. For example:
List:
*ArrayList is quick on retrieving, but slow on inserting. It's good for an implementation that reads a lot but doesn't insert/remove a lot. It keeps its data in one contiguous block of memory, so every time it needs to expand, it copies the whole array.
*LinkedList is slow on retrieving, but quick on inserting. It's good for an implementation that inserts/removes a lot but doesn't read a lot. It doesn't keep the entire array in one contiguous block of memory.
Set:
*HashSet doesn't guarantee the order of iteration, and is therefore the fastest of the sets. It has high overhead and is slower than ArrayList, so you shouldn't use it except for a large amount of data when its hashing speed becomes a factor.
*TreeSet keeps the data ordered, and is therefore slower than HashSet.
Map: The performance and behavior of HashMap and TreeMap are parallel to the Set implementations. Vector and Hashtable should not be used. They are synchronized implementations from before the release of the new Collection hierarchy, and thus slow. If synchronization is needed, use Collections.synchronizedCollection().
A: Lists allow duplicate items, while Sets allow only one instance.
I'll use a Map whenever I need to perform a lookup. For the specific implementations, there are order-preserving variations of Maps and Sets but largely it comes down to speed. I'll tend to use ArrayList for reasonably small Lists and HashSet for reasonably small sets, but there are many implementations (including any that you write yourself). HashMap is pretty common for Maps. Anything more than 'reasonably small' and you have to start worrying about memory so that'll be way more specific algorithmically. This page has lots of animated images along with sample code testing LinkedList vs. ArrayList if you're interested in hard numbers. EDIT: I hope the following links demonstrate how these things are really just items in a toolbox, you just have to think about what your needs are: See Commons-Collections versions of Map, List and Set.
A: As suggested in other answers, there are different scenarios for choosing the correct collection depending on the use case. I am listing a few points:
ArrayList:
*Most cases where you just need to store a "bunch of things" and later iterate through them. Iterating is faster as it's index-based.
*Whenever you create an ArrayList, a fixed amount of memory is allocated to it and once exceeded, it copies the whole array.
LinkedList:
*It uses a doubly linked list, so insertion and deletion operations will be fast as they only add or remove a node.
*Retrieving is slow as it will have to iterate through the nodes.
HashSet:
*Making yes-no decisions about an item, e.g. "is the item a word of English", "is the item in the database?", "is the item in this category?" etc.
*Remembering "which items you've already processed", e.g. when doing a web crawl.
HashMap:
*Used in cases where you need to say "for a given X, what is the Y"? It is often useful for implementing in-memory caches or indexes, i.e. key/value pairs. For example: for a given user ID, what is their cached name/User object?
*Always go with HashMap to perform a lookup.
Vector and Hashtable are synchronized and therefore a bit slower; if synchronization is needed, use Collections.synchronizedCollection(). Check this for sorted collections. Hope this helped.
A: Well, it depends on what you need. The general guidelines are: List is a collection where data is kept in the order of insertion and each element has an index. Set is a bag of elements without duplication (if you reinsert the same element, it won't be added). Data doesn't have the notion of order. Map: you access and write your data elements by their key, which could be any possible object. Attribution: https://stackoverflow.com/a/21974362/2811258 For more information about Java Collections, check out this article.
A: I've always made those decisions on a case by case basis, depending on the use case, such as:
*Do I need the ordering to remain?
*Will I have null keys/values? Dups?
*Will it be accessed by multiple threads?
*Do I need a key/value pair?
*Will I need random access?
And then I break out my handy 5th edition Java in a Nutshell and compare the ~20 or so options. It has nice little tables in Chapter five to help one figure out what is appropriate. Ok, maybe if I know off the cuff that a simple ArrayList or HashSet will do the trick I won't look it all up. ;) but if there is anything remotely complex about my intended use, you bet I'm in the book. BTW, I thought Vector is supposed to be 'old hat'--I've not used one in years.
A: Theoretically there are useful Big-Oh tradeoffs, but in practice these almost never matter.
In real-world benchmarks, ArrayList out-performs LinkedList even with big lists and with operations like "lots of insertions near the front." Academics ignore the fact that real algorithms have constant factors that can overwhelm the asymptotic curve. For example, linked lists require an additional object allocation for every node, meaning slower node creation and vastly worse memory-access characteristics. My rule is:
*Always start with ArrayList and HashSet and HashMap (i.e. not LinkedList or TreeMap).
*Type declarations should always be an interface (i.e. List, Set, Map) so if a profiler or code review proves otherwise you can change the implementation without breaking anything.
A: I really like this cheat sheet from Sergiy Kovalchuk's blog entry, but unfortunately it is offline. However, the Wayback Machine has a historical copy: More detailed was Alexander Zagniotov's flowchart, also offline, therefore also a historical copy of the blog: Excerpt from the blog on concerns raised in comments: "This cheat sheet doesn't include rarely used classes like WeakHashMap, LinkedList, etc. because they are designed for very specific or exotic tasks and shouldn't be chosen in 99% cases."
A: I found Bruce Eckel's Thinking in Java to be very helpful. He compares the different collections very well. I used to keep a diagram he published showing the inheritance hierarchy on my cube wall as a quick reference. One thing I suggest you do is keep in mind thread safety. Performance usually means not thread safe.
A: Use Map for key-value pairing For key-value tracking, use a Map implementation. For example, tracking which person is covering which day of the weekend. So we want to map a DayOfWeek object to an Employee object.
Map < DayOfWeek , Employee > weekendWorker = Map.of( DayOfWeek.SATURDAY , alice , DayOfWeek.SUNDAY , bob ) ;
When choosing one of the Map implementations, there are several aspects to consider. These include: concurrency, tolerance for NULL values in key and/or value, order when iterating keys, tracking by reference versus content, and convenience of literals syntax. Here is a chart I made showing the various aspects of each of the ten Map implementations bundled with Java 11.
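As a small illustration of the interface-first rule a couple of answers up: keeping the declaration typed to Map lets you swap implementations as those aspects change, without touching callers (a sketch):
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.ConcurrentHashMap;

// Declare against the interface; pick the implementation per need.
Map<String, Integer> wordCounts = new HashMap<>();             // fast, unordered default
Map<String, Integer> sortedCounts = new TreeMap<>();           // iterates in key order
Map<String, Integer> sharedCounts = new ConcurrentHashMap<>(); // safe across threads

wordCounts.put("foo", 1);
sortedCounts.putAll(wordCounts); // code depending only on Map never notices the swap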
{ "language": "en", "url": "https://stackoverflow.com/questions/48442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "66" }
Q: Scheduling Windows Mobile apps to run How do you schedule a Windows Mobile application to periodically start up to perform some background processing? For example, assume I'm writing an email client and want to check for email every hour, regardless of whether my app is running at the time. The app is a native C/C++ app on Windows Mobile 5.0 or later.
A: the function you need is: CeRunAppAtTime( appname, time ) that isn't the exact signature, there is also CeRunAppAtEvent, they should both be in the MSDN docs (but linking is useless the way MSDN urls always change) The normal way to use these (and RunAppAtTime in the managed world via OpenNETCF.Win32.Notify ) is that for periodic execution, every time your app runs, it will reschedule itself for its next run-time. If your app is running, the new instance should bring up the already running process. If it isn't running, then it is just like starting up normally - from memory it passes some argument to the process so it can tell it is being scheduled and not started some other way.
A: Use CeSetUserNotificationEx instead of CeRunAppAtTime (as this is deprecated).
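For reference, a rough, untested sketch of the reschedule-on-each-run pattern using CeSetUserNotificationEx; the struct field and constant names are from the Windows CE notify headers as I remember them, so verify everything against notify.h in your SDK, and the application path and argument are made up:
#include <windows.h>
#include <notify.h>

/* Re-arm a one-shot notification that launches us again in roughly an hour. */
void ScheduleNextRun(void)
{
    SYSTEMTIME st;
    CE_NOTIFICATION_TRIGGER trigger = {0};

    GetLocalTime(&st);
    st.wHour = (st.wHour + 1) % 24;   /* naive +1 hour; handle day rollover in real code */

    trigger.dwSize = sizeof(trigger);
    trigger.dwType = CNT_TIME;                    /* fire at an absolute time */
    trigger.lpszApplication = TEXT("\\Program Files\\MyMail\\MyMail.exe");
    trigger.lpszArguments = TEXT("/scheduled");   /* so the app can tell why it started */
    trigger.stStartTime = st;

    CeSetUserNotificationEx(NULL, &trigger, NULL); /* NULL handle = create a new notification */
}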
{ "language": "en", "url": "https://stackoverflow.com/questions/48446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Project structure for Google App Engine I started an application in Google App Engine right when it came out, to play with the technology and work on a pet project that I had been thinking about for a long time but never gotten around to starting. The result is BowlSK. However, as it has grown, and features have been added, it has gotten really difficult to keep things organized - mainly due to the fact that this is my first python project, and I didn't know anything about it until I started working. What I have:
*Main Level contains:
    *all .py files (didn't know how to make packages work)
    *all .html templates for main level pages
*Subdirectories:
    *separate folders for css, images, js, etc.
    *folders that hold .html templates for subdirectory-type urls
Example: http://www.bowlsk.com/ maps to HomePage (default package), template at "index.html" http://www.bowlsk.com/games/view-series.html?series=7130 maps to ViewSeriesPage (again, default package), template at "games/view-series.html" It's nasty. How do I restructure? I had 2 ideas:
*Main Folder containing: appdef, indexes, main.py?
    *Subfolder for code. Does this have to be my first package?
    *Subfolder for templates. Folder hierarchy would match package hierarchy
    *Individual subfolders for css, images, js, etc.
*Main Folder containing appdef, indexes, main.py?
    *Subfolder for code + templates. This way I have the handler class right next to the template, because in this stage, I'm adding lots of features, so modifications to one mean modifications to the other. Again, do I have to have this folder name be the first package name for my classes? I'd like the folder to be "src", but I don't want my classes to be "src.WhateverPage"
Is there a best practice? With Django 1.0 on the horizon, is there something I can do now to improve my ability to integrate with it when it becomes the official GAE templating engine? I would simply start trying these things, and seeing which seems better, but pyDev's refactoring support doesn't seem to handle package moves very well, so it will likely be a non-trivial task to get all of this working again.
A: I think the first option is considered the best practice. And make the code folder your first package. The Rietveld project developed by Guido van Rossum is a very good model to learn from. Have a look at it: http://code.google.com/p/rietveld With regard to Django 1.0, I suggest you start using the Django trunk code instead of the GAE built-in django port. Again, have a look at how it's done in Rietveld.
A: I like webpy so I've adopted it as the templating framework on Google App Engine. My package folders are typically organized like this:
app.yaml
application.py
index.yaml
/app
/config
/controllers
/db
/lib
/models
/static
/docs
/images
/javascripts
/stylesheets
test/
utility/
views/
Here is an example.
A: My usual layout looks something like this:
*app.yaml
*index.yaml
*request.py - contains the basic WSGI app
*lib
    *__init__.py - common functionality, including a request handler base class
    *controllers - contains all the handlers. request.py imports these.
*templates
    *all the django templates, used by the controllers
*model
    *all the datastore model classes
*static
    *static files (css, images, etc). Mapped to /static by app.yaml
I can provide examples of what my app.yaml, request.py, lib/init.py, and sample controllers look like, if this isn't clear.
A: I implemented a google app engine boilerplate today and checked it on github.
This is along the lines described by Nick Johnson above (who used to work for Google). Follow this link: gae-boilerplate
A: First, I would suggest you have a look at "Rapid Development with Python, Django, and Google App Engine". GvR describes a general/standard project layout on page 10 of his slide presentation. Here I'll post a slightly modified version of the layout/structure from that page. I pretty much follow this pattern myself. You also mentioned you had trouble with packages. Just make sure each of your subfolders has an __init__.py file. It's OK if it's empty.
Boilerplate files
*These hardly vary between projects
*app.yaml: direct all non-static requests to main.py
*main.py: initialize app and send it all requests
Project lay-out
*static/*: static files; served directly by App Engine
*myapp/*.py: app-specific python code
    *views.py, models.py, tests.py, __init__.py, and more
*templates/*.html: templates (or myapp/templates/*.html)
Here are some code examples that may help as well:
main.py
import wsgiref.handlers
from google.appengine.ext import webapp
from myapp.views import *

application = webapp.WSGIApplication([
    ('/', IndexHandler),
    ('/foo', FooHandler)
], debug=True)

def main():
    wsgiref.handlers.CGIHandler().run(application)

myapp/views.py
import os
import datetime
import logging
import time

from google.appengine.api import urlfetch
from google.appengine.ext.webapp import template
from google.appengine.api import users
from google.appengine.ext import webapp
from models import *

class IndexHandler(webapp.RequestHandler):
    def get(self):
        data = "foo"
        # Do some processing
        template_values = {'data': data}
        path = os.path.join(os.path.dirname(__file__) + '/../templates/', 'main.html')
        self.response.out.write(template.render(path, template_values))

class FooHandler(webapp.RequestHandler):
    def get(self):
        #logging.debug("start of handler")
        pass

myapp/models.py
from google.appengine.ext import db

class SampleModel(db.Model):
    pass  # model properties go here

I think this layout works great for new and relatively small to medium projects. For larger projects I would suggest breaking up the views and models to have their own sub-folders, with something like:
Project lay-out
*static/: static files; served directly by App Engine
    *js/*.js
    *images/*.gif|png|jpg
    *css/*.css
*myapp/: app structure
    *models/*.py
    *views/*.py
    *tests/*.py
    *templates/*.html: templates
A: I am not entirely up to date on the latest best practices, et cetera when it comes to code layout, but when I did my first GAE application, I used something along your second option, where the code and templates are next to each other. There were two reasons for this - one, it kept the code and template nearby, and secondly, I had the directory structure layout mimic that of the website - making it (for me) a bit easier to remember where everything was.
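For completeness, a minimal app.yaml matching the request.py layout a couple of answers up might look like this (the application id is a placeholder, and this is the old-style python runtime syntax of that era):
application: myapp
version: 1
runtime: python
api_version: 1

handlers:
- url: /static
  static_dir: static
- url: /.*
  script: request.py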
{ "language": "en", "url": "https://stackoverflow.com/questions/48458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "119" }
Q: How to disable Visual Studio macro "tip" balloon? Whenever I use a macro in Visual Studio I get an annoying tip balloon in the system tray and an accompanying "pop" sound. It says: Visual Studio .NET macros To stop the macro from running, double-click the spinning cassette. Click here to not show this balloon again. I have trouble clicking the balloon because my macro runs so quickly. Is this controllable by some dialog box option? (I found someone else asking this question on some other site but it's not answered there. I give credit here because I've copied and pasted some pieces from there.) A: Okay, I found a way to make the balloon clickable, and clicking it does indeed stop it from popping up again. (On the other site I referenced in the original question the question asker claims that this is not the case. Though he was in VS2005 and I'm using VS2008.) Anyway, I inserted a pause line in the macro so it would run for long enough for me to click the balloon: System.Threading.Thread.Sleep(2000) It would still be nice to know if there's a dialog somewhere for turning this back on, in case I have a crazy change of heart. A: This will disable the pop up: For Visual Studio 2008: HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\8.0 DWORD DontShowMacrosBalloon=6 For Visual Studio 2010 (the DWORD won't be there by default, use New | DWORD value to create it): HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\10.0 DWORD DontShowMacrosBalloon=6 Delete the same key to re-enable it.
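If you'd rather not click around in regedit, the same setting can be applied from a .reg file (the VS2010 path from the answer above is shown; adjust the version number for your install):
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\10.0]
"DontShowMacrosBalloon"=dword:00000006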
{ "language": "en", "url": "https://stackoverflow.com/questions/48470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I position one image on top of another in HTML? I'm a beginner at rails programming, attempting to show many images on a page. Some images are to lie on top of others. To make it simple, say I want a blue square, with a red square in the upper right corner of the blue square (but not tight in the corner). I am trying to avoid compositing (with ImageMagick and similar) due to performance issues. I just want to position overlapping images relative to one another. As a more difficult example, imagine an odometer placed inside a larger image. For six digits, I would need to composite a million different images, or do it all on the fly, where all that is needed is to place the six images on top of the other one.
A: This is a barebones look at what I've done to float one image over another.
img { position: absolute; top: 25px; left: 25px; }
.imgA1 { z-index: 1; }
.imgB1 { z-index: 3; }
<img class="imgA1" src="https://via.placeholder.com/200/333333">
<img class="imgB1" src="https://via.placeholder.com/100">
Source
A: You can absolutely position pseudo elements relative to their parent element. This gives you two extra layers to play with for every element - so positioning one image on top of another becomes easy - with minimal and semantic markup (no empty divs etc).
markup:
<div class="overlap"></div>
css:
.overlap { width: 100px; height: 100px; position: relative; background-color: blue; }
.overlap:after { content: ''; position: absolute; width: 20px; height: 20px; top: 5px; left: 5px; background-color: red; }
Here's a LIVE DEMO
A: It may be a little late but for this you can do:
HTML
<!-- html -->
<div class="images-wrapper">
    <img src="images/1" alt="image 1" />
    <img src="images/2" alt="image 2" />
    <img src="images/3" alt="image 3" />
    <img src="images/4" alt="image 4" />
</div>
SASS
// In _extra.scss
$maxImagesNumber: 5;
.images-wrapper {
    img { position: absolute; padding: 5px; border: solid black 1px; }
    @for $i from $maxImagesNumber through 1 {
        :nth-child(#{ $i }) { z-index: #{ $maxImagesNumber - ($i - 1) }; left: #{ ($i - 1) * 30 }px; }
    }
}
A: Inline style only for clarity here. Use a real CSS stylesheet.
<!-- First, your background image is a DIV with a background image style applied, not an IMG tag. -->
<div style="background-image:url(YourBackgroundImage);">
    <!-- Second, create a placeholder div to assist in positioning the other images. This is relative to the background div. -->
    <div style="position: relative; left: 0; top: 0;">
        <!-- Now you can place your IMG tags, and position them relative to the container we just made -->
        <img src="YourForegroundImage" style="position: relative; top: 0; left: 0;"/>
    </div>
</div>
A: Ok, after some time, here's what I landed on:
.parent { position: relative; top: 0; left: 0; }
.image1 { position: relative; top: 0; left: 0; border: 1px red solid; }
.image2 { position: absolute; top: 30px; left: 30px; border: 1px green solid; }
<div class="parent">
    <img class="image1" src="https://via.placeholder.com/50" />
    <img class="image2" src="https://via.placeholder.com/100" />
</div>
As the simplest solution. That is: Create a relative div that is placed in the flow of the page; place the base image first as relative so that the div knows how big it should be; place the overlays as absolutes relative to the upper left of the first image. The trick is to get the relatives and absolutes correct.
A: The easy way to do it is to use background-image, then just put an <img> in that element. The other way to do it is using CSS layers.
There is a ton of resources available to help you with this; just search for CSS layers.
A: Here's code that may give you ideas:
<style>
.containerdiv { float: left; position: relative; }
.cornerimage { position: absolute; top: 0; right: 0; }
</style>
<div class="containerdiv">
    <img border="0" src="https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png" alt="">
    <img class="cornerimage" border="0" src="http://www.gravatar.com/avatar/" alt="">
</div>
JSFiddle
I suspect that Espo's solution may be inconvenient because it requires you to position both images absolutely. You may want the first one to position itself in the flow. Usually, there is a natural way to do that in CSS. You put position: relative on the container element, and then absolutely position children inside it. Unfortunately, you cannot put one image inside another. That's why I needed a container div. Notice that I made it a float to make it autofit to its contents. Making it display: inline-block should theoretically work as well, but browser support is poor there. EDIT: I deleted the size attributes from the images to illustrate my point better. If you want the container image to have its default size and you don't know the size beforehand, you cannot use the background trick. If you do, it is a better way to go.
A: You could use CSS-Grid, which is a very convenient solution if you want to stack, overlap content. First you need to define your grid. Inside that grid, you "tell" your img-tags where to be placed within that grid. If you define them to start at the same grid position, they will be overlapped. In the following example two images are overlapped, one is below them.
<div style="display: grid; grid-template-columns: [first-col] 100%; grid-template-rows: [first-row] 300px">
    <img src="https://fakeimg.pl/300/" style="grid-column-start: first-col; grid-row-start: first-row">
    <img src="https://fakeimg.pl/300/" style="grid-column-start: first-col; grid-row-start: first-row">
    <img src="https://fakeimg.pl/300/">
</div>
You can find a very good explanation of CSS-Grid here.
A: One issue I noticed that could cause errors is that in rrichter's answer, the code below:
<img src="b.jpg" style="position: absolute; top: 30; left: 70;"/>
should include the px units within the style, e.g.
<img src="b.jpg" style="position: absolute; top: 30px; left: 70px;"/>
Other than that, the answer worked fine. Thanks.
A: Set background-size to cover. Then wrap your div with another div, and set max-width on the parent div.
<div style="max-width:100px">
    <div style="background-image:url('/image.png'); background-size: cover; height:100px; width:100px; "></div>
</div>
A: Here is a solution that worked for me. Assuming all the images to be stacked are inside a div container, all you need to do is to set the display property of the div to flex. Don't set any position for the first image, but for every other image, set the position property to absolute. Finally, use z-index to control the layers. You can set the first image's z-index to 1, the second image's z-index to 2, and so on (in my own case, I set the z-index of every other image apart from the first image to 2). If you want to center the images, you can set the justify-content property of the div to center to align the images horizontally to the center and adjust the top property to align the images vertically to the center.
The values you use for the justify-content and top properties depend on the size of your images and whether the sizes are responsive on different devices or not. Here's my example:
img { border: 2px solid red; }
.img1n2 { display: flex; justify-content: center; }
.img1 { z-index: 1; }
.img2 { position: absolute; z-index: 2; top: 52.5%; }
<div class="img1n2">
    <img class="img1" src="https://fakeimg.pl/400/">
    <img class="img2" src="https://fakeimg.pl/300/" width="100">
    <img class="img2" src="https://fakeimg.pl/200/" width="50">
    <img class="img2" src="https://fakeimg.pl/50/" width="30">
</div>
You can actually stack a thousand or a million images with this method. You can play around with the CSS to suit your specific needs. Happy coding!
A: @buti-oxa: Not to be pedantic, but your code is invalid. The HTML width and height attributes do not allow for units; you're likely thinking of the CSS width: and height: properties. You should also provide a content-type (text/css; see Espo's code) with the <style> tag.
<style type="text/css">
.containerdiv { float: left; position: relative; }
.cornerimage { position: absolute; top: 0; right: 0; }
</style>
<div class="containerdiv">
    <img border="0" src="http://www.gravatar.com/avatar/" alt="" width="100" height="100">
    <img class="cornerimage" border="0" src="http://www.gravatar.com/avatar/" alt="" width="40" height="40">
</div>
Leaving px; in the width and height attributes might cause a rendering engine to balk.
{ "language": "en", "url": "https://stackoverflow.com/questions/48474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "314" }
Q: Database Design for Tagging How would you design a database to support the following tagging features:
*items can have a large number of tags
*searches for all items that are tagged with a given set of tags must be quick (the items must have ALL tags, so it's an AND-search, not an OR-search)
*creating/writing items may be slower to enable quick lookup/reading
Ideally, the lookup of all items that are tagged with (at least) a set of n given tags should be done using a single SQL statement. Since the number of tags to search for as well as the number of tags on any item are unknown and may be high, using JOINs is impractical. Any ideas? Thanks for all the answers so far. If I'm not mistaken, however, the given answers show how to do an OR-search on tags. (Select all items that have one or more of n tags). I am looking for an efficient AND-search. (Select all items that have ALL n tags - and possibly more.)
A: Here's a good article on tagging database schemas: http://howto.philippkeller.com/2005/04/24/Tags-Database-schemas/ along with performance tests: http://howto.philippkeller.com/2005/06/19/Tagsystems-performance-tests/ Note that the conclusions there are very specific to MySQL, which (at least in 2005 at the time that was written) had very poor full text indexing characteristics.
A: You might want to experiment with a not-strictly-database solution like a Java Content Repository implementation (e.g. Apache Jackrabbit) and use a search engine built on top of that like Apache Lucene. This solution with the appropriate caching mechanisms would possibly yield better performance than a home-grown solution. However, I don't really think that in a small or medium-sized application you would require a more sophisticated implementation than the normalized database mentioned in earlier posts. EDIT: with your clarification it seems more compelling to use a JCR-like solution with a search engine. That would greatly simplify your programs in the long run.
A: The easiest method is to create a tags table.
Target_Type -- in case you are tagging multiple tables
Target -- the key to the record being tagged
Tag -- the text of a tag
Querying the data would be something like:
Select distinct target from tags
where tag in ([your list of tags to search for here])
and target_type = [the table you're searching]
UPDATE Based on your requirement to AND the conditions, the query above would turn into something like this
select target
from (
    select target, count(*) cnt
    from tags
    where tag in ([your list of tags to search for here])
    and target_type = [the table you're searching]
    group by target
)
where cnt = [number of tags being searched]
A: About ANDing: It sounds like you are looking for the "relational division" operation. This article covers relational division in a concise and yet comprehensible way. About performance: A bitmap-based approach intuitively sounds like it will suit the situation well. However, I'm not convinced it's a good idea to implement bitmap indexing "manually", like digiguru suggests: It sounds like a complicated situation whenever new tags are added(?) But some DBMSes (including Oracle) offer bitmap indexes which may somehow be of use, because a built-in indexing system does away with the potential complexity of index maintenance; additionally, a DBMS offering bitmap indexes should be able to consider them in a proper way when performing the query plan.
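For concreteness, the usual division-via-counting form of that AND query over a normalized item/tag/item_tag schema looks like this sketch (table and column names assumed):
SELECT it.item_id
FROM item_tag it
JOIN tag t ON t.tag_id = it.tag_id
WHERE t.name IN ('tag1', 'tag2', 'tag3')
GROUP BY it.item_id
HAVING COUNT(DISTINCT t.tag_id) = 3;  -- must equal the number of tags searched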
A: I just wanted to highlight that the article that @Jeff Atwood links to (http://howto.philippkeller.com/2005/04/24/Tags-Database-schemas/) is very thorough (it discusses the merits of 3 different schema approaches) and has a good solution for the AND queries that will usually perform better than what has been mentioned here so far (i.e. it doesn't use a correlated subquery for each term). Also lots of good stuff in the comments. ps - The approach that everyone is talking about here is referred to as the "Toxi" solution in the article.
A: I don't see a problem with a straightforward solution: table for items, table for tags, crosstable for "tagging". Indices on the cross table should be enough optimisation. Selecting appropriate items would be
SELECT * FROM items WHERE id IN
    (SELECT DISTINCT item_id FROM item_tag WHERE
    tag_id = tag1 OR tag_id = tag2 OR ...)
AND tagging would be
SELECT * FROM items WHERE
    EXISTS (SELECT 1 FROM item_tag WHERE id = item_id AND tag_id = tag1)
    AND EXISTS (SELECT 1 FROM item_tag WHERE id = item_id AND tag_id = tag2)
    AND ...
which is, admittedly, not so efficient for a large number of tags. If you maintain tag counts in memory, you could make the query start with the least frequent tags, so the AND sequence would be evaluated quicker. Depending on the expected number of tags to be matched against and the likelihood of matching any single one of them, this could be an OK solution; if you are to match 20 tags, and expect that some random item will match 15 of them, then this would still be heavy on a database.
A: I'd second @Zizzencs suggestion that you might want something that's not totally (R)DB-centric. Somehow, I believe that using plain nvarchar fields to store the tags with some proper caching/indexing might yield faster results. But that's just me. I've implemented tagging systems using 3 tables to represent a Many-to-Many relationship before (Item, Tags, ItemTags), but I suppose you will be dealing with tags in a lot of places, and I can tell you that having 3 tables that must be manipulated/queried simultaneously all the time will definitely make your code more complex. You might want to consider if the added complexity is worth it.
A: You won't be able to avoid joins and still be somewhat normalized. My approach is to have a Tag table.
TagId (PK) | TagName (Indexed)
Then, you have a TagXREFID column in your items table. This TagXREFID column is a FK to a 3rd table, I'll call it TagXREF:
TagXrefID | ItemID | TagId
So, to get all tags for an item would be something like:
SELECT Tags.TagId, Tags.TagName
FROM Tags, TagXref
WHERE TagXref.TagId = Tags.TagId
    AND TagXref.ItemID = @ItemID
And to get all items for a tag, I'd use something like this:
SELECT *
FROM Items, TagXref
WHERE TagXref.TagId IN
    ( SELECT Tags.TagId
      FROM Tags
      WHERE Tags.TagName = @TagName )
    AND Items.ItemId = TagXref.ItemId;
To AND a bunch of tags together, you would need to build the query dynamically: a single row can never satisfy Tags.TagName = @TagName1 AND Tags.TagName = @TagName2 at once, so instead put the tag names in an IN list and require the match count to equal the number of tags, e.g. GROUP BY TagXref.ItemID HAVING COUNT(DISTINCT Tags.TagName) = @TagCount.
A: What I like to do is have a number of tables that represent the raw data, so in this case you'd have
Items (ID pk, Name, <properties>)
Tags (ID pk, Name)
TagItems (TagID fk, ItemID fk)
This works fast for the write times, and keeps everything normalized, but you may also note that for each tag, you'll need to join tables twice for every further tag you want to AND, so it's got slow reads.
A solution to improve reads is to create a caching table on command by setting up a stored procedure that essentially creates a new table that represents the data in a flattened format...
CachedTagItems(ID, Name, <properties>, tag1, tag2, ... tagN)
Then you can consider how often the Tagged Item table needs to be kept up to date. If it's on every insert, then call the stored procedure in a cursor insert event. If it's an hourly task, then set up an hourly job to run it. Now to get really clever in data retrieval, you'll want to create a stored procedure to get data from the tags. Rather than using nested queries in a massive case statement, you want to pass in a single parameter containing a list of tags you want to select from the database, and return a record set of Items. This would be best in binary format, using bitwise operators. In binary format, it is easy to explain. Let's say there are four tags to be assigned to an item. In binary we could represent that as
0000
If all four tags are assigned to an object, the object would look like this...
1111
If just the first two...
1100
Then it's just a case of finding the binary values with the 1s and zeros in the columns you want. Using SQL Server's bitwise operators, you can check that there is a 1 in the first of the columns using very simple queries. Check this link to find out more.
A: To paraphrase what others have said: the trick isn't in the schema, it's in the query. The naive schema of Entities/Labels/Tags is the right way to go. But as you've seen, it's not immediately clear how to perform an AND query with a lot of tags. The best way to optimize that query will be platform-dependent, so I would recommend re-tagging your question with your RDBMS and changing the title to something like "Optimal way to perform AND query on a tagging database". I have a few suggestions for MS SQL, but will refrain in case that's not the platform you're using.
A: A variation on the above answer is to take the tag ids, sort them, combine them as a ^-separated string and hash them. Then simply associate the hash with the item. Each combination of tags produces a new key. To do an AND search, simply re-create the hash with the given tag ids and search. Changing tags on an item will cause the hash to be recreated. Items with the same set of tags share the same hash key.
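To make the bitwise idea above concrete: if the cached table carries a single integer TagMask column instead of one column per tag, the whole AND-search collapses to one comparison. A T-SQL sketch (column and variable names invented):
-- @WantedMask has one bit set for each tag being searched
SELECT ID, Name
FROM CachedTagItems
WHERE TagMask & @WantedMask = @WantedMask;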
{ "language": "en", "url": "https://stackoverflow.com/questions/48475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "181" }
Q: Choosing a desktop database I'm looking for a desktop/embedded database. The two candidates I'm looking at are Microsoft SQL Server CE and Oracle Lite. If anyone's used both of these products, it'd be great if you could compare them. I haven't been able to find any comparisons online. The backend DB is Oracle10g. Update: Clarification, the business need is a client-server app with offline functionality (hence the need for a local data store on the client)
A: If the backend database is Oracle 10g it will probably be easier for you to use Oracle Lite - that way you don't have to use two completely different SQL dialects in the same project. BTW, in my product I use SQLite as the desktop database
A: I also used SQLite as a desktop database. It's lightning quick and doesn't need a separate process or any prior installation. All you need is a library to access the data as part of your code. In light of your clarification I'd evaluate both OracleXE and Oracle 10g Lite before the others. Stick with the same tech; SQL Server and Oracle have some funny disagreements about SQL syntax and datatypes. I imagine you'd get the same issue with SQLite.
A: I'll second the vote for SQLite. I'm not sure what you're trying to accomplish but if you're doing any sort of local storage with syncing, SQLite is a good choice. It has very widespread adoption and a lot of community support.
A: Perhaps I'm not fully understanding the need here. You are developing against 10g, but for your own test/dev environment you want a more lightweight database? Or, are you developing an application that synchs with a 10g database when online, but when offline uses a local store? In both cases, I'd recommend staying with Oracle only because it will simplify your code. In the first case, I'd wonder why you don't have a 10g QA machine somewhere that all the developers can connect to.
A: One advantage you have with SQL Server CE is that it is free and you can use the Sync Framework to synchronize it with any ADO.NET-accessible database. Also, the same SQL CE file is usable from the PC and mobile devices, and if you develop your application using .NET, you can use the same code for the desktop and the mobile device without changes.
A: You might want to look at Oracle XE. I cannot remember all of the differences, but O-Lite didn't fit my project needs. Oracle XE is a very good database for local development. Brad
A: As @Nir mentioned, it's better to have a homogeneous environment. However, if you decide not to use Oracle Lite, I would highly recommend you take a look at Firebird. It's one of the best choices for desktop database scenarios.
{ "language": "en", "url": "https://stackoverflow.com/questions/48486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: keep rsync from removing unfinished source files I have two machines, speed and mass. speed has a fast Internet connection and is running a crawler which downloads a lot of files to disk. mass has a lot of disk space. I want to move the files from speed to mass after they're done downloading. Ideally, I'd just run: $ rsync --remove-source-files speed:/var/crawldir . but I worry that rsync will unlink a source file that hasn't finished downloading yet. (I looked at the source code and I didn't see anything protecting against this.) Any suggestions?
A: How much control do you have over the download process? If you roll your own, you can have the file being downloaded go to a temp directory or have a temporary name until it's finished downloading, and then mv it to the correct name when it's done. If you're using third-party software, then you don't have as much control, but you still might be able to do the temp directory thing.
A: Rsync can exclude files matching certain patterns. Even if you can't modify it to make it download files to a temporary directory, maybe it has a convention of naming the files differently during download (for example: foo.downloading while downloading for a file named foo) and you can use this property to exclude files which are still being downloaded from being copied.
A: If you have control over the crawling process, or it has predictable output, the above solutions (storing in a tempfile until finished, then mv'ing to the completed-downloads place, or ignoring files with a '.downloading' kind of name) might work. If all of that is beyond your control, you can make sure that the file is not opened by any process by doing 'lsof $filename' and checking if there's a result. Clearly if no one has the file open, it's safe to move it over.
A: It seems to me the problem is transferring a file before it's complete, not that you're deleting it. If this is Linux, it's possible for a file to be open by process A while process B unlinks the file. There's no error, but of course A is wasting its time. Therefore, the fact that rsync deletes the source file is not a problem. The problem is that rsync deletes the source file only after it's copied, and if it's still being written to disk you'll have a partial file. How about this: Mount mass as a remote file system (NFS would work) in speed. Then just web-crawl the files directly.
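Note: a hedged shell sketch of the lsof suggestion above (paths and the destination are assumptions; it only skips files that are currently open, so a crawler that closes and reopens files between writes could still race):

#!/bin/sh
# Move only files that no process currently has open.
for f in /var/crawldir/*; do
    if ! lsof "$f" >/dev/null 2>&1; then
        rsync --remove-source-files "$f" mass:/var/crawldir/
    fi
done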
{ "language": "en", "url": "https://stackoverflow.com/questions/48491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "170" }
Q: Firewall - Build or Buy I have a Linux web server farm with about 5 web servers, web traffic is about 20Mbps. We currently have a Barracuda 340 Load Balancer (keep away from this device - piece of crap!) that is acting as a firewall. I want to put in a dedicated firewall and I'd like to know what people's opinions are on building versus buying a dedicated firewall. Main requirements: * *Dynamically block rogue traffic *Dynamically rate limit traffic *Block all ports except 80, 443 *Limit port 22 to a set of IPs *High availability setup Also if we go for the build route, how do we know what level of traffic the system can handle?
A: Definitely build. I help manage an ISP and we have two firewalls built. One is for failover and redundancy. We use a program called pfSense. I couldn't recommend this program more. It has a great web interface for configuring it and we actually run it off a compact flash card.
A: In my current startup, we have used pfSense to replace multiple routers/firewalls, and its throughput rivals that of much more expensive routers. Maybe that is why Cisco is having trouble? :)
A: Related to high availability: OpenBSD can be configured in a failover / HA way for firewalls. See this description. I've heard that they've done demos where such setups performed as well as (if not better than) high-end Cisco gear.
A: As they say - "there is more than one way to skin a cat": Build it yourself, running something like Linux or *BSD. The benefit of this is that it makes it easy to do the dynamic part of your question; it's just a matter of a few well-placed shell/python/perl/whatever scripts. The drawback is that your ceiling traffic rate might not be what it would be on a purpose-built firewall device, although you should still be able to achieve data rates in the 300Mbit/sec range. (You start hitting PCI bus limitations at this point.) This may be high enough that it won't be a problem for you. Buy a dedicated "firewall device" - a possible drawback of doing this is that the "dynamic" part of what you're trying to accomplish is somewhat more difficult - depending on the device, this could be easy (Net::Telnet/Net::SSH come to mind), or not. If you are worried about peak traffic rates, you'll have to carefully check the manufacturer's specifications - several of these devices are prone to the same traffic limitations as "regular" PCs, in that they still run into the PCI bus bandwidth issue, etc. At that point, you might as well roll your own. I guess you could read this more as a "pros and cons" of doing either, if you want. FWIW, we run dual FreeBSD firewalls at my place of employment, and regularly push 40+Mbit/sec with no noticeable load/issues.
A: Over the last 8 years we maintained a small development network with about 20 to 30 machines. We had one computer dedicated to be the firewall. Actually, we never ran into serious problems; we are now replacing it with a dedicated router/firewall solution (though we haven't decided yet which). Reasons for that are: simplicity (the goal is the firewall, not maintaining the Linux box for running it as well), less space and less power consumption.
A: Don't know much about this field, but maybe an Astaro security gateway?
A: I would go for a dedicated firewall product in this scenario. I have used the Checkpoint firewall range of products for many years and I have always found them to be easy to set up and manage, and they have great support.
Using Checkpoint or one of their competitors is a fairly expensive option, especially if you're comparing it to open source software, so it depends on your budget. I've also used Cisco's line of PIX and ASA firewalls. These are also good, but in my opinion are more difficult to manage.
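Note: if you go the build route with pf (the packet filter behind the pfSense and OpenBSD setups mentioned above), a hedged pf.conf sketch covering the stated port requirements (the interface name and admin IPs are assumptions):

ext_if = "em0"
admin_hosts = "{ 192.0.2.10, 192.0.2.11 }"

block in on $ext_if all                                               # default deny
pass in on $ext_if proto tcp to port { 80, 443 } keep state           # public web traffic
pass in on $ext_if proto tcp from $admin_hosts to port 22 keep state  # SSH for admins only

The dynamic blocking/rate-limiting requirements can be layered on with pf's max-src-conn-rate state option and an overload table that traps offending source addresses.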
{ "language": "en", "url": "https://stackoverflow.com/questions/48494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to teach a crash course on C++? In a few weeks, we'll be teaching a crash course on C++ for Java programmers straight out of college. They have little or no experience yet with C or C++. Previous editions of this course were just 1 or 2 half-day sessions and covered topics including: * *new language features, e.g. * *header vs. implementation *pointers and references *memory management *operator overloading *templates *the standard libraries, e.g. * *the C library headers *basic iostreams *basic STL *using libraries (headers, linking) *they'll be using Linux, so * *Basic Linux console commands *GCC and how to interpret its error messages *Makefiles and Autotools *basic debugger commands *any topic they ask about During the course, each person individually writes, compiles, runs, and debugs simple programs using the newly introduced features. Is this the best way to learn? * *Which topics do you consider most crucial? *Which topics should be added or removed? *Which topics just can't be covered adequately in a short time?
A: If they are coming from a Java world, they are used to garbage collection. As such, I'd probably spend a bit of time talking about smart (reference-counted) pointers, and how they compare to garbage collection.
A: If you are going to put a lot of Java programmers straight out of college on production code, I'd say the first thing you should be concerned with is pointers and memory management. Really, those who come directly from managed code rarely have the skills to debug pointer-related exceptions, let alone use pointers correctly, or even understand how their language/tools utilize them. Pointers shape how you think, not just how you write code. The framework and coding practices can be taught as tips and notes along the way. But failing to understand pointers when writing C code is just waiting to shoot yourself in the foot, if not the head.
A: I would like to add that you should make sure to point out where they can find language and API references. In Java, the API and language specification is at your fingertips online at java.sun.com... with C or C++, it's not quite as simple and easy to find reference documentation. Whenever I am doing something in C or C++, that is my biggest problem... trying to find what I need. I usually turn to cplusplus.com, which usually has what I need; otherwise I google for it. If you have a set of references you use (online or in the form of books), list them and tell them what you use each reference for.
A: Memory management (pointers, allocation, etc.), basics of the STL and templates (since the STL uses templates). I think the STL is important since one would otherwise miss the richness of the Java SE class library in C++.
A: I would spend a whole day discussing how to write a good class in C++. Deitel & Deitel may help as a reference. * *When are constructors called? *When are assignment operators called? *When are destructors called? *What's the point of const Foo& a_foo?
A: I can only once again point to Stroustrup and preach: Don't teach the C subset! It's important, but not for beginners! C++ is complex enough as it is, and the standard library classes, especially the STL, are much more important and (at least superficially) easier to understand than the C subset of C++. Same goes for pointers and heap memory allocation, incidentally. Of course they're important, but only after having taught the STL containers.
Another important concept that new students have to get their head around is the concept of different compilation units, the One Definition Rule (because if you don't know it you won't be able to decipher error messages) and headers. This is actually quite a barrier and one that has to be breached early on. Apart from the language features, the most important thing to be taught is how to understand the C++ compiler and how to get help. Getting help (i.e. knowing how to search for the right information) is, in my experience, the single most important thing that has to be taught about C++. I've had quite good experiences with this order of teaching in the past. /EDIT: If you happen to know any German, take a look at http://madrat.net/coding/cpp/skript, part of a very short introduction used in one of my courses.
A: You should take some time on memory management, and especially RAII.
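Note: since RAII closes the list above, a minimal C++ sketch worth showing on day one (standard library only, pre-C++11 style to match the course's era):

#include <cstdio>
#include <stdexcept>

// RAII: the destructor releases the resource automatically, even when an
// exception unwinds the stack - there is no fclose() call to forget.
class File {
public:
    explicit File(const char* path) : f_(std::fopen(path, "r")) {
        if (!f_) throw std::runtime_error("cannot open file");
    }
    ~File() { std::fclose(f_); }
    std::FILE* get() const { return f_; }
private:
    std::FILE* f_;
    File(const File&);            // non-copyable
    File& operator=(const File&); // non-assignable
};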
{ "language": "en", "url": "https://stackoverflow.com/questions/48496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Issue with dojo dijit.form.ValidationTextBox The following XHTML code is not working:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<link rel="stylesheet" type="text/css" href="/dojotoolkit/dijit/themes/tundra/tundra.css" />
<link rel="stylesheet" type="text/css" href="/dojotoolkit/dojo/resources/dojo.css" />
<script type="text/javascript" src="/dojotoolkit/dojo/dojo.js" djConfig="parseOnLoad: true" />
<script type="text/javascript">
dojo.require("dijit.form.ValidationTextBox");
dojo.require("dojo.parser");
</script>
</head>
<body class="nihilo">
<input type="text" dojoType="dijit.form.ValidationTextBox" size="30" />
</body>
</html>
In Firebug I get the following error message: [Exception... "Component returned failure code: 0x80004003 (NS_ERROR_INVALID_POINTER) [nsIDOMNSHTMLElement.innerHTML]" nsresult: "0x80004003 (NS_ERROR_INVALID_POINTER)" location: "JS frame :: http://localhost:21000/dojotoolkit/dojo/dojo.js :: anonymous :: line 319" data: no] http://localhost:21000/dojotoolkit/dojo/dojo.js Line 319 Any idea what is wrong?
A: The problem seems to be the extension of the file... *If I name the file test2.html everything works. *If I name the file test2.xhtml I get the error message. The difference between the two seems to be the Content-Type in the response header from Apache. * *For .html it is Content-Type text/html; charset=ISO-8859-1 *For .xhtml it is Content-Type application/xhtml+xml
A: Where you import dojo.js: <script type="text/javascript" src="/dojotoolkit/dojo/dojo.js" djConfig="parseOnLoad: true"/> It should be: <script type="text/javascript" src="/dojotoolkit/dojo/dojo.js" djConfig="parseOnLoad:true"></script> Have fun with dojo, it can do some cool stuff. Brian Gianforcaro
A: The problem is that innerHTML is an unofficial property that is not part of the W3C specifications, and thus may or may not work depending upon the browser, especially when the page is being rendered as an XHTML file rather than an HTML file. See here and here.
A: Well, what is dojo.js doing at line 319?
A: Are you sure you're pointing to the right path in the script tags? I put it up on the web, check it out. The left is a Dojo-parsed input, the right is a regular old input. Link I'm on OS X, using Firefox 3.0.1. I get no errors under Firebug.
A: There are some similar tickets on the dojo trac page: http://trac.dojotoolkit.org/search?q=xhtml+ns_error&noquickjump=1&ticket=on Probably you are facing a bug and you will need to file a new ticket.
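Note: given the Content-Type finding above, one hedged workaround (assuming the host allows .htaccess overrides) is to make Apache serve .xhtml files with the HTML MIME type, so innerHTML behaves as it does for .html pages:

# .htaccess - serve .xhtml as text/html instead of application/xhtml+xml
AddType text/html .xhtml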
{ "language": "en", "url": "https://stackoverflow.com/questions/48497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Citrix Server sort of app - on a Mac? Does anyone know of a similar product to Citrix Server that'll run on the Mac OS? Essentially, I'm looking to allow multiple remote users to log in to the same OSX Server at the same time (with full visual desktop, not SSH). A: OS X's Quartz window server has no remoting abilities, unlike its predecessor. X11 does, but 'native' OS X applications don't use that; of the few Mac apps typically run in X11 (such as GIMP or CrossOver), none are specific to the Mac, so you might as well run them on a different OS. That said, if all you want is to visually remote-control a session, it is possible to use VNC or a derivative, such as Apple Remote Desktop. Since 10.4, this allows for multiple simultaneous sessions, as implemented with Vine Server. Remote Desktop also has other abilities such as remotely installing and updating software. (Unlike Citrix and X11, VNC does not send drawing commands over the network; it instead transmits a compressed image representation pixel-per-pixel.) You should specify your exact needs. You will not get a Citrix-like experience where you can run single Mac apps in their own remote session. You will, however, get remote graphical control, and that may be more than enough for you. A: I've never heard of it, but from their blog: Aqua Connect Terminal Server uses the VNC (Virtual Network Computing) protocol to send data between Mac OS X and the client application. Now, if someone does know of a non-VNC solution, I'd be happy to hear it. A: Anyone have experience with Aqua Connect? Found them from Google, and they claim the next version works on RDP as well as VNC. Wondering if it's just a nice wrapper around the VNC capabilities @Soeren Kuklau pointed out. Thanks for the link to Vine Server, that's worth investigating. A: John Vasileff, Back to My Mac is a tunnelling / NAT traversal technique that enables the use of any networking (including VNC-based remote control). iChat screen sharing, Finder Screen Sharing and Remote Desktop all use VNC. Apple does not offer any non-VNC solutions. A: Citrix XenDesktop iPhone Demo
{ "language": "en", "url": "https://stackoverflow.com/questions/48505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How can you databind a single object in .NET? I would like to use a component that exposes the datasource property, but instead of supplying the datasource with a whole list of objects, I would like to use only a single object. Is there any way to do this? The mentioned component is DevExpress.XtraDataLayout.DataLayoutControl - this is fairly irrelevant to the question though.
A: Databinding expects an IEnumerable object, because it enumerates over it just like a foreach loop does. So to do this, just wrap your single object in an IEnumerable. Even this would work:
DataBindObject.DataSource = new List<YourObject> { YourObjectInstance };
A: In ASP.NET 2.0 you can use the generic collections to make this single object a list with only one object in it that you can databind to any server control using the ObjectDataSource, e.g.
List<clsScannedDriverLicense> DriverLicenses = new List<clsScannedDriverLicense>();
//this creates a generic collection for you that you can return from
//your BLL to the ObjectDataSource
DriverLicenses.Add(TheOneObjectThatYouHaveofType_clsDriverLicense);
Then your ObjectDataSource would look like that:
<asp:ObjectDataSource ID="odsDL" runat="server" SelectMethod="OrdersByCustomer"
    TypeName="YourBLL.UtilitiesClassName"
    DataObjectTypeName="clsScannedDriverLicense">
</asp:ObjectDataSource>
Source
A: I don't think you have much choice other than using a class that implements IEnumerable<T>. Even if the DataSource property was smart enough to take a scalar object, it would probably convert it internally to a vector. I would however consider using a simple array rather than a List<T>, as this will result in fewer memory allocations. If you don't like the array syntax (and also to increase readability) you could use a helper method:
static class DataSourceHelper
{
    public static T[] ToVector<T>(T scalar)
    {
        return new T[] { scalar };
    }
}
A: I'm after the same thing you are. I've posted a new question, Two-way databinding of a custom templated asp.net control, that has a bit of a lead. See what you can make of it...
A: Using this in my formView:
databoundControl.DataSource = new [] { singleObject };
databoundControl.DataBind();
{ "language": "en", "url": "https://stackoverflow.com/questions/48521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Call Visitors web stat program from PHP I've been looking into different web statistics programs for my site, and one promising one is Visitors. Unfortunately, it's a C program and I don't know how to call it from the web server. I've tried using PHP's shell_exec, but my web host (NFSN) has PHP's safe mode on and it's giving me an error message. Is there a way to execute the program within safe mode? If not, can it work with CGI? If so, how? (I've never used CGI before)
A: Visitors looks like a log analyzer and report generator. It's probably best set up as a cron job to create static HTML pages once a day or so. If you don't have shell access to your hosting account, or some sort of control panel that lets you set up cron jobs, you'll be out of luck.
A: Is there any reason not to just use Google Analytics? It's free, and you don't have to write it yourself. I use it, and it gives you a lot of information. Sorry, I know it's not a "programming" answer ;)
A: I second the answer of Jonathan: this is a log analyzer, meaning that you must feed it the webserver's logfile as input and it generates a summary of it. Given that you are on a shared host, it is improbable that you can access that file, and even if you could access it, it probably contains the entries for all the websites hosted on the given machine (setting up separate logging for each VirtualHost is certainly possible with Apache, but I don't know if it is a common practice). One possible workaround would be for you to write out a logfile from your pages. However, this is rather difficult and can have a severe performance impact (you have to serialize the writes to the logfile, for one, if you don't want to get garbage from time to time). All in all, I would suggest going with an online analytics service, like Google Analytics.
A: As fortune would have it I do have access to the log file for my site. I've been able to generate the HTML page on the server manually - I've just been looking for a way to get it to happen automatically. All I need is to execute a shell command and get the output to display as the page. Sounds like a good job for an intern. =) Call your host and see if you can work out a deal for doing a shell execute.
A: I managed to solve this problem on my own. I put the following lines in a file named visitors.cgi:
#!/bin/sh
printf "Content-type: text/html\n\n"
exec visitors -A /home/logs/access_log
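Note: combining the cron suggestion with the invocation that worked above, a hedged crontab sketch (paths are assumptions) that regenerates a static report nightly instead of running the binary on every hit:

# regenerate the stats page at 03:00 every day
0 3 * * * /usr/local/bin/visitors -A /home/logs/access_log > /home/public/stats.html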
{ "language": "en", "url": "https://stackoverflow.com/questions/48526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I publish an ASP.NET web application using MSBuild? I am trying to publish an ASP.NET MVC web application locally using NAnt and MSBuild. This is what I am using for my NAnt target:
<target name="publish-artifacts-to-build">
  <msbuild project="my-solution.sln" target="Publish">
    <property name="Configuration" value="debug" />
    <property name="OutDir" value="builds\" />
    <arg line="/m:2 /tv:3.5" />
  </msbuild>
</target>
and all I get is this as a response: [msbuild] Skipping unpublishable project. Is it possible to publish web applications via the command line in this way?
A: The "Publish" target you are trying to invoke is for "OneClick" deployment, not for publishing a website... This is why you are getting the seemingly bizarre message. You would want to use the AspNetCompiler task, rather than the MSBuild task. See http://msdn2.microsoft.com/en-us/library/ms164291.aspx for more info on this task. Your "PublishDir" would correspond to the TargetPath property of the task. Source
A: I came up with such a solution; it works great for me:
msbuild /t:ResolveReferences;_WPPCopyWebApplication /p:BuildingProject=true;OutDir=C:\Temp\build\ Test.csproj
The secret sauce is the _WPPCopyWebApplication target.
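Note: for the AspNetCompiler route mentioned in the first answer, a hedged sketch of a target you could add to the project or build file (the virtual path and output paths are assumptions):

<Target Name="PrecompileWeb">
  <AspNetCompiler
      VirtualPath="/MyProject"
      PhysicalPath="$(MSBuildProjectDirectory)"
      TargetPath="$(MSBuildProjectDirectory)\builds"
      Force="true"
      Debug="false" />
</Target>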
{ "language": "en", "url": "https://stackoverflow.com/questions/48550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Best way to compress HTML, CSS & JS with mod_deflate and mod_gzip disabled I have a few sites on a shared host that is running Apache 2. I would like to compress the HTML, CSS and Javascript that is delivered to the browser. The host has disabled mod_deflate and mod_gzip, so these options are out. I do have PHP 5 at my disposal, though, so I could use the gzip component of that. I am currently placing the following in my .htaccess file: php_value output_handler ob_gzhandler However, this only compresses the HTML and leaves out the CSS and JS. Is there a reliable way of transparently compressing the output of the CSS and JS without having to change every page? I have searched Google and a number of solutions are presented, but I've yet to get one to work. If anyone could suggest a solution that they know to work, that would be very gratefully received. Note, Method 2 in The Definitive Post on Gzipping your CSS looks like a good solution, but I couldn't get it working. Has anyone else succeeded using this method?
A: Sorry about the delay - it's a busy week for me. Assumptions: * *.htaccess is in the same directory as compress.php *static files to be compressed are in the static subdirectory I started my solution by setting the following directives in .htaccess:
RewriteEngine on
RewriteRule ^static/.+\.(js|ico|gif|jpg|jpeg|png|css|swf)$ compress.php [NC]
It's required that your provider allows you to override mod_rewrite options in .htaccess files. Then the compress.php file itself can look like this:
<?php
$basedir = realpath( dirname($_SERVER['SCRIPT_FILENAME']) );
$file = realpath( $basedir . $_SERVER["REQUEST_URI"] );
// 404 if the file does not exist or the resolved path escapes $basedir
if( !file_exists($file) || strpos($file, $basedir) !== 0 ) {
    header("HTTP/1.0 404 Not Found");
    print "File does not exist.";
    exit();
}
$components = split('\.', basename($file));
$extension = strtolower( array_pop($components) );
switch($extension) {
    case 'css':
        $mime = "text/css";
        break;
    default:
        $mime = "text/plain";
}
header( "Content-Type: " . $mime );
// the compression itself comes from the ob_gzhandler output_handler set in .htaccess
readfile($file);
You should of course add more mime types to the switch statement. I didn't want to make the solution dependent on the PECL fileinfo extension or any other magical mime-type-detecting libraries - this is the simplest approach. As for securing the script - I do a translation to a real path in the file system so hacked paths like '../../../etc/passwd' or other shell-script file paths don't go through. That's what the
$basedir = realpath( dirname($_SERVER['SCRIPT_FILENAME']) );
$file = realpath( $basedir . $_SERVER["REQUEST_URI"] );
snippet is for. Although I'm pretty sure most of the paths that are in another hierarchy than $basedir will get handled by Apache before they even reach the script. Also I check that the resulting path is inside the script's directory tree. Add the headers for cache control as pilif suggested and you should have a working solution to your problem.
A: What I do: * *I place scripts in a js dir and stylesheets in a css dir, respectively.
*In the Apache configuration, I add directives like so:
<Directory /data/www/path/to/some/site/js/>
    AddHandler application/x-httpd-php .js
    php_value auto_prepend_file gzip-js.php
    php_flag zlib.output_compression On
</Directory>
<Directory /data/www/path/to/some/site/css/>
    AddHandler application/x-httpd-php .css
    php_value auto_prepend_file gzip-css.php
    php_flag zlib.output_compression On
</Directory>
*gzip-js.php in the js directory looks like this:
<?php header("Content-type: text/javascript; charset=UTF-8"); ?>
*…and gzip-css.php in the css directory looks like this:
<?php header("Content-type: text/css; charset=UTF-8"); ?>
*This may not be the most elegant solution, but it most certainly is a simple one that requires few changes and works well.
A: You can try your luck with mod_rewrite. Create a script that takes a local static file name as input, through e.g. $_SERVER['QUERY_STRING'], and outputs it in compressed form. Many providers don't allow configuring mod_rewrite with .htaccess files or have it completely disabled though. If you haven't used rewrite before, I recommend a good beginner's guide, like probably this one. This way you can make Apache redirect all requests for a static file to a PHP script. style.css will be redirected to compress.php?style.css for instance. As always, be extremely cautious about the input you accept or you have an XSS exploit on your hands!
A: Whatever you do, be careful about caching on the client side: browsers do all sorts of tricks to try and minimize the bandwidth, and there are many ways in the HTTP protocol to do that, all of which are dealt with by Apache - if you are just serving a local file. If you are not, then it's your responsibility. Have a look at least at the ETag and If-Modified-Since mechanics, which are supported by all current browsers and seem to be the most robust way to query the server for updated content. A possible way to serve a CSS file to browsers using the If-Modified-Since header is something like this (the empty headers are there to turn off any non-caching headers PHP sends by default):
$p = 'path/to/css/file';
$i = stat($p);
if ($_SERVER['HTTP_IF_MODIFIED_SINCE']){
    $imd = strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']);
    if ( ($imd > 0) && ($imd >= $i['mtime'])){
        header('HTTP/1.0 304 Not Modified');
        header('Expires:');
        header('Cache-Control:');
        header('Last-Modified: '.date('r', $i['mtime']));
        exit;
    }
}
header('Last-Modified: '.date('r', $i['mtime']));
header('Content-Type: text/css');
header('Content-Length: '.filesize($p));
header('Cache-Control:');
header('Pragma:');
header('Expires:');
readfile($p);
The code will use the If-Modified-Since header the browser sends to check if the actual file on the server has changed since the date the browser has given. If so, the file is sent; otherwise, a 304 Not Modified is returned and the browser does not have to re-download the whole content (and if it's intelligent enough, it keeps the parsed CSS around in memory too). There is another mechanic involving the server sending a unique ETag header for each piece of content. The client will send that back using an If-None-Match header, allowing the server to decide not only on the date of last modification but also on the content itself. This just makes the code more complicated though, so I have left it out. FF, IE and Opera (probably Safari too) all send the If-Modified-Since header when they receive content with a Last-Modified header attached, so this works fine.
Also keep in mind that certain versions of IE (or the JScript-Runtime it uses) still have problems with GZIP-transferred content. Oh. And I know that's not part of the question, but so does Acrobat in some versions. I've had cases and cases of white screens while serving PDFs with gzip transfer encoding. A: Instead of gzipping on the fly when users request the CSS and JavaScript files, you could gzip them ahead of time. As long as Apache serves them with the right headers, you’re golden. For example, on Mac OS X, gzipping a file on the command line is as easy as: gzip -c styles.css > styles-gzip.css Might not be the sort of workflow that works for you though.
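Note: a hedged .htaccess sketch of the "gzip ahead of time" idea above (assumes mod_rewrite and mod_headers are available, and that each file has a pre-compressed sibling such as styles.css.gz):

RewriteEngine On
# Serve foo.css.gz / foo.js.gz when the client accepts gzip and the file exists
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteCond %{REQUEST_FILENAME}.gz -f
RewriteRule ^(.+\.(css|js))$ $1.gz [L]

<FilesMatch "\.css\.gz$">
    ForceType text/css
    Header set Content-Encoding gzip
</FilesMatch>
<FilesMatch "\.js\.gz$">
    ForceType application/x-javascript
    Header set Content-Encoding gzip
</FilesMatch>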
{ "language": "en", "url": "https://stackoverflow.com/questions/48555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How do I implement a pre-commit hook script in SVN that calls dos2unix to validate checked-in files I was wondering if anyone here had some experience writing this type of script and if they could give me some pointers. I would like to modify this script to validate that the checked-in file does not have a carriage return in the EOL formatting. The EOL format is CR LF in Windows and LF in Unix. When a user checks in code with the Windows format, it does not compile in Unix anymore. I know this can be done on the client side but I need to have this validation done on the server side. To achieve this, I need to do the following: 1) Make sure the file I check is not a binary; I don't know how to do this with svnlook, should I check the svn:mime-type property of the file? The Red Book does not indicate this clearly, or I must have not seen it. 2) I would like to run the dos2unix command to validate that the file has the correct EOL format. I would compare the output of the dos2unix command against the original file. If there is a diff between both, I give an error message to the client and cancel the check-in. I would like your comments/feedback on this approach.
A: I think you can avoid a commit hook script in this case by using the svn:eol-style property as described in the SVNBook: * *End-of-Line Character Sequences *Subversion Properties This way SVN can worry about your line endings for you. Good luck!
A: What exactly are you trying to do? Of course, there are numerous places to learn about svn pre-commit hooks (e.g. here, here, and in the Red Book) but it depends what you're trying to do and what is available on your system. Can you be more specific?
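Note: a hedged pre-commit sketch of the plan in the question (the svnlook location and the binary test via svn:mime-type are assumptions; paths containing spaces are not handled):

#!/bin/sh
# pre-commit: reject text files that contain CR line endings.
# Subversion passes $1 = repository path, $2 = transaction id.
REPOS="$1"
TXN="$2"
SVNLOOK=/usr/bin/svnlook
CR=`printf '\r'`

$SVNLOOK changed -t "$TXN" "$REPOS" | awk '$1 != "D" { print $2 }' |
while read path; do
    # Skip files marked binary via the svn:mime-type property.
    mime=`$SVNLOOK propget -t "$TXN" "$REPOS" svn:mime-type "$path" 2>/dev/null`
    case "$mime" in
        application/*|image/*) continue ;;
    esac
    if $SVNLOOK cat -t "$TXN" "$REPOS" "$path" 2>/dev/null | grep "$CR" >/dev/null; then
        echo "Commit blocked: $path contains CR line endings (run dos2unix first)." >&2
        exit 1
    fi
done || exit 1
exit 0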
{ "language": "en", "url": "https://stackoverflow.com/questions/48562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Change user for running windows forms program I wrote a simple Windows Forms program in C#. I want to be able to input a Windows user name and password, and when I click a login button, to run code as the user I've entered as input.
A: You can use the WindowsIdentity.Impersonate method to achieve this. This method allows code to impersonate a different Windows user. Here is a link for more information on this method with a good sample: http://msdn.microsoft.com/en-us/library/system.security.principal.windowsidentity.impersonate.aspx Complete example:
// This sample demonstrates the use of the WindowsIdentity class to impersonate a user.
// IMPORTANT NOTES:
// This sample can be run only on Windows XP. The default Windows 2000 security policy
// prevents this sample from executing properly, and changing the policy to allow
// proper execution presents a security risk.
// This sample requests the user to enter a password on the console screen.
// Because the console window does not support methods allowing the password to be masked,
// it will be visible to anyone viewing the screen.
// The sample is intended to be executed in a .NET Framework 1.1 environment. To execute
// this code in a 1.0 environment you will need to use a duplicate token in the call to the
// WindowsIdentity constructor. See KB article Q319615 for more information.
using System;
using System.Runtime.InteropServices;
using System.Security.Principal;
using System.Security.Permissions;
using System.Windows.Forms;

[assembly:SecurityPermissionAttribute(SecurityAction.RequestMinimum, UnmanagedCode=true)]
[assembly:PermissionSetAttribute(SecurityAction.RequestMinimum, Name = "FullTrust")]
public class ImpersonationDemo
{
    [DllImport("advapi32.dll", SetLastError=true, CharSet = CharSet.Unicode)]
    public static extern bool LogonUser(String lpszUsername, String lpszDomain, String lpszPassword,
        int dwLogonType, int dwLogonProvider, ref IntPtr phToken);

    [DllImport("kernel32.dll", CharSet=System.Runtime.InteropServices.CharSet.Auto)]
    private unsafe static extern int FormatMessage(int dwFlags, ref IntPtr lpSource, int dwMessageId,
        int dwLanguageId, ref String lpBuffer, int nSize, IntPtr *Arguments);

    [DllImport("kernel32.dll", CharSet=CharSet.Auto)]
    public extern static bool CloseHandle(IntPtr handle);

    [DllImport("advapi32.dll", CharSet=CharSet.Auto, SetLastError=true)]
    public extern static bool DuplicateToken(IntPtr ExistingTokenHandle,
        int SECURITY_IMPERSONATION_LEVEL, ref IntPtr DuplicateTokenHandle);

    // Test harness.
    // If you incorporate this code into a DLL, be sure to demand FullTrust.
    [PermissionSetAttribute(SecurityAction.Demand, Name = "FullTrust")]
    public static void Main(string[] args)
    {
        IntPtr tokenHandle = new IntPtr(0);
        IntPtr dupeTokenHandle = new IntPtr(0);
        try
        {
            string userName, domainName;
            // Get the user token for the specified user, domain, and password using the
            // unmanaged LogonUser method.
            // The local machine name can be used for the domain name to impersonate a user on this machine.
            Console.Write("Enter the name of the domain on which to log on: ");
            domainName = Console.ReadLine();
            Console.Write("Enter the login of a user on {0} that you wish to impersonate: ", domainName);
            userName = Console.ReadLine();
            Console.Write("Enter the password for {0}: ", userName);

            const int LOGON32_PROVIDER_DEFAULT = 0;
            // This parameter causes LogonUser to create a primary token.
            const int LOGON32_LOGON_INTERACTIVE = 2;
            tokenHandle = IntPtr.Zero;

            // Call LogonUser to obtain a handle to an access token.
            bool returnValue = LogonUser(userName, domainName, Console.ReadLine(),
                LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, ref tokenHandle);

            Console.WriteLine("LogonUser called.");
            if (false == returnValue)
            {
                int ret = Marshal.GetLastWin32Error();
                Console.WriteLine("LogonUser failed with error code : {0}", ret);
                throw new System.ComponentModel.Win32Exception(ret);
            }
            Console.WriteLine("Did LogonUser Succeed? " + (returnValue ? "Yes" : "No"));
            Console.WriteLine("Value of Windows NT token: " + tokenHandle);

            // Check the identity.
            Console.WriteLine("Before impersonation: " + WindowsIdentity.GetCurrent().Name);
            // Use the token handle returned by LogonUser.
            WindowsIdentity newId = new WindowsIdentity(tokenHandle);
            WindowsImpersonationContext impersonatedUser = newId.Impersonate();

            // Check the identity.
            Console.WriteLine("After impersonation: " + WindowsIdentity.GetCurrent().Name);

            // Stop impersonating the user.
            impersonatedUser.Undo();

            // Check the identity.
            Console.WriteLine("After Undo: " + WindowsIdentity.GetCurrent().Name);

            // Free the tokens.
            if (tokenHandle != IntPtr.Zero)
                CloseHandle(tokenHandle);
        }
        catch (Exception ex)
        {
            Console.WriteLine("Exception occurred. " + ex.Message);
        }
    }
}
A: Impersonate will change the thread context. If you want to change the identity and launch a separate process, you will have to use the runas command. The .NET Developer's Guide to Windows Security by Keith Brown is an excellent read which describes all the security scenarios. An online version is also available.
{ "language": "en", "url": "https://stackoverflow.com/questions/48567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Something like a callback delegate function in php I would like to implement something similar to a C# delegate method in PHP. A quick word to explain what I'm trying to do overall: I am trying to implement some asynchronous functionality. Basically, some resource-intensive calls get queued, cached and dispatched when the underlying system gets around to it. When the asynchronous call finally receives a response, I would like a callback event to be raised. I am having some problems coming up with a mechanism to do callbacks in PHP. I have come up with a method that works for now but I am unhappy with it. Basically, it involves passing a reference to the object and the name of the method on it that will serve as the callback (taking the response as an argument) and then using eval to call the method when need be. This is sub-optimal for a variety of reasons; is there a better way of doing this that anyone knows of?
A: How do you feel about using the Observer pattern? If not, you can implement a true callback this way:
// This function uses a callback function.
function doIt($callback)
{
    $data = "this is my data";
    $callback($data);
}

// This is a sample callback function for doIt().
function myCallback($data)
{
    print 'Data is: ' . $data . "\n";
}

// Call doIt() and pass our sample callback function's name.
doIt('myCallback');
Displays: Data is: this is my data
A: I was wondering if we could use the __invoke magic method to create a "kind of" first-class function and thus implement a callback. It would sound something like this, for PHP 5.3:
interface Callback
{
    public function __invoke();
}

class MyCallback implements Callback
{
    private function sayHello () { echo "Hello"; }
    public function __invoke () { $this->sayHello(); }
}

class MySecondCallback implements Callback
{
    private function sayThere () { echo "World"; }
    public function __invoke () { $this->sayThere(); }
}

class WhatToPrint
{
    protected $callbacks = array();
    public function register (Callback $callback)
    {
        $this->callbacks[] = $callback;
        return $this;
    }
    public function saySomething ()
    {
        foreach ($this->callbacks as $callback) $callback();
    }
}

$first_callback = new MyCallback;
$second_callback = new MySecondCallback;
$wrapper = new WhatToPrint;
$wrapper->register($first_callback)->register($second_callback)->saySomething();
Will print HelloWorld. Hope it'll help ;) But I'd prefer the Controller pattern with SPL for such a feature.
A: (Apart from the Observer pattern) you can also use call_user_func() or call_user_func_array(). If you pass an array(obj, methodname) as the first parameter, it will be invoked as $obj->methodname().
<?php
class Foo
{
    public function bar($x)
    {
        echo $x;
    }
}

function xyz($cb)
{
    $value = rand(1, 100);
    call_user_func($cb, $value);
}

$foo = new Foo;
xyz( array($foo, 'bar') );
?>
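Note: with PHP 5.3 closures (the version the __invoke answer assumes), the asynchronous dispatcher from the question can store callables directly and avoid eval entirely; a hedged sketch with assumed class and method names:

<?php
class Dispatcher
{
    private $callbacks = array();

    // Accepts any callable: a closure, 'functionName', or array($obj, 'method').
    public function register($key, $callback)
    {
        if (!is_callable($callback)) {
            throw new InvalidArgumentException('Callback is not callable');
        }
        $this->callbacks[$key] = $callback;
    }

    // Called when the queued, resource-intensive call finally responds.
    public function dispatch($key, $response)
    {
        call_user_func($this->callbacks[$key], $response);
    }
}

$d = new Dispatcher();
$d->register('job1', function ($response) { echo "Got: $response\n"; });
$d->dispatch('job1', 'slow result');
?>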
{ "language": "en", "url": "https://stackoverflow.com/questions/48570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Troubleshooting a NullReference exception in a service I have a Windows service that runs various system monitoring operations. However, when running SNMP related checks, I always get a NullReference exception. The code runs fine when run through the user interface (under my username and password), but always errors when running as the service. I've tried running the service as different user accounts (including mine), with no luck. I've tried replacing the SNMP monitoring code with calling the PowerShell cmdlet get-snmp (from the /n NetCmdlets), but that yields the same error. The application I'm working with is PolyMon. Any ideas?
A: You can attach a debugger to the running process before triggering the exception. This should give you a better idea what's up with the application.
A: Some ways to debug: * *Is there any additional information in the Windows events log? *I believe you should be able to listen to some kind of global-exception event like Application_Exception in Windows services. I can't remember the exact name, but you can at least dump a stack trace from there. *You should be able to start debugging the project in service mode. Some code snippets/stack trace/information will definitely help.
A: A couple of things we've seen - more about differences between interactive vs. service contexts, but they might help... One thing we've seen (which may not seem relevant) is the difference between what is on the user path vs. the system path. Another thing we've seen relates to temporary files - the service we had was creating lots in the windows\temp directory - we tracked this down when it had created something like 65000 of these files and thus hit the limit of what a directory can hold... Regards, Chris
A: I have tackled these kinds of issues before; if you haven't already found the answer, I suggest the following: * *Enable tracing/logging in all third party apps and libraries you are using such that the errors are logged to files instead of stdout or stderr. Oftentimes, you will find a clue from these. *Your Windows Service may be relying on some Windows networking set-up to be in place before startup. This can be due to environment (PATH, as others have suggested) or due to 'dependencies' on other services. Jay.........
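Note: a hedged sketch of the "global exception event" suggestion in the second answer; for a Windows service the hook is AppDomain.UnhandledException (the log path and service class name are assumptions):

using System;
using System.IO;
using System.ServiceProcess;

public class MonitorService : ServiceBase
{
    protected override void OnStart(string[] args)
    {
        // Dump any unhandled exception (including its stack trace) before the service dies.
        AppDomain.CurrentDomain.UnhandledException += delegate(object s, UnhandledExceptionEventArgs e)
        {
            File.AppendAllText(@"C:\logs\service-crash.log",
                DateTime.Now + ": " + e.ExceptionObject + Environment.NewLine);
        };
        // ... start the monitoring work here ...
    }
}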
{ "language": "en", "url": "https://stackoverflow.com/questions/48574", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: .NET MVC Ambiguous Type Reference Not entirely sure what's going on here; any help would be appreciated. I'm trying to create a new .NET MVC web app. I was pretty sure I had it set up correctly, but I'm getting the following error: The type 'System.Web.Mvc.ViewPage' is ambiguous: it could come from assembly 'C:\MyProject\bin\System.Web.Mvc.DLL' or from assembly 'C:\MyProject\bin\MyProject.DLL'. Please specify the assembly explicitly in the type name. The source error it reports is as follows:
Line 1: <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage" %>
Line 2:
Line 3: <asp:Content ID="indexContent" ContentPlaceHolderID="MainContentPlaceHolder" runat="server">
Anything stand out that I'm doing completely wrong?
A: I suppose you named one of your pages "ViewPage"; is that the case? And like @Jonathan mentioned, this smells: Inherits="System.Web.Mvc.ViewPage" On my MVC application, all the view pages have this instead: Inherits="MySite.Views.Pages.Home" Or something along those lines. Your aspx page markup should have "Inherits" point to your code-behind class name, not the actual class that it is inheriting. The attribute name is rather misleading but it's an artifact of earlier days.
A: Are you using a code-behind file? I don't see a CodeBehind="" attribute where you are specifying the Inherits. Then you have to point Inherits to the class name of the code-behind. Example: <%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" AutoEventWireup="true" CodeBehind="Index.aspx.cs" Inherits="MvcApplication4.Views.Home.Index" %> Make sure the Inherits is fully qualified. It should be the namespace followed by the class name.
A: I'm running ASP.NET MVC Beta and also encountered this error. It came about while I was trying to remove the code-behind file for a view. I removed the "CodeBehind" attribute in the @Page directive and changed the "Inherits" attribute to point to System.Web.Mvc.ViewPage, but left the actual aspx.cs code-behind file untouched in my project. At this point, if I tried to run my project, I got the error you mentioned. I resolved this error by deleting the aspx.cs (code-behind) file for the view from my project.
A: Check your assembly references to System.Web.Mvc in web.config. Mine were explicitly specifying 2.0.0.0, but my project referenced 3.0.0.0.
A: This error usually indicates a class naming conflict. You are referencing two namespaces, or you created a class with the same name in another namespace that you are using. I would start by looking at what that could be.
A: Inherits="System.Web.Mvc.ViewPage" I'd imagine that this should be pointed at your View code-behind file class, not at the base ViewPage class.
A: Open up C:\MyProject\bin\MyProject.DLL in Reflector and look for ViewPage to see if you've defined one by accident.
A: No, this should be a structural error message. Check http://trikks.wordpress.com/2011/08/26/the-type-is-ambiguous-it-could-come-from-assembly-or-from-assembly-please-specify-the-assembly-explicitly-in-the-type-name/
A: I had the same exact error this week. My Webform.aspx file had a generated designer.cs file. In this file the class was actually named "ViewPage". Delete the designer file, or move the class contents to the webform.aspx.cs file, and the error was gone. Just putting it out there in case someone had this error.
{ "language": "en", "url": "https://stackoverflow.com/questions/48578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Why do most system architects insist on first programming to an interface? Almost every Java book I read talks about using the interface as a way to share state and behaviour between objects that when first "constructed" did not seem to share a relationship. However, whenever I see architects design an application, the first thing they do is start programming to an interface. How come? How do you know all the relationships between objects that will occur within that interface? If you already know those relationships, then why not just extend an abstract class?
A: How come? Because that's what all the books say. Like the GoF patterns, many people see it as universally good and don't ever think about whether or not it is really the right design. How do you know all the relationships between objects that will occur within that interface? You don't, and that's a problem. If you already know those relationships, then why not just extend an abstract class? Reasons to not extend an abstract class: * *You have radically different implementations and making a decent base class is too hard. *You need to burn your one and only base class for something else. If neither apply, go ahead and use an abstract class. It will save you a lot of time. Questions you didn't ask: What are the down-sides of using an interface? You cannot change them. Unlike an abstract class, an interface is set in stone. Once you have one in use, extending it will break code, period. Do I really need either? Most of the time, no. Think really hard before you build any object hierarchy. A big problem in languages like Java is that it makes it way too easy to create massive, complicated object hierarchies. Consider the classic example: LameDuck inherits from Duck. Sounds easy, doesn't it? Well, that is until you need to indicate that the duck has been injured and is now lame. Or indicate that the lame duck has been healed and can walk again. Java does not allow you to change an object's type, so using sub-types to indicate lameness doesn't actually work.
A: Programming to an interface means respecting the "contract" created by using that interface This is the single most misunderstood thing about interfaces. There is no way to enforce any such contract with interfaces. Interfaces, by definition, cannot specify any behaviour at all. Classes are where behaviour happens. This mistaken belief is so widespread as to be considered the conventional wisdom by many people. It is, however, wrong. So this statement in the OP Almost every Java book I read talks about using the interface as a way to share state and behavior is just not possible. Interfaces have neither state nor behaviour. They can define properties that implementing classes must provide, but that's as close as they can get. You cannot share behaviour using interfaces. You can make an assumption that people will implement an interface to provide the sort of behaviour implied by the name of its methods, but that's not anything like the same thing. And it places no restrictions at all on when such methods are called (e.g. that Start should be called before Stop). This statement Required for GoF type patterns, such as the visitor pattern is also incorrect. The GoF book uses exactly zero interfaces, as they were not a feature of the languages used at the time. None of the patterns require interfaces, although some can use them.
IMO, the Observer pattern is one in which interfaces can play a more elegant role (although the pattern is normally implemented using events nowadays). In the Visitor pattern it is almost always the case that a base Visitor class implementing default behaviour for each type of visited node is required, IME. Personally, I think the answer to the question is threefold: * *Interfaces are seen by many as a silver bullet (these people usually labour under the "contract" misapprehension, or think that interfaces magically decouple their code) *Java people are very focussed on using frameworks, many of which (rightly) require classes to implement their interfaces *Interfaces were the best way to do some things before generics and annotations (attributes in C#) were introduced. Interfaces are a very useful language feature, but are much abused. Symptoms include: * *An interface is only implemented by one class *A class implements multiple interfaces. Often touted as an advantage of interfaces, usually it means that the class in question is violating the principle of separation of concerns. *There is an inheritance hierarchy of interfaces (often mirrored by a hierarchy of classes). This is the situation you're trying to avoid by using interfaces in the first place. Too much inheritance is a bad thing, both for classes and interfaces. All these things are code smells, IMO.
A: It's one way to promote loose coupling. With low coupling, a change in one module will not require a change in the implementation of another module. A good use of this concept is the Abstract Factory pattern. In the Wikipedia example, the GUIFactory interface produces a Button interface. The concrete factory may be WinFactory (producing WinButton), or OSXFactory (producing OSXButton). Imagine if you were writing a GUI application and you had to go through all instances of the OldButton class and change them to WinButton. Then next year, you need to add an OSXButton version.
A: In my opinion, you see this so often because it is a very good practice that is often applied in the wrong situations. There are many advantages to interfaces relative to abstract classes: * *You can switch implementations w/o re-building code that depends on the interface. This is useful for: proxy classes, dependency injection, AOP, etc. *You can separate the API from the implementation in your code. This can be nice because it makes it obvious when you're changing code that will affect other modules. *It allows developers writing code that is dependent on your code to easily mock your API for testing purposes. You gain the most advantage from interfaces when dealing with modules of code. However, there is no easy rule to determine where module boundaries should be. So this best practice is easy to over-use, especially when first designing some software.
A: Programming to an interface means respecting the "contract" created by using that interface. And so if your IPoweredByMotor interface has a start() method, future classes that implement the interface, be they MotorizedWheelChair, Automobile, or SmoothieMaker, in implementing the methods of that interface, add flexibility to your system, because one piece of code can start the motor of many different types of things, because all that one piece of code needs to know is that they respond to start(). It doesn't matter how they start, just that they must start.
A: Great question. I'll refer you to Josh Bloch in Effective Java, who writes (item 16) why to prefer the use of interfaces over abstract classes.
By the way, if you haven't got this book, I highly recommend it! Here is a summary of what he says: * *Existing classes can be easily retrofitted to implement a new interface. All you need to do is implement the interface and add the required methods. Existing classes cannot be retrofitted easily to extend a new abstract class. *Interfaces are ideal for defining mix-ins. A mix-in interface allows classes to declare additional, optional behavior (for example, Comparable). It allows the optional functionality to be mixed in with the primary functionality. Abstract classes cannot define mix-ins -- a class cannot extend more than one parent. *Interfaces allow for non-hierarchical frameworks. If you have a class that has the functionality of many interfaces, it can implement them all. Without interfaces, you would have to create a bloated class hierarchy with a class for every combination of attributes, resulting in combinatorial explosion. *Interfaces enable safe functionality enhancements. You can create wrapper classes using the Decorator pattern, a robust and flexible design. A wrapper class implements and contains the same interface, forwarding some functionality to existing methods, while adding specialized behavior to other methods. You can't do this with abstract methods - you must use inheritance instead, which is more fragile. What about the advantage of abstract classes providing basic implementation? You can provide an abstract skeletal implementation class with each interface. This combines the virtues of both interfaces and abstract classes. Skeletal implementations provide implementation assistance without imposing the severe constraints that abstract classes force when they serve as type definitions. For example, the Collections Framework defines the type using interfaces, and provides a skeletal implementation for each one. A: Programming to interfaces provides several benefits: * *Required for GoF type patterns, such as the visitor pattern *Allows for alternate implementations. For example, multiple data access object implementations may exist for a single interface that abstracts the database engine in use (AccountDaoMySQL and AccountDaoOracle may both implement AccountDao) *A Class may implement multiple interfaces. Java does not allow multiple inheritance of concrete classes. *Abstracts implementation details. Interfaces may include only public API methods, hiding implementation details. Benefits include a cleanly documented public API and well documented contracts. *Used heavily by modern dependency injection frameworks, such as http://www.springframework.org/. *In Java, interfaces can be used to create dynamic proxies - http://java.sun.com/j2se/1.5.0/docs/api/java/lang/reflect/Proxy.html. This can be used very effectively with frameworks such as Spring to perform Aspect Oriented Programming. Aspects can add very useful functionality to Classes without directly adding java code to those classes. Examples of this functionality include logging, auditing, performance monitoring, transaction demarcation, etc. http://static.springframework.org/spring/docs/2.5.x/reference/aop.html. *Mock implementations, unit testing - When dependent classes are implementations of interfaces, mock classes can be written that also implement those interfaces. The mock classes can be used to facilitate unit testing. A: I would assume (with @eed3s9n) that it's to promote loose coupling. Also, without interfaces unit testing becomes much more difficult, as you can't mock up your objects. 
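Note: to make the "alternate implementations" bullet above concrete, a hedged Java sketch using the DAO names from that list (the Account type and method signature are assumptions):

class Account { /* fields elided */ }

interface AccountDao {
    Account findById(long id);
}

class AccountDaoMySQL implements AccountDao {
    public Account findById(long id) {
        // ... JDBC code against MySQL ...
        return null; // placeholder
    }
}

class AccountDaoOracle implements AccountDao {
    public Account findById(long id) {
        // ... JDBC code against Oracle ...
        return null; // placeholder
    }
}

// Callers depend only on the interface, so the database engine can be swapped
// (or a mock substituted in tests) without touching calling code.
class AccountService {
    private final AccountDao dao;
    AccountService(AccountDao dao) { this.dao = dao; }
    Account lookup(long id) { return dao.findById(id); }
}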
A: I think one of the reasons abstract classes have largely been abandoned by developers might be a misunderstanding. When the Gang of Four wrote:

Program to an interface, not an implementation.

there was no such thing as a Java or C# interface. They were talking about the object-oriented interface concept that every class has. Erich Gamma mentions it in this interview. I think following all the rules and principles mechanically without thinking leads to a code-base that is difficult to read, navigate, understand and maintain. Remember: The simplest thing that could possibly work.
A: Why extends is evil. This article is pretty much a direct answer to the question asked. I can think of almost no case where you would actually need an abstract class, and plenty of situations where it is a bad idea. This does not mean that implementations using abstract classes are bad, but you will have to take care so you do not make the interface contract dependent on artifacts of some specific implementation (case in point: the Stack class in Java). One more thing: it is not necessary, or good practice, to have interfaces everywhere. Typically, you should identify when you need an interface and when you do not. In an ideal world, the second case should be implemented as a final class most of the time.
A: There are some excellent answers here, but if you're looking for a concrete reason, look no further than Unit Testing. Consider that you want to test a method in the business logic that retrieves the current tax rate for the region where a transaction occurs. To do this, the business logic class has to talk to the database via a Repository:

interface IRepository<T> {
    T Get(string key);
}

class TaxRateRepository : IRepository<TaxRate> {
    protected internal TaxRateRepository() {}
    public TaxRate Get(string key) {
        TaxRate obj = null;
        // retrieve the TaxRate from the database here
        return obj;
    }
}

Throughout the code, use the type IRepository<TaxRate> instead of TaxRateRepository. The repository has a non-public constructor to encourage users (developers) to use the factory to instantiate the repository:

public static class RepositoryFactory {
    static RepositoryFactory() {
        TaxRateRepository = new TaxRateRepository();
    }

    public static IRepository<TaxRate> TaxRateRepository { get; private set; }

    public static void SetTaxRateRepository(IRepository<TaxRate> rep) {
        TaxRateRepository = rep;
    }
}

The factory is the only place where the TaxRateRepository class is referenced directly. So you need some supporting classes for this example:

class TaxRate {
    public string Region { get; set; }
    public decimal Rate { get; set; }
}

static class Business {
    public static decimal GetRate(string region) {
        var taxRate = RepositoryFactory.TaxRateRepository.Get(region);
        return taxRate.Rate;
    }
}

And there is also another implementation of IRepository - the mock up:

class MockTaxRateRepository : IRepository<TaxRate> {
    public TaxRate ReturnValue { get; set; }
    public bool GetWasCalled { get; protected set; }
    public string KeyParamValue { get; protected set; }

    public TaxRate Get(string key) {
        GetWasCalled = true;
        KeyParamValue = key;
        return ReturnValue;
    }
}

Because the live code (the Business class) uses a factory to get the repository, in the unit test you plug in the MockTaxRateRepository for the TaxRateRepository. Once the substitution is made, you can hard-code the return value and make the database unnecessary.
[TestFixture]
public class MyUnitTestFixture {
    MockTaxRateRepository rep = new MockTaxRateRepository();

    [SetUp]
    public void ConfigureFixture() {
        RepositoryFactory.SetTaxRateRepository(rep);
    }

    [Test]
    public void Test() {
        var region = "NY.NY.Manhattan";
        var rate = 8.5m;
        rep.ReturnValue = new TaxRate { Rate = rate };
        var r = Business.GetRate(region);
        Assert.IsTrue(rep.GetWasCalled);
        Assert.AreEqual(region, rep.KeyParamValue);
        Assert.AreEqual(rate, r);
    }
}

Remember, you want to test the business logic method only, not the repository, database, connection string, etc... There are different tests for each of those. By doing it this way, you can completely isolate the code that you are testing. A side benefit is that you can also run the unit test without a database connection, which makes it faster and more portable (think multi-developer team in remote locations). Another side benefit is that you can use the Test-Driven Development (TDD) process for the implementation phase of development. I don't strictly use TDD but a mix of TDD and old-school coding.
A: In one sense, I think your question boils down to simply, "why use interfaces and not abstract classes?" Technically, you can achieve loose coupling with both -- the underlying implementation is still not exposed to the calling code, and you can use the Abstract Factory pattern to return an underlying implementation (interface implementation vs. abstract class extension) to increase the flexibility of your design. In fact, you could argue that abstract classes give you slightly more, since they allow you to both require implementations to satisfy your code ("you MUST implement start()") and provide default implementations ("I have a standard paint() you can override if you want to") -- with interfaces, implementations must be provided, which over time can lead to brittle inheritance problems through interface changes. Fundamentally, though, I use interfaces mainly due to Java's single inheritance restriction. If my implementation MUST inherit from an abstract class to be used by calling code, that means I lose the flexibility to inherit from something else even though that may make more sense (e.g. for code reuse or object hierarchy).
A: One reason is that interfaces allow for growth and extensibility. Say, for example, that you have a method that takes an object as a parameter,

public void drink(Coffee someDrink) { }

Now let's say you want to use the exact same method, but pass a HotTea object. Well, you can't. You just hard-coded that method to only use Coffee objects. Maybe that's good, maybe that's bad. The downside of the above is that it strictly locks you in with one type of object when you'd like to pass all sorts of related objects. By using an interface, say IHotDrink,

interface IHotDrink { }

and rewriting your above method to use the interface instead of the object,

public void drink(IHotDrink someDrink) { }

Now you can pass all objects that implement the IHotDrink interface. Sure, you can write the exact same method that does the exact same thing with a different object parameter, but why? You're suddenly maintaining bloated code.
A: It's all about designing before coding. If you don't know all the relationships between two objects after you have specified the interface, then you have done a poor job of defining the interface -- which is relatively easy to fix. If you had dived straight into coding and realised halfway through that you were missing something, it would be a lot harder to fix.
A: You could see this from a Perl/Python/Ruby perspective:
 * 
*when you pass an object as a parameter to a method you don't pass its type, you just know that it must respond to some methods
I think considering Java interfaces as an analogy to that would best explain this. You don't really pass a type, you just pass something that responds to a method (a trait, if you will).
A: I think the main reason to use interfaces in Java is the limitation to single inheritance. In many cases this leads to unnecessary complication and code duplication. Take a look at Traits in Scala: http://www.scala-lang.org/node/126 Traits are a special kind of abstract class, but a class can extend many of them.
{ "language": "en", "url": "https://stackoverflow.com/questions/48605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: How to access controls in a ListView's LayoutTemplate? How do I set a property of a user control in a ListView's LayoutTemplate from the code-behind?

<asp:ListView ...>
    <LayoutTemplate>
        <myprefix:MyControl id="myControl" ... />
    </LayoutTemplate>
    ...
</asp:ListView>

I want to do this:

myControl.SomeProperty = somevalue;

Please notice that my control is not in the ItemTemplate, it is in the LayoutTemplate, so it does not exist for all items, it exists only once. So I should be able to access it once, not for every data-bound item.
A: var control = (MyControl)myListView.FindControl("myControlId");
This will work, but make sure you do it after the data bind or the LayoutTemplate will not have been created yet, which causes an error.
A: To set a property of a control that is inside the LayoutTemplate, simply use the FindControl method on the ListView control.

var control = (MyControl)myListView.FindControl("myControlId");

A: Use the FindControl method on each ListViewItem.

var control = (MyControl)Item.FindControl("yourControlId");

A: This has been answered in this Stack Overflow question: Access a control inside the LayoutTemplate of a ListView
See the comment on the accepted answer by tanathos. I know this was asked over a year ago, but it's one of the first results for the search term I used to get here, so I wanted to leave the answer for anyone else who stumbled upon it.
A: The layout gets created and fires a LayoutCreated event to signal that the layout has been instantiated. Then you can use listview.FindControl to get a reference to that control.
A: In case you need the VB.net version, here it is:

Dim control = CType(myListView.FindControl("myControlId"), MyControl)
{ "language": "en", "url": "https://stackoverflow.com/questions/48616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Maximum table size for a MySQL database What is the maximum size for a MySQL table? Is it 2 million rows at 50GB? 5 million rows at 80GB? At the higher end of the size scale, do I need to think about compressing the data? Or perhaps splitting the table if it grew too big?
A: I once worked with a very large (Terabyte+) MySQL database. The largest table we had was literally over a billion rows. It worked. MySQL processed the data correctly most of the time. It was extremely unwieldy though. Just backing up and storing the data was a challenge. It would take days to restore the table if we needed to. We had numerous tables in the 10-100 million row range. Any significant joins to the tables were too time-consuming and would take forever. So we wrote stored procedures to 'walk' the tables and process joins against ranges of 'id's. In this way we'd process the data 10-100,000 rows at a time (Join against id's 1-100,000 then 100,001-200,000, etc). This was significantly faster than joining against the entire table. Using indexes on very large tables that aren't based on the primary key is also much more difficult. MySQL stores indexes in two pieces -- it stores indexes (other than the primary index) as indexes to the primary key values. So indexed lookups are done in two parts: First MySQL goes to an index and pulls from it the primary key values that it needs to find, then it does a second lookup on the primary key index to find where those values are. The net effect of this is that for very large tables (1-200 million plus rows) indexing against tables is more restrictive. You need fewer, simpler indexes. And doing even simple select statements that are not directly on an index may never come back. Where clauses must hit indexes or forget about it. But all that being said, things did actually work. We were able to use MySQL with these very large tables and do calculations and get answers that were correct.
A: About your first question, the effective maximum size for the database is usually determined by the operating system, specifically the file size MySQL Server will be able to create, not by MySQL Server itself. Those limits play a big role in table size limits. And MyISAM works differently from InnoDB. So any tables will be dependent on those limits. If you use InnoDB you will have more options on manipulating table sizes; resizing the tablespace is an option in this case, so if you plan to resize it, this is the way to go. Have a look at The table is full error page. I am not sure of the actual record limit for each table given all the variables involved (OS, table type, columns, data type and size of each, etc.), and I am not sure if this is easy to calculate, but I've seen simple tables with around 1 billion records in a couple of cases and MySQL didn't give up.
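For what it's worth, the range-walking technique from the first answer does not have to live in stored procedures; the same idea can be driven from application code. Here is a rough Java/JDBC sketch (the table and column names are made up for illustration, and the chunk size would need tuning for a real workload):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ChunkedJoin {
    private static final long CHUNK = 100000L;

    // Walk big_a in id ranges so that each join only touches a bounded slice.
    public static void process(Connection con, long minId, long maxId) throws SQLException {
        String sql = "SELECT a.id, b.value FROM big_a a "
                   + "JOIN big_b b ON b.a_id = a.id "
                   + "WHERE a.id BETWEEN ? AND ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            for (long lo = minId; lo <= maxId; lo += CHUNK) {
                ps.setLong(1, lo);
                ps.setLong(2, Math.min(lo + CHUNK - 1, maxId));
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // process one joined row here
                    }
                }
            }
        }
    }
}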
{ "language": "en", "url": "https://stackoverflow.com/questions/48633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: How do I specify "the word under the cursor" on VIM's commandline? I want to write a command that specifies "the word under the cursor" in VIM. For instance, let's say I have the cursor on a word and I make it appear twice. For instance, if the word is "abc" and I want "abcabc" then I could type:

:s/\(abc\)/\1\1/

But then I'd like to be able to move the cursor to "def" and use the same command to change it to "defdef":

:s/\(def\)/\1\1/

How can I write the command in the commandline so that it does this?

:s/\(*whatever is under the commandline*\)/\1\1

A: <cword> is the word under the cursor (:help <cword>). You can nmap a command to it, or this series of keystrokes for the lazy will work:

b #go to beginning of current word
yw #yank to register

Then, when you are typing in your pattern you can hit <control-r>0<enter> which will paste in your command the contents of the 0-th register. You can also make a command for this like:

:nmap <leader>w :s/\(<c-r>=expand("<cword>")<cr>\)/

which will map hitting '<leader>' followed by 'w' to replace your command line with :s/\(<currentword>\)/
A: yiwP

yiw: Yank inner word (the word under the cursor). This command also moves the cursor to the beginning of the word.
P: Paste before the cursor.

You can then map e.g. <Alt>-D to this command:

:nmap <A-d> yiwP

A: While in command-line mode, CTRL+R CTRL+W will insert the word under the cursor. See the help for c_CTRL-R for a listing of all the other special registers:

:help c_CTRL-R

A: Another easy way to do this is to use the * command. In normal mode, when over a word, type

*:s//\0\0<Enter>

* makes the search pattern the current word (e.g. \<abc\>). :s// does a substitution using the current search pattern, and \0 in the replacement section is the matched string. You can then repeat this behaviour, say over word "def", by either typing the same again, or by typing

*@:

@: just repeats the last ex command, without a need for an <Enter>, in this case the substitution. You can also record a quick macro to do this using the q command

qd*:s//\0\0<Enter>q

Then repeat it to your heart's content by typing @d when over a word you want to double. As this is only one character less than the prior solution, it may not be worth it to you - unless you will be doing other ex-commands between the word-doubling, which would change the behaviour of @:
A: You need to escape the backslashes within the mapping. You can also include the substitution string within the mapping.

:nmap <leader>w :s/\\(<c-r>=expand("<cword>")<cr>\\)/\\1\\1<cr>

A: ywPx will do what you describe. ywPxw will also advance the cursor to the next word.
A: @user11211 has the most straightforward way to duplicate the word under the cursor:

yiwP

yank inner word (moves cursor to start of word), paste (before cursor).
e.g. straigh[t]forward ----> straightforwar[d]straightforward
[] is cursor

To elaborate... You probably want to have the cursor following your duplicated word:

yiwPea

straigh[t]forward ----> straightforwardstraightforward[]

NOTE:
yiw is yank inner word (without whitespace)
yaw is yank a word (including trailing whitespace).
yawPea is therefore duplicate word, including whitespace, and position the cursor.

straigh[t]forward ----> straightforward straightforward[]
A: " count word (case sensitive)
nmap <F4> :%s/\(<c-r>=expand("<cword>")<cr>\)//gn<cr>
{ "language": "en", "url": "https://stackoverflow.com/questions/48642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101" }
Q: Does ScopeGuard use really lead to better code? I came across this article written by Andrei Alexandrescu and Petru Marginean many years ago, which presents and discusses a utility class called ScopeGuard for writing exception-safe code. I'd like to know if coding with these objects truly leads to better code or if it obfuscates error handling, in that perhaps the guard's callback would be better presented in a catch block? Does anyone have any experience using these in actual production code?
A: It definitely improves your code. Your tentatively formulated claim, that it's obscure and that the code would benefit from a catch block, is simply not true in C++ because RAII is an established idiom. Resource handling in C++ is done by resource acquisition, and cleanup is done by implicit destructor calls. On the other hand, explicit catch blocks would bloat the code and introduce subtle errors because the code flow gets much more complex and resource handling has to be done explicitly. RAII (including ScopeGuards) isn't an obscure technique in C++ but firmly established best practice.
A: Yes. If there is one single piece of C++ code that I could recommend every C++ programmer spend 10 minutes learning, it is ScopeGuard (now part of the freely available Loki library). I decided to try using a (slightly modified) version of ScopeGuard for a smallish Win32 GUI program I was working on. Win32, as you may know, has many different types of resources that need to be closed in different ways (e.g. kernel handles are usually closed with CloseHandle(), GDI BeginPaint() needs to be paired with EndPaint(), etc.) I used ScopeGuard with all these resources, and also for allocating working buffers with new (e.g. for character set conversions to/from Unicode). What amazed me was how much shorter the program was. Basically, it's a win-win: your code gets shorter and more robust at the same time. Future code changes can't leak anything. They just can't. How cool is that?
A: I often use it for guarding memory usage, things that need to be freed that were returned from the OS. For example:

DATA_BLOB blobIn, blobOut;
blobIn.pbData = const_cast<BYTE*>(data);
blobIn.cbData = length;

CryptUnprotectData(&blobIn, NULL, NULL, NULL, NULL, CRYPTPROTECT_UI_FORBIDDEN, &blobOut);
Guard guardBlob = guardFn(::LocalFree, blobOut.pbData);
// do stuff with blobOut.pbData

A: I think the above answers lack one important note. As others have pointed out, you can use ScopeGuard in order to free allocated resources independent of failure (exception). But that might not be the only thing you might want to use a scope guard for. In fact, the examples in the linked article use ScopeGuard for a different purpose: transactions. In short, it might be useful if you have multiple objects (even if those objects properly use RAII) that you need to keep in a state that's somehow correlated. If a change of state of any of those objects results in an exception (which, I presume, usually means that its state didn't change) then all changes already applied need to be rolled back. This creates its own set of problems (what if a rollback fails as well?). You could try to roll your own class that manages such correlated objects, but as the number of those increases it would get messy and you would probably fall back to using ScopeGuard internally anyway.
A: Yes.
It was so important in C++ that D even introduced special syntax for it:

void somefunction() {
    writeln("function enter");
    // C++ has similar constructs, but not at the syntax level
    scope(exit) writeln("function exit");
    // do whatever you do, you never miss the function exit output
}

A: I haven't used this particular template but I've used something similar before. Yes, it does lead to clearer code when compared to equally robust code implemented in different ways.
A: I have to say, no, no it does not. The answers here help to demonstrate why it's a genuinely awful idea. Resource handling should be done through re-usable classes. The only thing they've achieved by using a scope guard is to violate DRY up the wazoo and duplicate their resource-freeing code all over their codebase, instead of writing one class to handle the resource and then that's it, for the whole lot. If scope guards have any actual uses, resource handling is not one of them. They're massively inferior to plain RAII in that case, since RAII is deduplicated and automatic and scope guards are manual code duplication or bust.
A: My experience shows that usage of scoped_guard is far inferior to any of the short reusable RAII classes that you can write by hand. Before trying the scoped_guard, I had written RAII classes to
 * 
*set GLcolor or GLwidth back to the original, once I've drawn a shape
*make sure a file was fclose()d once I had fopen()ed it
*reset a mouse pointer to its initial state, after I've changed it to gears/hourglass during the execution of a slow function
*reset the sorting state of a QListView back to its previous state, once I've temporarily finished with altering its QListViewItems -- I did not want the list to reorder itself every time I changed the text of a single item...
using a simple RAII class
Here's how my code looked with my hand-crafted RAII classes:

class scoped_width {
    int m_old_width;
public:
    scoped_width(int w) {
        m_old_width = getGLwidth();
        setGLwidth(w);
    }
    ~scoped_width() {
        setGLwidth(m_old_width);
    }
};

void DrawTriangle(Tria *t)
{
    // GLwidth=1 here
    auto guard = scoped_width(2); // sets GLwidth=2

    draw_line(t->a, t->b);
    draw_line(t->b, t->c);
    draw_line(t->c, t->a);

    setGLwidth(5);
    draw_point(t->a);
    draw_point(t->b);
    draw_point(t->c);
} // scoped_width sets GLwidth back to 1 here

Very simple implementation for scoped_width, and quite reusable. Very simple and readable from the consumer side, also.

using scoped_guard (C++14)
Now, with the scoped_guard, I have to capture the existing value in the introducer ([]) in order to pass it to the guard's callback:

void DrawTriangle(Tria *t)
{
    // GLwidth=1 here
    auto guard = sg::make_scoped_guard([w=getGLwidth()](){ setGLwidth(w); }); // capture current GLwidth in order to set it back

    setGLwidth(2); // sets GLwidth=2

    draw_line(t->a, t->b);
    draw_line(t->b, t->c);
    draw_line(t->c, t->a);

    setGLwidth(5);
    draw_point(t->a);
    draw_point(t->b);
    draw_point(t->c);
} // scoped_guard sets GLwidth back to 1 here

The above doesn't even work in C++11. Not to mention that trying to introduce the state to the lambda this way hurts my eyes.
using scoped_guard (C++11)
In C++11 you have to do this:

void DrawTriangle(Tria *t)
{
    // GLwidth=1 here
    int previous_width = getGLwidth(); // explicitly capture current width
    auto guard = sg::make_scoped_guard([=](){ setGLwidth(previous_width); }); // pass it to lambda in order to set it back

    setGLwidth(2); // sets GLwidth=2

    draw_line(t->a, t->b);
    draw_line(t->b, t->c);
    draw_line(t->c, t->a);

    setGLwidth(5);
    draw_point(t->a);
    draw_point(t->b);
    draw_point(t->c);
} // scoped_guard sets GLwidth back to 1 here

As you can see,
 * 
*the scoped_guard snippet requires
 * 
*3 lines to keep the previous value (state) and set it to a new one, and
*2 stack variables (previous_width and guard) to hold the previous state
*the hand-crafted RAII class requires
 * 
*1 readable line to set the new state and keep the previous one, and
*1 stack variable (guard) to hold the previous state.
Conclusion
I think that examples such as

void some_function() {
    auto guard = sg::make_scoped_guard([](){ cout << "this is printed last"; });
    cout << "this is printed first";
}

are no proof of the usefulness of scoped_guard. I hope that somebody can show me why I don't get the expected gain from scoped_guard. I am convinced that RAII can be exploited better by writing short hand-crafted classes than by using the more generic but hard-to-use scoped_guard.
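For completeness, even the transaction-style use mentioned in an earlier answer fits the hand-crafted approach: a small class that runs a rollback action unless it is explicitly dismissed. A rough C++11 sketch (Account and transfer are made-up names for illustration, and a real guard would also have to ensure the rollback itself cannot throw):

#include <functional>
#include <utility>

// Runs the rollback action on scope exit unless dismiss() was called.
class rollback_guard {
    std::function<void()> m_rollback;
    bool m_active;
public:
    explicit rollback_guard(std::function<void()> f)
        : m_rollback(std::move(f)), m_active(true) {}
    ~rollback_guard() { if (m_active) m_rollback(); } // undo unless dismissed
    void dismiss() { m_active = false; }              // commit: keep the changes
};

struct Account {
    double balance;
    void withdraw(double a) { balance -= a; } // may throw in a real system
    void deposit(double a)  { balance += a; }
};

void transfer(Account& from, Account& to, double amount)
{
    from.withdraw(amount);
    // If to.deposit() throws, the guard puts the money back on the way out.
    rollback_guard undo([&]{ from.deposit(amount); });
    to.deposit(amount);
    undo.dismiss(); // both steps succeeded
}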
{ "language": "en", "url": "https://stackoverflow.com/questions/48647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: How do you send and receive UDP packets in Java on a multihomed machine? I have a machine with VMware installed, which added two extra network interfaces. The OS is Vista. I have two Java applications, one which broadcasts datagrams, and one which receives those datagrams. The problem I'm having is that unless I disable both VMware network interfaces, the receiver can't receive the datagrams. What is the best way to make that work without disabling the interfaces?
A: Look at the alternate constructor for DatagramSocket:

DatagramSocket(int port, InetAddress laddr)
Creates a datagram socket, bound to the specified local address.

I'm guessing you're only specifying the port.
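To make that concrete, here is a short sketch of a receiver bound to one specific interface (the address and port are placeholders; substitute the IP of the physical NIC you actually want to listen on):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class BoundReceiver {
    public static void main(String[] args) throws Exception {
        // Bind to the real NIC's address rather than the wildcard address,
        // so the VMware virtual interfaces are not involved.
        InetAddress localAddr = InetAddress.getByName("192.168.1.10");
        DatagramSocket socket = new DatagramSocket(4445, localAddr);
        byte[] buf = new byte[1024];
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        socket.receive(packet); // blocks until a datagram arrives
        System.out.println(new String(packet.getData(), 0, packet.getLength()));
        socket.close();
    }
}

The broadcasting side can be bound with the same two-argument constructor, which helps ensure its datagrams leave through the intended interface.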
{ "language": "en", "url": "https://stackoverflow.com/questions/48659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }