Q: Replace a database connection for report and all subreports Is there any way to change the datasource location for a report and all of its subreports without having to open each of them manually? A: Here is how I set my connections at runtime. I get the connection info from a config location.

'SET REPORT CONNECTION INFO
For i = 0 To rsource.ReportDocument.DataSourceConnections.Count - 1
    rsource.ReportDocument.DataSourceConnections(i).SetConnection(crystalServer, crystalDB, crystalUser, crystalPassword)
Next
For i = 0 To rsource.ReportDocument.Subreports.Count - 1
    For x = 0 To rsource.ReportDocument.Subreports(i).DataSourceConnections.Count - 1
        rsource.ReportDocument.OpenSubreport(rsource.ReportDocument.Subreports(i).Name).DataSourceConnections(x).SetConnection(crystalServer, crystalDB, crystalUser, crystalPassword)
    Next
Next

A: If you are just doing this as a one-shot deal, my suggestion might not help. But if you change data sources frequently, it might be useful. Disclaimer: I haven't worked with Crystal since version 9.0, so I don't know if they have improved on this. I always used UDL files. Basically, a UDL is a pointer to a data source: set up your report to point to the UDL, and the UDL points to the data source. If the source changes, just update the UDL. This is incredibly useful if you have multiple reports, since you only have to update one file when the server changes. A: Linked sub-reports (at least in CR XI) share the main report's datasource - presumably your report is already configured so that's not an option for you? A: @Unsliced I think the problem he is getting at is that when you take a Crystal report someone developed against another database and bring it up in Crystal Reports XI, you have to do a Change Datasource for each field, including those in subreports. If you just change the source at the top level of the report, it often errors. (I think that is a known issue in Crystal Reports.)
A: I'm guessing you're talking about .rdl files from Reporting Services? (If not, my answer might be wrong) They're basically just XML, so you could load each one of them in and do an XPath query to get the node that contains the datasource and update it.
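A hedged sketch of that XPath idea in Python, using the standard library's xml.etree.ElementTree. The element names here (DataSource, ConnectString) are hypothetical stand-ins for illustration; real .rdl files use a namespaced schema, so you would adjust the search accordingly.

```python
import xml.etree.ElementTree as ET

def update_datasource(rdl_xml, new_conn):
    """Rewrite every ConnectString node in a (simplified) report definition."""
    root = ET.fromstring(rdl_xml)
    for node in root.iter("ConnectString"):  # walks the whole tree, any depth
        node.text = new_conn
    return ET.tostring(root, encoding="unicode")

sample = "<Report><DataSource><ConnectString>old-server</ConnectString></DataSource></Report>"
print(update_datasource(sample, "new-server"))
```

You would loop this over every report file on disk, which is the whole appeal of the approach: no need to open each report in the designer.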
{ "language": "en", "url": "https://stackoverflow.com/questions/40545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Are square brackets permitted in URLs? Are square brackets in URLs allowed? I noticed that Apache commons HttpClient (3.0.1) throws an IOException; wget and Firefox, however, accept square brackets. URL example: http://example.com/path/to/file[3].html My HTTP client encounters such URLs but I'm not sure whether to patch the code or to throw an exception (as it actually should be). A: RFC 3986 states A host identified by an Internet Protocol literal address, version 6 [RFC3513] or later, is distinguished by enclosing the IP literal within square brackets ("[" and "]"). This is the only place where square bracket characters are allowed in the URI syntax. So in theory you should not be seeing such URIs in the wild, as they should arrive encoded. A: Pretty much the only characters not allowed in pathnames are # and ?, as they signify the end of the path. The URI RFC will have the definitive answer: http://www.ietf.org/rfc/rfc1738.txt Unsafe: Characters can be unsafe for a number of reasons. The space character is unsafe because significant spaces may disappear and insignificant spaces may be introduced when URLs are transcribed or typeset or subjected to the treatment of word-processing programs. The characters "<" and ">" are unsafe because they are used as the delimiters around URLs in free text; the quote mark (""") is used to delimit URLs in some systems. The character "#" is unsafe and should always be encoded because it is used in World Wide Web and in other systems to delimit a URL from a fragment/anchor identifier that might follow it. The character "%" is unsafe because it is used for encodings of other characters. Other characters are unsafe because gateways and other transport agents are known to sometimes modify such characters. These characters are "{", "}", "|", "\", "^", "~", "[", "]", and "`". All unsafe characters must always be encoded within a URL. 
For example, the character "#" must be encoded within URLs even in systems that do not normally deal with fragment or anchor identifiers, so that if the URL is copied into another system that does use them, it will not be necessary to change the URL encoding. The answer is that they should be hex encoded, but knowing Postel's law, most things will accept them verbatim. A: Any browser or web-enabled software that accepts URLs and is not throwing an exception when special characters are introduced is almost guaranteed to be encoding the special characters behind the scenes. Curly brackets, square brackets, spaces, etc. all have special encoded ways of representing them so as not to produce conflicts. As per the previous answers, the safest way to deal with these is to URL-encode them before handing them off to something that will try to resolve the URL. A: Square brackets [ and ] in URLs are not often supported. Replace them by %5B and %5D:

* Using a command line, the following example is based on bash and sed:

  url='http://example.com?day=[0-3][0-9]'
  encoded_url="$( sed 's/\[/%5B/g;s/]/%5D/g' <<< "$url")"

* Using Java: URLEncoder.encode(String s, String enc)
* Using PHP: rawurlencode() or urlencode()

  <?php echo '<a href="http://example.com/day/', rawurlencode('[0-3][0-9]'), '">'; ?>

  output: <a href="http://example.com/day/%5B0-3%5D%5B0-9%5D">

  or:

  <?php $query_string = 'day=' . urlencode('[0-3][0-9]') . '&month=' . urlencode('[0-1][0-9]'); echo '<a href="http://example.com?', htmlentities($query_string), '">'; ?>

* Using your favorite programming language... Please extend this answer by posting a comment or directly editing this answer to add the function you use from your programming language ;-)

For more details, see RFC 3986, which specifies the URI syntax. Appendix A gives the grammar; brackets belong to the "gen-delims" set and are to be %-encoded in the query string. 
A: For using the HttpClient commons class, you want to look into the org.apache.commons.httpclient.util.URIUtil class, specifically the encode() method. Use it to URI-encode the URL before trying to fetch it. A: StackOverflow seems to not encode them: https://stackoverflow.com/search?q=square+brackets+[url] A: I know this question is a bit old, but I just wanted to note that PHP uses brackets to pass arrays in a URL. http://www.example.com/foo.php?bar[]=1&bar[]=2&bar[]=3 In this case $_GET['bar'] will contain array(1, 2, 3). A: Best to URL-encode those, as they are clearly not supported in all web servers. Sometimes, even when there is a standard, not everyone follows it. A: According to the URL specification, square brackets are not valid URL characters. Here are the relevant snippets: The "national" and "punctuation" characters do not appear in any productions and therefore may not appear in URLs. national { | } | vline | [ | ] | \ | ^ | ~ punctuation < | > A: Square brackets are considered unsafe, but the majority of browsers will parse them correctly. Having said that, it is better to replace square brackets with some other characters.
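As several answers note, the safe route is percent-encoding. A minimal Python illustration using only the standard library (nothing here is specific to any one HTTP client):

```python
from urllib.parse import quote, unquote

url_path = "/path/to/file[3].html"
encoded = quote(url_path)        # "/" is safe by default; "[" and "]" are not
print(encoded)                   # /path/to/file%5B3%5D.html
assert unquote(encoded) == url_path  # decoding recovers the original path
```

The same brackets-to-%5B/%5D substitution shown in the bash and PHP examples above falls out automatically here, which is why handing the whole path to a library encoder is less fragile than spot-replacing characters.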
{ "language": "en", "url": "https://stackoverflow.com/questions/40568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56" }
Q: Datetime arithmetic with a string in Ruby In Ruby, I'm trying to do the following. def self.stats(since) return Events.find(:all, :select => 'count(*) as this_count', :conditions => ['Date(event_date) >= ?', (Time.now - since)]).first.this_count end where "since" is a string representing an amount of time ('1 hour', '1 day', '3 days') and so on. Any suggestions? A: Try using Chronic to parse the date strings into actual datetime objects. A: I hacked this together with the ActiveSupport gem:

require 'active_support'

def string_to_date(date_string)
  parts = date_string.split
  return parts[0].to_i.send(parts[1])
end

sinces = ['1 hour', '1 day', '3 days']
sinces.each do |since|
  puts "#{since} ago: #{string_to_date(since).ago(Time.now)}"
end

[edit] To answer your question, you might try it like this: :conditions => ['Date(event_date) >= ?', (string_to_date(since).ago(Time.now))] A: I agree with John Millikin. Chronic, or even your own helpers, would be a much lighter and more effective dependency to carry than all of ActiveSupport, assuming you are not already trapped inside Rails.
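The same idea translated to a Python sketch, mapping a human-readable duration string onto a timedelta. The set of accepted unit words is my own assumption, chosen to match the examples in the question:

```python
from datetime import datetime, timedelta

# Unit words we expect, mapped to timedelta keyword arguments (an assumption
# based on the '1 hour' / '3 days' examples in the question).
UNITS = {"hour": "hours", "hours": "hours", "day": "days", "days": "days"}

def since_to_cutoff(since, now=None):
    """Turn a string like '3 days' into the datetime that far in the past."""
    amount, unit = since.split()
    delta = timedelta(**{UNITS[unit]: int(amount)})
    return (now or datetime.now()) - delta

print(since_to_cutoff("3 days", now=datetime(2023, 3, 29)))  # 2023-03-26 00:00:00
```

Like the ActiveSupport hack above, this trades a full natural-language parser (Chronic's job) for a split-and-lookup that only handles the simple "number unit" shape.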
{ "language": "en", "url": "https://stackoverflow.com/questions/40577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: cx_Oracle: how do I get the ORA-xxxxx error number? In a try/except block, how do I extract the Oracle error number? A:

try:
    cursor.execute("select 1 / 0 from dual")
except cx_Oracle.DatabaseError, e:
    error, = e
    print "Code:", error.code
    print "Message:", error.message

This results in the following output:

Code: 1476
Message: ORA-01476: divisor is equal to zero
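Under Python 3 the same unpacking goes through the exception's args tuple. Since a live Oracle connection can't be assumed here, this sketch fakes the driver's error object purely to show the pattern; the _Error and DatabaseError classes below are hypothetical stand-ins, not part of cx_Oracle:

```python
class _Error:
    # Stand-in for the object cx_Oracle stores in DatabaseError.args;
    # the real one exposes .code and .message the same way.
    def __init__(self, code, message):
        self.code = code
        self.message = message

class DatabaseError(Exception):
    pass

def divide_on_dual():
    # Plays the role of cursor.execute("select 1 / 0 from dual")
    raise DatabaseError(_Error(1476, "ORA-01476: divisor is equal to zero"))

try:
    divide_on_dual()
except DatabaseError as e:
    error, = e.args  # Python 3 spelling of the Python 2 "error, = e"
    print("Code:", error.code)
    print("Message:", error.message)
```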
{ "language": "en", "url": "https://stackoverflow.com/questions/40586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: jQuery and Prototype Selector Madness Both the jQuery and Prototype JavaScript libraries refuse to allow me to use a variable to select a list item element by index number, although they accept a hard-coded number. For example, in Prototype this works: $$('li')[5].addClassName('active'); But this will not work no matter how I try to cast the variable as a number or integer: $$('li')[currentPage].addClassName('active'); In jQuery I get similar weirdness. This will work: jQuery('li').eq(5).addClass("active"); But this will not work, even though the value of currentPage is 5 and its type is number: jQuery('li').eq(currentPage).addClass("active"); I'm trying to create a JavaScript pagination system and I need to set the class on the active page button. The list item elements are created dynamically depending upon the number of pages I need. A: Are you certain that currentPage is an integer? Try something like: var currentPage = 5; jQuery('li').eq(currentPage); as a simple sanity check. If that works, you should try casting to Integer. A: Make sure that the currentPage variable is correctly scoped in the code where it is being accessed. Could the variable be changed somewhere else in the code before you are accessing it? Tools like Firebug can help you add a breakpoint at the point of execution and see the value of your variable. A: It looks like I just needed to be more specific in my element selector, although it is weird that a hard-coded number would work. jQuery('#pagination-digg li').eq(currentPage).addClass("active");
{ "language": "en", "url": "https://stackoverflow.com/questions/40590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What kind of problems are state machines good for? What kind of programming problems are state machines most suited for? I have read about parsers being implemented using state machines, but would like to find out about problems that scream out to be implemented as a state machine. A: Stateful protocols such as TCP are often represented as state machines. However, it's rare that you should want to implement anything as a state machine proper. Usually you will use a corruption of one, i.e. have it carrying out a repeated action while sitting in one state, logging data while it transitions, or exchanging data while remaining in one state. A: AI in games is very often implemented using state machines. It helps create discrete logic that is much easier to build and test. A: Objects in games are often represented as state machines. An AI character might be:

* Guarding
* Aggressive
* Patrolling
* Asleep

So you can see these might model some simple but effective states. Of course you could probably make a more complex continuous system. Another example would be a process such as making a purchase on Google Checkout. Google gives a number of states for Financial and Order, and then informs you of transitions such as the credit card clearing or getting rejected, and allows you to inform it that the order has been shipped. A: Regular expression matching, parsing, flow control in a complex system. Regular expressions are a simple form of state machine, specifically finite automata. They have a natural representation as such, although it is possible to implement them using mutually recursive functions. State machines, when implemented well, will be very efficient. There is an excellent state machine compiler for a number of target languages, if you want to make a readable state machine: http://research.cs.queensu.ca/~thurston/ragel/ It also allows you to avoid the dreaded 'goto'. A: The easiest answer is probably that they are suited for practically any problem. 
Don't forget that a computer itself is also a state machine. Regardless of that, state machines are typically used for problems where there is some stream of input and the activity that needs to be done at a given moment depends on the last elements seen in that stream at that point. Examples of this stream of input: some text file in the case of parsing, a string for regular expressions, events such as "player entered room" for game AI, etc. Examples of activities: be ready to read a number (after another number followed by a + has appeared in the input, in a parser for a calculator), turn around (after the player approached and then sneezed), perform a jumping kick (after the player pressed left, left, right, up, up). A: A good resource is this free State Machine EBook. My own quick answer is below. When your logic must contain information about what happened the last time it was run, it must contain state. So a state machine is simply any code that remembers (or acts on) information that can only be gained by understanding what happened before. For instance, I have a cellular modem that my program must use. It has to perform the following steps in order:

* reset the modem
* initiate communications with the modem
* wait for the signal strength to indicate a good connection with a tower
* ...

Now I could block the main program and simply go through all these steps in order, waiting for each to run, but I want to give my user feedback and perform other operations at the same time. So I implement this as a state machine inside a function, and run this function 100 times a second.

enum states {reset, initsend, initresponse, waitonsignal, dial, ppp, ...}
modemfunction()
{
    static currentstate
    switch (currentstate)
    {
    case reset:
        Do reset
        if reset was successful, nextstate = initsend
        else nextstate = reset
        break
    case initsend:
        send "ATD"
        nextstate = initresponse
        break
    ...
    }
    currentstate = nextstate
}

More complex state machines implement protocols. 
For instance, an ECU diagnostics protocol I used can only send 8-byte packets, but sometimes I need to send bigger packets. The ECU is slow, so I need to wait for a response. Ideally when I send a message I use one function and then I don't care what happens, but somewhere my program must monitor the line and send and respond to these messages, breaking them up into smaller pieces and reassembling the pieces of received messages into the final message. A: Workflow (see WF in .NET 3.0) A: They have many uses, parsers being a notable one. I have personally used simplified state machines to implement complex multi-step task dialogs in applications. A: A parser example. I recently wrote a parser that takes a binary stream from another program. The meaning of the current element parsed indicates the size/meaning of the next elements. There are a (small) finite number of elements possible. Hence a state machine. A: They're great for modelling things that change status, and have logic that triggers on each transition. I'd use finite state machines for tracking packages by mail, or to keep track of the different states of a user during the registration process, for example. As the number of possible status values goes up, the number of transitions explodes. State machines help a lot in that case. A: Just as a side note, you can implement state machines with proper tail calls, as I explained in the tail recursion question. In that example each room in the game is considered one state. Also, hardware design with VHDL (and other logic synthesis languages) uses state machines everywhere to describe hardware. A: Any workflow application, especially with asynchronous activities. You have an item in the workflow in a certain state, and the state machine knows how to react to external events by placing the item in a different state, at which point some other activity occurs. 
A: The concept of state is very useful for applications to "remember" the current context of your system and react properly when a new piece of information arrives. Any non-trivial application has that notion embedded in the code through variables and conditionals. So if your application has to react differently every time it receives a new piece of information because of the context you are in, you could model your system with a state machine. An example would be how to interpret the keys on a calculator, which depends on what you are processing at that point in time. On the contrary, if your computation does not depend on the context but solely on the input (like a function adding two numbers), you will not need a state machine (or, better said, you will have a state machine with zero states). Some people design the whole application in terms of state machines, since they capture the essential things to keep in mind in your project, and then use some procedure or autocoders to make them executable. It takes some paradigm change to program in this way, but I found it very effective. A: Things that come to mind are:

* Robot/machine manipulation... those robot arms in factories
* Simulation games (SimCity, racing games, etc.)

Generalizing: when you have a stream of inputs where processing any single input requires knowledge of the previous inputs (that is, it needs to have "states"). Not much that I know of that isn't reducible to a parsing problem, though. A: If you need a simple stochastic process, you might use a Markov chain, which can be represented as a state machine (given the current state, at the next step the chain will be in state X with a certain probability).
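The modem example above can be sketched as a table-driven state machine in Python. The state and event names are simplified stand-ins for the real modem protocol, not actual AT commands:

```python
# Each state maps an event to the next state; the table *is* the machine.
TRANSITIONS = {
    "reset":          {"reset_ok": "init_send", "reset_fail": "reset"},
    "init_send":      {"sent": "init_response"},
    "init_response":  {"ok": "wait_on_signal"},
    "wait_on_signal": {"signal_good": "dial"},
    "dial":           {"connected": "ppp"},
}

def run(events, state="reset"):
    """Feed a stream of events through the machine, one tick at a time."""
    for event in events:
        # An unknown event leaves the machine where it is, like the polling
        # loop in the answer re-entering the same case until something changes.
        state = TRANSITIONS.get(state, {}).get(event, state)
    return state

print(run(["reset_fail", "reset_ok", "sent", "ok", "signal_good"]))  # dial
```

Keeping the transitions in a data table rather than a switch statement makes the machine easy to inspect and test, which is much of what tools like Ragel automate.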
{ "language": "en", "url": "https://stackoverflow.com/questions/40602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: MSI Installer fails without removing a previous install I have built an MSI that I would like to deploy and update frequently. Unfortunately, when you install the MSI and then try to install a newer version of the same MSI, it fails with a message like "Another version of this product is already installed. Installation of this version cannot continue...". The MSI was built with a Visual Studio 2008 Setup Project. I have tried setting the "Remove Previous Versions" property to both true and false, in an effort to just make newer versions overwrite the older install, but nothing has worked. At a previous company I know I did not have this problem with installers built by Wise and Advanced Installer. Is there a setting I am missing? Or is my desired functionality not supported by the VS 2008 Setup Project? A: I have built numerous MSIs with VS 2005 Pro that do this correctly. Are you sure that the 'Version' property of the deployment project has been incremented? This property is independent of the version of the assemblies in the application, and this is the error message you will see if the Version property of the MSI is the same as it was for the one you are trying to overwrite. A: Increment the version number on your project. In VS, select the node for your setup app and hit F4 to view the properties. Find the version field, and increment it. A: You need to change the ProductCode between each version; if you don't do this you will get the behavior you are seeing. The ProductCode is seen in the project properties. Check out the MSDN Online Help for ProductCode to understand better. A: This is a little more complex. To automatically remove previous versions of an installed application in a Setup Project, you need to:

* Increment the Version property (e.g. from 1.0.0 to 1.0.1; a change in the third position also works). Version is the property of the installer project that identifies which version of the application is installed.
* Change the ProductCode property, so the installer knows that it is not the same installer executed twice. ProductCode is a property of the installer project; Visual Studio offers to change it automatically when the Version property is changed.
* Keep the value of the UpgradeCode property. UpgradeCode is also a property of the installer project; it needs to stay the same across the entire "upgrade line", so the installer knows what to upgrade.
* If you also want to remove old application versions from the Control Panel's list of software, set RemovePreviousVersions to true.

A: Had the same problem when going from XP to Win7. To solve it I had to set DetectNewerInstalledVersion to False. Also, as mentioned by others, you need to increment the version of the setup project. Good luck.
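A toy Python model of the decision rules those three properties drive. This is only an illustration of the logic described in the answers, not how Windows Installer is actually implemented:

```python
def install_decision(installed, incoming):
    """installed/incoming: dicts with product_code, upgrade_code, version keys.
    installed is None when no prior version is on the machine."""
    if installed is None:
        return "fresh install"
    if incoming["product_code"] == installed["product_code"]:
        # Same ProductCode: the installer treats it as the same product build,
        # which produces the "Another version ... is already installed" error.
        return "error: another version of this product is already installed"
    if (incoming["upgrade_code"] == installed["upgrade_code"]
            and incoming["version"] > installed["version"]):
        # New ProductCode + shared UpgradeCode + higher Version: an upgrade.
        return "upgrade (remove previous, install new)"
    return "install side by side"

old = {"product_code": "A", "upgrade_code": "U", "version": (1, 0, 0)}
new = {"product_code": "B", "upgrade_code": "U", "version": (1, 0, 1)}
print(install_decision(old, new))  # upgrade (remove previous, install new)
```

Note how the question's symptom falls out of the first branch: bumping Version without letting Visual Studio regenerate the ProductCode leaves both installers with the same code.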
{ "language": "en", "url": "https://stackoverflow.com/questions/40603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Loading JSON with PHP I've been using PHP for too long, but I'm new to JavaScript integration in some places. I'm trying to find the fastest way to pass database information into a page where it can be modified and displayed dynamically in JavaScript. Right now, I'm looking at building the JSON with PHP echo statements because it's fast and effective, but I saw that I could use PHP's JSON library (PHP 5.2). Has anybody tried the new JSON library, and is it better than my earlier method? A: The json_encode and json_decode methods work perfectly. Just pass them an object or an array that you want to encode and they recursively encode it to JSON. Make sure that you give them UTF-8 encoded data! A: The library has worked great for me. FWIW, I needed to do this on a project with an earlier version of PHP lacking JSON support. The function below worked as an admittedly risky version of "json_encode" for arrays of strings (note that addslashes is not real JSON string escaping, so this breaks on control characters and non-ASCII input).

function my_json_encode($row) {
    $json = "{";
    $keys = array_keys($row);
    $i = 1;
    foreach ($keys as $key) {
        if ($i > 1) $json .= ',';
        $json .= '"' . addslashes($key) . '":"' . addslashes($row[$key]) . '"';
        $i++;
    }
    $json .= "}";
    return $json;
}

A: Use the library. If you try to generate it manually, I predict with 99% certainty that the resulting text will be invalid in some way. Especially with more esoteric features like Unicode strings or exponential notation.
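The "use the library" advice is easy to demonstrate in any language. Here is a Python sketch showing where a naive addslashes-style encoder (mirroring the shape of the PHP helper above) diverges from the real library:

```python
import json

def naive_encode(row):
    # Mimics the addslashes-based PHP helper: quote strings, escape quotes only.
    pairs = ('"%s":"%s"' % (k, str(v).replace('"', '\\"')) for k, v in row.items())
    return "{%s}" % ",".join(pairs)

row = {"title": 'He said "hi"\nbye'}
print(naive_encode(row))   # the newline is left raw, which is invalid JSON
print(json.dumps(row))     # the library escapes it as \n

json.loads(json.dumps(row))  # round-trips fine
# json.loads(naive_encode(row)) would raise: invalid control character
```

The failure mode is exactly the one the last answer predicts: the hand-rolled text looks fine until a control character, quote-in-quote, or Unicode string shows up.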
{ "language": "en", "url": "https://stackoverflow.com/questions/40608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Is Flex development without FlexBuilder realistic? Is it realistic to try and learn and code a Flex 3 application without purchasing FlexBuilder? Since the SDK and BlazeDS are open source, it seems technically possible to develop without Flex Builder, but how realistic is it? I would like to test out Flex but don't want to get into a situation where I am dependent on the purchase of FlexBuilder (at least not until I am confident and competent enough with the technology to recommend purchase to my employer). I am experimenting right now, so I'm taking a long time and the trial license on my Windows machine has expired. Also, Linux is my primary development platform and there is only an alpha available for Linux. Most of the documentation I've found seems to use Flex Builder. Maybe I should use Laszlo... A: IntelliJ IDEA works as a Flex IDE, if you happen to also be a Java developer. It's free if you contribute to open source projects. A: Check out FlashDevelop for Windows. I like it better than Flex Builder. A: I've been using Flex since version 2 and Flex 3/BlazeDS since it came out of beta. I also have some experience with Laszlo, and the difference is day and night (Flex rocks!). I have not regretted once using Flex. Regarding FlexBuilder, it is worth every penny. While it is completely possible and reasonable to write Flex applications without FlexBuilder, the productivity gains of using it will more than recoup the investment. Try the evaluation for 30 days and compare it to some of the other options suggested above (I'm going to try FlashDevelop). Some things you get with FlexBuilder include:

* Code completion
* Visual editor
* Debugger (it is fantastic!!)
* Profiler (also very good)

Regarding Linux, the alpha version of FlexBuilder does not have a visual editor. Other than that, I understand it is reasonably feature complete, still free, and many of the Adobe employees I've talked with who use Linux are happy with it. 
A: FlashDevelop is really easy to set up with the Flex SDK. Just download FlashDevelop, then download the Flex SDK. In FlashDevelop go to Tools > Program Options > AS3Context (under Plugins) > set the "Flex SDK Location" to the root of the folder you extracted the SDK to, and build away. FlashDevelop even has a basic MXML project that will get you going. If you use ColdFusion for the backend, having FlexBuilder in Eclipse and CFEclipse can mean one less IDE to have to get familiar with. A: I'm going to join the choir here and say FlashDevelop as an alternative. The only reasons you might want FlexBuilder are:

* Flex charts
* Step-through debugging
* Profiler (I haven't used it)
* Visual style editor

However, the general bloody-awesomeness of FlashDevelop's code-completion and syntax highlighting knocks the gimpy Eclipse crap out of the water. So, pretty much what Todd said, except for the code-completion part. Flex Builder is very flaky in that department. A: Short answer: Yes. I'm working on a team of developers and designers. We code our .MXML and .AS in FlashDevelop 3 and our designer creates .FLA with skins and widgets that get [Import()]ed in ActionScript. I wrote a little more about this subject here: Flash designer/coder collaboration best practices A: I have been using FlashDevelop for a long time (4-5 years), and I am actively using it to develop Flex 4.5 applications. It has built-in support for code completion, and a profiler and a debugger that work excellently. The IDE itself is responsive and requires the .NET Framework. In fact, here, I'll list some stuff.

FlashDevelop pros:

* Free IDE
* Code completion feature
* Very capable debugger
* Profiler
* Documenting
* Ability to build Air / Flex files
* Templating
* Plugins

FlashDevelop cons:

* Lack of UI design support
* .NET support only (won't work with Mono)

Everything else is pretty simple to get running with; the instructions are available at http://www.flashdevelop.org/ A: Absolutely. 
I've been a Flex developer since Flex 2, and until recently I've used my regular editor, TextMate, for coding and Ant for building. TextMate has some good extensions for ActionScript and Flex coding, but I think you could get that for any decent editor. What's been missing from my setup is a usable debugger; the command line version is a pain to work with. Because of that I've been starting to use FlexBuilder on the side, in parallel with my regular setup. Having a profiler doesn't hurt either. A: I've been using FlexBuilder for awhile now and just started to switch to using Eclipse with the Flex SDK. I work for a non-profit, so the word FREE is huge. Initially it is fairly intimidating, so if you have the money you might want FlexBuilder. There is a lot you need to know and do if you use the SDK. The learning and experience may pay off though... I am still undecided myself. A: I second FlashDevelop. You don't get the visual design stuff for the MXML, but for the code (both MXML and AS) it's excellent. A: I also use FlashDevelop when working on AS3 projects. For me, the ugliness (UI design) and sluggishness of Eclipse/Flex Builder is enough of a deterrent to stay away from Flex Builder. In addition to the weaknesses of FlashDevelop pointed out previously, one of my biggest gripes is that it is not a true .NET-only app and therefore will never work in Mono, and therefore cannot be easily ported to the Mac - which is my platform of choice for web/JavaScript/AS3 development. A: Amethyst is also a pretty good option to try. It is a plugin for MS Visual Studio and takes advantage of a lot of the goodies there. It is significantly less sluggish than Flash Builder, has a really good debugger, and a decent visual designer as well. The personal version is free, but quite crippled. You have to buy the pro version after a 60-day free trial. However, it is (at time of writing) almost 1/3 the cost of Flash Builder. 
As an added bonus you don't need to pay for Visual Studio since it works with the free (albeit hard to find) "shell version (integrated)" of Visual Studio. It won't work with any of the free Express editions, though.
{ "language": "en", "url": "https://stackoverflow.com/questions/40622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: I just don't get continuations! What are they and what are they good for? I do not have a CS degree and my background is VB6 -> ASP -> ASP.NET/C#. Can anyone explain it in a clear and concise manner? A: A heads up: this example is neither concise nor exceptionally clear. It is a demonstration of a powerful application of continuations. As a VB/ASP/C# programmer, you may not be familiar with the concept of a system stack or saving state, so the goal of this answer is a demonstration and not an explanation. Continuations are extremely versatile and are a way to save execution state and resume it later. Here is a small example of a cooperative multithreading environment using continuations in Scheme: (Assume that the operations enqueue and dequeue work as expected on a global queue not defined here.)

(define (fork)
  (display "forking\n")
  (call-with-current-continuation
    (lambda (cc)
      (enqueue (lambda () (cc #f)))
      (cc #t))))

(define (context-switch)
  (display "context switching\n")
  (call-with-current-continuation
    (lambda (cc)
      (enqueue (lambda () (cc 'nothing)))
      ((dequeue)))))

(define (end-process)
  (display "ending process\n")
  (let ((proc (dequeue)))
    (if (eq? proc 'queue-empty)
        (display "all processes terminated\n")
        (proc))))

This provides three verbs that a function can use - fork, context-switch, and end-process. The fork operation forks the thread and returns #t in one instance and #f in another. The context-switch operation switches between threads, and end-process terminates a thread. 
Here's an example of their use:

(define (test-cs)
  (display "entering test\n")
  (cond ((fork)
         (cond ((fork)
                (display "process 1\n")
                (context-switch)
                (display "process 1 again\n"))
               (else
                (display "process 2\n")
                (end-process)
                (display "you shouldn't see this (2)"))))
        (else
         (cond ((fork)
                (display "process 3\n")
                (display "process 3 again\n")
                (context-switch))
               (else
                (display "process 4\n")))))
  (context-switch)
  (display "ending process\n")
  (end-process)
  (display "process ended (should only see this once)\n"))

The output should be:

entering test
forking
forking
process 1
context switching
forking
process 3
process 3 again
context switching
process 2
ending process
process 1 again
context switching
process 4
context switching
context switching
ending process
ending process
ending process
ending process
ending process
ending process
all processes terminated
process ended (should only see this once)

Those that have studied forking and threading in a class are often given examples similar to this. The purpose of this post is to demonstrate that with continuations you can achieve similar results within a single thread by saving and restoring its state - its continuation - manually. P.S. - I think I remember something similar to this in On Lisp, so if you'd like to see professional code you should check the book out. A: One way to think of a continuation is as a processor stack. When you "call-with-current-continuation c" it calls your function "c", and the parameter passed to "c" is your current stack with all your automatic variables on it (represented as yet another function, call it "k"). Meanwhile the processor starts off creating a new stack. When you call "k" it executes a "return from subroutine" (RTS) instruction on the original stack, jumping you back into the context of the original "call-with-current-continuation" ("call-cc" from now on) and allowing your program to continue as before. 
If you passed a parameter to "k" then this becomes the return value of the "call-cc". From the point of view of your original stack, the "call-cc" looks like a normal function call. From the point of view of "c", your original stack looks like a function that never returns. There is an old joke about a mathematician who captured a lion in a cage by climbing into the cage, locking it, and declaring himself to be outside the cage while everything else (including the lion) was inside it. Continuations are a bit like the cage, and "c" is a bit like the mathematician. Your main program thinks that "c" is inside it, while "c" believes that your main program is inside "k". You can create arbitrary flow-of-control structures using continuations. For instance you can create a threading library. "yield" uses "call-cc" to put the current continuation on a queue and then jumps in to the one on the head of the queue. A semaphore also has its own queue of suspended continuations, and a thread is rescheduled by taking it off the semaphore queue and putting it on to the main queue. A: Imagine if every single line in your program was a separate function. Each accepts, as a parameter, the next line/function to execute. Using this model, you can "pause" execution at any line and continue it later. You can also do inventive things like temporarily hop up the execution stack to retrieve a value, or save the current execution state to a database to retrieve later. A: Basically, a continuation is the ability for a function to stop execution and then pick back up where it left off at a later point in time. In C#, you can do this using the yield keyword. I can go into more detail if you wish, but you wanted a concise explanation. ;-) A: I'm still getting "used" to continuations, but one way to think about them that I find useful is as abstractions of the Program Counter (PC) concept. 
A PC "points" to the next instruction to execute in memory, but of course that instruction (and pretty much every instruction) points, implicitly or explicitly, to the instruction that follows, as well as to whatever instructions should service interrupts. (Even a NOOP instruction implicitly does a JUMP to the next instruction in memory. But if an interrupt occurs, that'll usually involve a JUMP to some other instruction in memory.) Each potentially "live" point in a program in memory to which control might jump at any given point is, in a sense, an active continuation. Other points that can be reached are potentially active continuations, but, more to the point, they are continuations that are potentially "calculated" (dynamically, perhaps) as a result of reaching one or more of the currently active continuations. This seems a bit out of place in traditional introductions to continuations, in which all pending threads of execution are explicitly represented as continuations into static code; but it takes into account the fact that, on general-purpose computers, the PC points to an instruction sequence that might potentially change the contents of memory representing a portion of that instruction sequence, thus essentially creating a new (or modified, if you will) continuation on the fly, one that doesn't really exist as of the activations of continuations preceding that creation/modification. So continuation can be viewed as a high-level model of the PC, which is why it conceptually subsumes ordinary procedure call/return (just as ancient iron did procedure call/return via low-level JUMP, aka GOTO, instructions plus recording of the PC on call and restoring of it on return) as well as exceptions, threads, coroutines, etc. So just as the PC points to computations to happen in "the future", a continuation does the same thing, but at a higher, more-abstract level. 
The PC implicitly refers to memory plus all the memory locations and registers "bound" to whatever values, while a continuation represents the future via the language-appropriate abstractions. Of course, while there might typically be just one PC per computer (core processor), there are in fact many "active" PC-ish entities, as alluded to above. The interrupt vector contains a bunch, the stack a bunch more, certain registers might contain some, etc. They are "activated" when their values are loaded into the hardware PC, but continuations are abstractions of the concept, not PCs or their precise equivalent (there's no inherent concept of a "master" continuation, though we often think and code in those terms to keep things fairly simple). In essence, a continuation is a representation of "what to do next when invoked", and, as such, can be (and, in some languages and in continuation-passing-style programs, often is) a first-class object that is instantiated, passed around, and discarded just like most any other data type, and much like how a classic computer treats memory locations vis-a-vis the PC -- as nearly interchangeable with ordinary integers. A: In C#, you have access to two continuations. One, accessed through return, lets a method continue from where it was called. The other, accessed through throw, lets a method continue at the nearest matching catch. Some languages let you treat these statements as first-class values, so you can assign them and pass them around in variables. What this means is that you can stash the value of return or of throw and call them later when you're really ready to return or throw. Continuation callback = return; callMeLater(callback); This can be handy in lots of situations. One example is like the one above, where you want to pause the work you're doing and resume it later when something happens (like getting a web request, or something). I'm using them in a couple of projects I'm working on. 
In one, I'm using them so I can suspend the program while I'm waiting for IO over the network, then resume it later. In the other, I'm writing a programming language where I give the user access to continuations-as-values so they can write return and throw for themselves - or any other control flow, like while loops - without me needing to do it for them. A: You probably understand them better than you think you did. Exceptions are an example of "upward-only" continuations. They allow code deep down the stack to call up to an exception handler to indicate a problem. Python example: try: broken_function() except SomeException: # jump to here pass def broken_function(): raise SomeException() # go back up the stack # stuff that won't be evaluated Generators are examples of "downward-only" continuations. They allow code to reenter a loop, for example, to create new values. Python example: def sequence_generator(i=1): while True: yield i # "return" this value, and come back here for the next i = i + 1 g = sequence_generator() while True: print g.next() In both cases, these had to be added to the language specifically whereas in a language with continuations, the programmer can create these things where they're not available. A: Think of threads. A thread can be run, and you can get the result of its computation. A continuation is a thread that you can copy, so you can run the same computation twice. A: Continuations have had renewed interest with web programming because they nicely mirror the pause/resume character of web requests. A server can construct a continaution representing a users session and resume if and when the user continues the session.
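To make the "every line is a function that takes the next line" description above concrete, here is a minimal continuation-passing-style sketch in Python. Python has no first-class call/cc, so this only imitates the idea by passing an explicit continuation `k`; the function names are invented for illustration:

```python
# Continuation-passing style: instead of returning a value,
# each function hands its result to a continuation `k`.
def add_cps(a, b, k):
    k(a + b)

def square_cps(x, k):
    k(x * x)

# Compute (2 + 3)^2 by chaining the continuations explicitly.
result = []
add_cps(2, 3, lambda s: square_cps(s, result.append))
print(result[0])  # 25
```

Because the "rest of the computation" is an ordinary value (`k`), it can be stored, passed around, or invoked later - which is exactly the property the answers above exploit for threads, exceptions, and generators.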
{ "language": "en", "url": "https://stackoverflow.com/questions/40632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Insert current date in Excel template at creation I'm building an Excel template (*.xlt) for a user here, and one of the things I want to do is have it insert the current date when a new document is created (i.e., when they double-click the file in Windows Explorer). How do I do this? Update: I should have added that I would prefer not to use any VBA (macros). If that's the only option, then so be it, but I'd really like to avoid forcing my user to remember to click some 'allow macro content' button. A: You could use the worksheet function =TODAY(), but obviously this would be updated to the current date whenever the workbook is recalculated. The only other method I can think of is, as 1729 said, to code the Workbook_Open event: Private Sub Workbook_Open() ThisWorkbook.Worksheets("Sheet1").Range("A1").Value = Date End Sub You can reduce the problem of needing the user to accept macros each time by digitally signing the template (in the VBA IDE: Tools | Digital Signature...) and selecting a digital certificate; however, you will need to get a certificate from a commercial certification authority (see http://msdn.microsoft.com/en-us/library/ms995347.aspx). The user will need to select to always trust this certificate the first time they run the template, but thereafter, they will not be prompted again. A: You can edit the default template for Excel - there is a file called Book.xlt in the XLSTART directory, normally located at C:\Program Files\Microsoft Office\Office\XLStart\ You should be able to add a macro called Workbook_Open Private Sub Workbook_Open() If ActiveWorkBook.Sheets(1).Range("A1") = "" Then ActiveWorkBook.Sheets(1).Range("A1") = Now End If End Sub My VBA is a little rusty, but you might find something like this works. A: To avoid VBA, and if you think your users might follow instructions, you could ask them to copy the date and then use Paste Special -> Values to set the date so that it won't change in future.
{ "language": "en", "url": "https://stackoverflow.com/questions/40637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Check if a record exists in a VB6 collection? I've inherited a large VB6 app at my current workplace. I'm kinda learning VB6 on the job and there are a number of problems I'm having. The major issue at the moment is I can't figure out how to check if a key exists in a Collection object. Can anyone help? A: I've always done it with a function like this: public function keyExists(myCollection as collection, sKey as string) as Boolean on error goto handleerror: dim val as variant val = myCollection(sKey) keyExists = true exit function handleerror: keyExists = false end function A: As pointed out by Thomas, you need to Set an object instead of Let. Here's a general function from my library that works for value and object types: Public Function Exists(ByVal key As Variant, ByRef col As Collection) As Boolean 'Returns True if item with key exists in collection On Error Resume Next Const ERR_OBJECT_TYPE As Long = 438 Dim item As Variant 'Try reach item by key item = col.item(key) 'If no error occurred, key exists If Err.Number = 0 Then Exists = True 'In cases where error 438 is thrown, it is likely that 'the item does exist, but is an object that cannot be Let ElseIf Err.Number = ERR_OBJECT_TYPE Then 'Try reach object by key Set item = col.item(key) 'If an object was found, the key exists If Not item Is Nothing Then Exists = True End If End If Err.Clear End Function As also advised by Thomas, you can change the Collection type to Object to generalize this. The .Item(key) syntax is shared by most collection classes, so that might actually be useful. EDIT Seems like I was beaten to the punch somewhat by Thomas himself. However for easier reuse I personally prefer a single function with no private dependencies. A: My standard function is very simple. This will work regardless of the element type, since it doesn't bother doing any assignment, it merely executes the collection property get.
Public Function Exists(ByVal oCol As Collection, ByVal vKey As Variant) As Boolean On Error Resume Next oCol.Item vKey Exists = (Err.Number = 0) Err.Clear End Function A: Using the error handler to catch cases when the key does not exist in the Collection can make debugging with the "break on all errors" option quite annoying. To avoid unwanted errors I quite often create a class which has the stored objects in a Collection and all keys in a Dictionary. Dictionary has an Exists(key) function, so I can call that before trying to get an object from the collection. You can only store strings in a Dictionary, so a Collection is still needed if you need to store objects. A: The statement "error handling will fail if an error handler is already active" is only partly right. You can have multiple error handlers within your routine. So, one could accommodate the same functionality in only one function. Just rewrite your code like this: Public Function Exists(col, index) As Boolean Dim v As Variant TryObject: On Error GoTo ExistsTryObject Set v = col(index) Exists = True Exit Function TryNonObject: On Error GoTo ExistsTryNonObject v = col(index) Exists = True Exit Function ExistsTryObject: ' This will reset your Err Handler Resume TryNonObject ExistsTryNonObject: Exists = False End Function However, if you were to only incorporate the code in the TryNonObject section of the routine, this would yield the same information. It will succeed for both objects and non-objects. It will speed up your code for non-objects, however, since you would only have to perform one single statement to assert that the item exists within the collection. A: @Mark Biek Your keyExists closely matches my standard Exists() function. To make the class more useful for COM-exposed collections and checking for numeric indexes, I'd recommend changing sKey and myCollection to not be typed. If the function is going to be used with a collection of objects, 'set' is required (on the line where val is set).
EDIT: It was bugging me that I've never noticed different requirements for an object-based and value-based Exists() function. I very rarely use collections for non-objects, but this seemed such a perfect bottleneck for a bug that would be so hard to track down when I needed to check for existence. Because error handling will fail if an error handler is already active, two functions are required to get a new error scope. Only the Exists() function need ever be called: Public Function Exists(col, index) As Boolean On Error GoTo ExistsTryNonObject Dim o As Object Set o = col(index) Exists = True Exit Function ExistsTryNonObject: Exists = ExistsNonObject(col, index) End Function Private Function ExistsNonObject(col, index) As Boolean On Error GoTo ExistsNonObjectErrorHandler Dim v As Variant v = col(index) ExistsNonObject = True Exit Function ExistsNonObjectErrorHandler: ExistsNonObject = False End Function And to verify the functionality: Public Sub TestExists() Dim c As New Collection Dim b As New Class1 c.Add "a string", "a" c.Add b, "b" Debug.Print "a", Exists(c, "a") ' True ' Debug.Print "b", Exists(c, "b") ' True ' Debug.Print "c", Exists(c, "c") ' False ' Debug.Print 1, Exists(c, 1) ' True ' Debug.Print 2, Exists(c, 2) ' True ' Debug.Print 3, Exists(c, 3) ' False ' End Sub A: Better solution would be to write a TryGet function. A lot of the time you are going to be checking exists, and then getting the item. Save time by doing it at the same time. public Function TryGet(key as string, col as collection) as Variant on error goto errhandler Set TryGet= col(key) exit function errhandler: Set TryGet = nothing end function A: see http://www.visualbasic.happycodings.com/Other/code10.html the implementation here has the advantage of also optionally returning the found element, and works with object/native types (according to the comments). 
reproduced here since the link is no longer available: Determine if an item exists in a collection The following code shows you how to determine if an item exists within a collection. Option Explicit 'Purpose : Determines if an item already exists in a collection 'Inputs : oCollection The collection to test for the existance of the item ' vIndex The index of the item. ' [vItem] See Outputs 'Outputs : Returns True if the item already exists in the collection. ' [vItem] The value of the item, if it exists, else returns "empty". 'Notes : 'Example : Function CollectionItemExists(vIndex As Variant, oCollection As Collection, Optional vItem As Variant) As Boolean On Error GoTo ErrNotExist 'Clear output result If IsObject(vItem) Then Set vItem = Nothing Else vItem = Empty End If If VarType(vIndex) = vbString Then 'Test if item exists If VarType(oCollection.Item(CStr(vIndex))) = vbObject Then 'Return an object Set vItem = oCollection.Item(CStr(vIndex)) Else 'Return an standard variable vItem = oCollection.Item(CStr(vIndex)) End If Else 'Test if item exists If VarType(oCollection.Item(Int(vIndex))) = vbObject Then 'Return an object Set vItem = oCollection.Item(Int(vIndex)) Else 'Return an standard variable vItem = oCollection.Item(Int(vIndex)) End If End If 'Return success CollectionItemExists = True Exit Function ErrNotExist: CollectionItemExists = False On Error GoTo 0 End Function 'Demonstration routine Sub Test() Dim oColl As New Collection, oValue As Variant oColl.Add "red1", "KEYA" oColl.Add "red2", "KEYB" 'Return the two items in the collection Debug.Print CollectionItemExists("KEYA", oColl, oValue) Debug.Print "Returned: " & oValue Debug.Print "-----------" Debug.Print CollectionItemExists(2, oColl, oValue) Debug.Print "Returned: " & oValue 'Should fail Debug.Print CollectionItemExists("KEYC", oColl, oValue) Debug.Print "Returned: " & oValue Set oColl = Nothing End Sub * *See more at: 
https://web.archive.org/web/20140723190623/http://visualbasic.happycodings.com/other/code10.html#sthash.MlGE42VM.dpuf A: While looking for a function like this, I designed it as follows. This should work with objects and non-objects without assigning new variables. Public Function Exists(ByRef Col As Collection, ByVal Key) As Boolean On Error GoTo KeyError If Not Col(Key) Is Nothing Then Exists = True Else Exists = False End If Exit Function KeyError: Err.Clear Exists = False End Function
{ "language": "en", "url": "https://stackoverflow.com/questions/40651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Validating a HUGE XML file I'm trying to find a way to validate a large XML file against an XSD. I saw the question ...best way to validate an XML... but the answers all pointed to using the Xerces library for validation. The only problem is, when I use that library to validate a 180 MB file then I get an OutOfMemoryException. Are there any other tools, libraries, or strategies for validating a larger-than-normal XML file? EDIT: The SAX solution worked for Java validation, but the other two suggestions for the libxml tool were very helpful as well for validation outside of Java. A: Use libxml, which performs validation and has a streaming mode. A: Instead of using a DOMParser, use a SAXParser. This reads from an input stream or reader so you can keep the XML on disk instead of loading it all into memory. SAXParserFactory factory = SAXParserFactory.newInstance(); factory.setValidating(true); factory.setNamespaceAware(true); SAXParser parser = factory.newSAXParser(); XMLReader reader = parser.getXMLReader(); reader.setErrorHandler(new SimpleErrorHandler()); reader.parse(new InputSource(new FileReader("document.xml"))); A: Personally I like to use XMLStarlet which has a command line interface, and works on streams. It is a set of tools built on Libxml2. A: SAX and libXML will help, as already mentioned. You could also try increasing the maximum heap size for the JVM using the -Xmx option. E.g. to set the maximum heap size to 512MB: java -Xmx512m com.foo.MyClass
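For completeness, the same streaming idea exists in Python's standard library: xml.etree.ElementTree.iterparse walks a document incrementally, so memory stays bounded even for very large files. Note that it does not perform XSD validation (the libxml-based tools above cover that); this is only a sketch of the event-driven approach the SAX answer describes:

```python
import xml.etree.ElementTree as ET
from io import BytesIO

# Build a document in memory for demonstration; in practice you
# would pass the filename of a multi-gigabyte file instead.
doc = b"<root>" + b"<item>x</item>" * 10000 + b"</root>"

count = 0
# iterparse yields each element as its end tag is seen, so the
# whole tree is never held in memory at once.
for event, elem in ET.iterparse(BytesIO(doc), events=("end",)):
    if elem.tag == "item":
        count += 1
        elem.clear()  # discard the element's content as we go

print(count)  # 10000
```

The elem.clear() call is what keeps memory flat: each processed element is emptied as soon as it has been handled, rather than accumulating under the root.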
{ "language": "en", "url": "https://stackoverflow.com/questions/40663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: *= in Sybase SQL I'm maintaining some code that uses a *= operator in a query to a Sybase database and I can't find documentation on it. Does anyone know what *= does? I assume that it is some sort of a join. select * from a, b where a.id *= b.id I can't figure out how this is different from: select * from a, b where a.id = b.id A: It means outer join, a simple = means inner join. *= is LEFT JOIN and =* is RIGHT JOIN. (or vice versa, I keep forgetting since I'm not using it any more, and Google isn't helpful when searching for *=) A: Of course, you should write it this way: SELECT * FROM a LEFT JOIN b ON b.id=a.id The a,b syntax is evil. A: ANSI-82 syntax select * from a , b where a.id *= b.id ANSI-92 select * from a left outer join b on a.id = b.id A: From http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.dc34982_1500/html/mig_gde/mig_gde160.htm: Inner and outer tables The terms outer table and inner table describe the placement of the tables in an outer join: * *In a left join, the outer table and inner table are the left and right tables respectively. The outer table and inner table are also referred to as the row-preserving and null-supplying tables, respectively. *In a right join, the outer table and inner table are the right and left tables respectively. For example, in the queries below, T1 is the outer table and T2 is the inner table: * *T1 left join T2 *T2 right join T1 Or, using Transact-SQL syntax: * *T1 *= T2 *T2 =* T1 A: select * from a, b where a.id = b.id Requires that a row exist in where b.id = a.id in order to return an answer select * from a, b where a.id *= b.id Will fill the columns from b with nulls when there wasn't a row in b where b.id = a.id.
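The NULL-filling behaviour described in the answers above is easy to demonstrate with the ANSI-92 syntax in any modern database. Here is a small sketch using SQLite via Python - the table names match the question, everything else is made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a (id INTEGER);
    CREATE TABLE b (id INTEGER);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (1);
""")

# Inner join (plain a.id = b.id): only rows of a with a match in b.
inner = con.execute(
    "SELECT a.id, b.id FROM a JOIN b ON a.id = b.id ORDER BY a.id"
).fetchall()

# Left outer join - the modern spelling of Sybase's a.id *= b.id:
# unmatched rows of a are kept, with NULL filling b's columns.
outer = con.execute(
    "SELECT a.id, b.id FROM a LEFT JOIN b ON a.id = b.id ORDER BY a.id"
).fetchall()

print(inner)  # [(1, 1)]
print(outer)  # [(1, 1), (2, None)]
```

Row 2 of table a appears only in the outer-join result, carrying NULL (Python's None) for b.id - exactly the difference between = and *=.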
{ "language": "en", "url": "https://stackoverflow.com/questions/40665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How do I get the full url of the page I am on in C# I need to be able to get at the full URL of the page I am on from a user control. Is it just a matter of concatenating a bunch of Request variables together? If so, which ones? Or is there a simpler way? A: If you need the full URL - everything from the http to the querystring - you will need to concatenate the following variables: Request.ServerVariables["HTTPS"] // to check if it's HTTP or HTTPS Request.ServerVariables["SERVER_NAME"] Request.ServerVariables["SCRIPT_NAME"] Request.ServerVariables["QUERY_STRING"] A: Request.RawUrl A: Request.Url.AbsoluteUri This property does everything you need, all in one succinct call. A: Better to use Request.Url.OriginalString than Request.Url.ToString() (according to MSDN) A: Thanks guys, I used a combination of both your answers @Christian and @Jonathan for my specific need. "http://" + Request.ServerVariables["SERVER_NAME"] + Request.RawUrl.ToString() I don't need to worry about secure http, needed the servername variable and the RawUrl handles the path from the domain name and includes the querystring if present.
A: Here is a list I normally refer to for this type of information: Request.ApplicationPath : /virtual_dir Request.CurrentExecutionFilePath : /virtual_dir/webapp/page.aspx Request.FilePath : /virtual_dir/webapp/page.aspx Request.Path : /virtual_dir/webapp/page.aspx Request.PhysicalApplicationPath : d:\Inetpub\wwwroot\virtual_dir\ Request.QueryString : /virtual_dir/webapp/page.aspx?q=qvalue Request.Url.AbsolutePath : /virtual_dir/webapp/page.aspx Request.Url.AbsoluteUri : http://localhost:2000/virtual_dir/webapp/page.aspx?q=qvalue Request.Url.Host : localhost Request.Url.Authority : localhost:80 Request.Url.LocalPath : /virtual_dir/webapp/page.aspx Request.Url.PathAndQuery : /virtual_dir/webapp/page.aspx?q=qvalue Request.Url.Port : 80 Request.Url.Query : ?q=qvalue Request.Url.Scheme : http Request.Url.Segments : / virtual_dir/ webapp/ page.aspx Hopefully you will find this useful! A: For ASP.NET Core you'll need to spell it out: var request = Context.Request; @($"{ request.Scheme }://{ request.Host }{ request.Path }{ request.QueryString }") Or you can add a using statement to your view: @using Microsoft.AspNetCore.Http.Extensions then @Context.Request.GetDisplayUrl() The _ViewImports.cshtml might be a better place for that @using A: I usually use Request.Url.ToString() to get the full url (including querystring), no concatenation required. A: If you need the port number also, you can use Request.Url.Authority Example: string url = Request.Url.Authority + HttpContext.Current.Request.RawUrl.ToString(); if (Request.ServerVariables["HTTPS"] == "on") { url = "https://" + url; } else { url = "http://" + url; } A: Try the following - var FullUrl = Request.Url.AbsolutePath.ToString(); var ID = FullUrl.Split('/').Last();
{ "language": "en", "url": "https://stackoverflow.com/questions/40680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "175" }
Q: Tips / Resources for building a Google Chrome plugin After test driving Google Chrome for 30 minutes or so, I like it, even if it seems bare-bones at the moment. The obvious way to add a few things I can't live without would be through plugins. Does anyone have any links to resources on how to get started building a plugin/addon for Chrome? Thanks. A: Matt Cutts (the Google SEO guru) has a Q&A about chrome, and writes about it: Q: But I can’t install extension X! Google Chrome is dead to me if I can’t use extension X! A: Then you’ll have to use another browser for a while. Google Chrome currently doesn’t support browser extensions (it does support plug-ins, such as Flash). I’m sure that extensions/add-ons are something that the Chrome team would like to do down the road, but the Chrome team will be a bit busy for a while, what with the feedback from the launch plus working on Mac and Linux support. I’d suggest that you give Google Chrome a try for a few days to see if enjoy browsing even without extension X. A lot of really cool extension-like behaviors such as resize-able textareas and drag-and-drop file upload are already built into Google Chrome. A: Q: But I can’t install extension X! Google Chrome is dead to me if I can’t use extension X! A: No worries! Now google chrome has extensions too. Look here. If anyone's interested in chrome extension development here is a link to the latest extension developers documentation page for Google chrome. NOTE: Plugins (NPAPI) and extensions(JS Based) are not the same From the doc... Extensions are small software programs that can modify and enhance the functionality of Google Chrome. You write them using web technologies like HTML, JavaScript, and CSS. So if you know how to write web pages, you already know most of what you need to know to write extensions. A: Chromium supports NPAPI plugins which is harder to program compared to Firefox extensions. However NPAPI has better performance and is more versatile. 
Check out this minimalistic example of an NPAPI plugin. A: Chrome does support the Netscape plugin API, but that is for displaying certain kinds of content. You seem to be after an extension API; really, Firefox is the only major browser to encourage and support third-party extensions to browsing capability (that aren't simply new toolbars). Nothing in the developer documentation points to a browser-enhancing API - Google seems to want to keep a tight rein on the look and feel of the application. You might find a more conclusive answer on the development site: dev.chromium.org, and some of the developers might be on IRC in #chromium on Freenode. A: Chrome now supports extensions and themes. Here is the documentation for developing extensions, and this is a page which describes theme creation.
{ "language": "en", "url": "https://stackoverflow.com/questions/40689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Possible to create REST web service with ASP.NET 2.0 Is it possible to create a REST web service using ASP.NET 2.0? The articles and blog entries I am finding all seem to indicate that ASP.NET 3.5 with WCF is required to create REST web services with ASP.NET. If it is possible to create REST web services in ASP.NET 2.0, can you provide an example? Thanks! A: I have actually created a REST web service with ASP.NET 2.0. It's really no different than creating a web page. When I did it, I really didn't have much time to research how to do it with an asmx file so I did it in a standard aspx file. I know there is extra overhead by doing it this way but as a first revision it was fine. protected void Page_Load(object sender, EventArgs e) { using (XmlWriter xm = XmlWriter.Create(Response.OutputStream, GetXmlSettings())) { //do your stuff xm.Flush(); } } /// <summary> /// Create Xml Settings object to properly format the output of the xml doc. /// </summary> private static XmlWriterSettings GetXmlSettings() { XmlWriterSettings xmlSettings = new XmlWriterSettings(); xmlSettings.Indent = true; xmlSettings.IndentChars = " "; return xmlSettings; } That should be enough to get you started, I will try and post more later. Also if you need basic authentication for your web service it can be done, but it needs to be done manually if you aren't using Active Directory. A: It is definitely possible to create RESTful web services using ASP.NET. If you are starting a new project I would definitely look into creating RESTful web services using WCF. The 3.5 .NET Framework allows you to specify a RESTful endpoint along with a regular old SOAP endpoint and still deliver the same service.
All you really have to do is enable an endpoint behavior that calls out <webHttp /> Here is a good series on creating RESTful web services using WCF: http://blogs.msdn.com/bags/archive/2008/08/05/rest-in-wcf-blog-series-index.aspx A: You can certainly create RESTful web services in ASP.NET 2.0, for example, but there are no high-level APIs to do all the donkey work for you, as provided by WCF in .NET 3.5. A: Well, of course you could always implement the spec yourself. It's just that there's nothing built-in to support it. If you use Nathan Lee's solution, do it as an HTTP handler (.ashx) rather than an .aspx page. You can just about copy/paste his code into a new handler file. A: You can do RESTful web services easily by implementing the spec using IHTTPHandlers. A: Also check out using ASP.Net MVC. I've written some articles on this at my blog: http://shouldersofgiants.co.uk/Blog/ Look for my Creating a RESTful Web Service Using ASP.Net MVC series A: I'm only just beginning to use them, but from what I've seen, 2.0 pretty much assumes SOAP.
{ "language": "en", "url": "https://stackoverflow.com/questions/40692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Where can I find and submit bug reports on Google's Chrome browser? It will be important for developers wanting to develop for the Chrome browser to be able to review existing bugs (to avoid too much pulling-out of hair), and to add new ones (to improve the thing). Yet I can't seem to find the bug tracking for this project. It is open source, right? A: Click the Chrome menu, then Help > Report an issue.... Since Google Code has been deprecated, you can also go to bugs.chromium.org in order to report new bugs and features or search for existing ones. The Chrome browser is under the Chromium category, so after logging in, you can submit a new bug report by clicking New issue in the top-left corner and following the wizard steps. See: Report a problem or send feedback on Chrome at Chrome Help A: From the Google Site * *Click the Page menu. *Select Report a bug or broken website. *Choose an issue type from the drop-down menu. The web address of the webpage you're on is recorded automatically. *If possible, add key details in the 'Description' field, including steps to reproduce the issue you're experiencing. *Keep 'Send source of current page' and 'Send screenshot of current page' checkboxes selected. *Click the Send report button to report a Google Chrome bug. I don't see any reference to public bug tracking... A: The Google Code site for the Chrome project is available at: http://code.google.com/chromium/ Facilities available allow you to: *File bug reports; *Join the Google group discussions; *Submit a patch; Plus there are links to the development blog and a whole bunch of other useful stuff. A: See the Issues tab on Chrome's Google Code page. A: The bug tracker is here: https://bugs.chromium.org/p/chromium/ Some shortcuts: * *crbug.com *crbug.com/new (file a new bug) *crbug.com/150835 (go to a bug by number) A: Go to the wrench -> About Google Chrome -> report an issue.
(accurate as of v19.0.1084.46) A: Google is calling it Chromium on Google Code. The Chromium Bug Reporting Page is there and has the link to submit bugs listed. (Google account required.) Here's a direct link to the bug report form. A: This is the home page for the code: http://code.google.com/chromium/ And here's more info: http://dev.chromium.org/getting-involved including the bug list
{ "language": "en", "url": "https://stackoverflow.com/questions/40703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "163" }
Q: Python deployment and /usr/bin/env portability At the beginning of all my executable Python scripts I put the shebang line: #!/usr/bin/env python I'm running these scripts on a system where env python yields a Python 2.2 environment. My scripts quickly fail because I have a manual check for a compatible Python version: if sys.version_info < (2, 4): raise ImportError("Cannot run with Python version < 2.4") I don't want to have to change the shebang line on every executable file if I can avoid it; however, I don't have administrative access to the machine to change the result of env python and I don't want to force a particular version, as in: #!/usr/bin/env python2.4 I'd like to avoid this because the system may have a newer version than Python 2.4, or may have Python 2.5 but no Python 2.4. What's the elegant solution? [Edit:] I wasn't specific enough in posing the question -- I'd like to let users execute the scripts without manual configuration (e.g. path alteration or symlinking in ~/bin and ensuring your PATH has ~/bin before the Python 2.2 path). Maybe some distribution utility is required to prevent the manual tweaks? A: "env" simply executes the first thing it finds in the PATH env var. To switch to a different python, prepend the directory for that python's executable to the path before invoking your script. A: Pretty hackish solution - if your check fails, use this function (which probably could be significantly improved) to determine the best interpreter available, determine if it is acceptable, and if so relaunch your script with os.system or something similar and your sys.argv using the new interpreter.
import os
import glob

def best_python():
    plist = []
    for i in os.getenv("PATH").split(":"):
        # glob.glob already returns full paths, so no second join is needed
        plist.extend(glob.glob(os.path.join(i, "python2.[0-9]")))
    plist.sort()
    plist.reverse()
    if len(plist) == 0:
        return None
    return plist[0]

A: If you are running the scripts then you can set your PATH variable to point to a private bin directory first: $ mkdir ~/bin $ ln -s `which python2.4` ~/bin/python $ export PATH=~/bin:$PATH Then when you execute your python script it'll use python 2.4. You'll have to change your login scripts to change your PATH. Alternatively run your python script with the explicit interpreter you want: $ /path/to/python2.4 <your script> A: @morais: That's an interesting idea, but I think maybe we can take it one step farther. Maybe there's a way to use Ian Bicking's virtualenv to: * *See if we're running in an acceptable environment to begin with, and if so, do nothing. *Check if there exists a version-specific executable on the PATH, i.e. check if python2.x exists for x in reverse(range(4, 10)). If so, re-run the command with the better interpreter. *If no better interpreter exists, use virtualenv to try and install a newer version of Python from the older version of Python and get any prerequisite packages. I have no idea if virtualenv is capable of this, so I'll go mess around with it sometime soon. :) A: Here's a solution if you're (1) absolutely set on using shebangs and (2) able to use Autotools in your build process. I just found last night that you can use the autoconf macro AM_PATH_PYTHON to find a minimal Python 2 binary. The how-to is here. So, your process would be: * *Issue an AM_PATH_PYTHON(2.4) in your configure.ac *Rename all of your .py scripts to .py.in (in my experience, this doesn't confuse vi) *Name all of those Python scripts you want to generate with AC_CONFIG_FILES. *Instead of starting with #!/usr/bin/env python, use #!@PYTHON@ Then your resultant Python scripts will always have an appropriate shebang.
So, you have this solution; it is at least possible, if not exactly practical.
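Pulling these answers together, here is a hedged sketch (my own illustration, not code from any of the answers above) of a bootstrap that re-execs the script under the newest suitable interpreter on PATH. The helper names and the minimum-version policy are invented for this example; versions are compared as numeric tuples rather than sorted strings, so python3.10 would correctly beat python3.9:

```python
import os
import re
import sys

def pick_best_python(names, minimum=(2, 4)):
    """From a list of interpreter file names (e.g. 'python2.5'), return
    the one with the highest version >= minimum, or None if none qualify.
    Versions are compared numerically, not lexicographically."""
    best = None
    best_version = minimum
    for name in names:
        m = re.match(r"python(\d+)\.(\d+)$", name)
        if not m:
            continue
        version = (int(m.group(1)), int(m.group(2)))
        if version >= best_version:
            best, best_version = name, version
    return best

def find_interpreters():
    """Collect candidate pythonX.Y executables from every directory on PATH."""
    found = {}
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        try:
            entries = os.listdir(directory)
        except OSError:
            continue  # unreadable or missing PATH entry
        for entry in entries:
            if re.match(r"python\d+\.\d+$", entry):
                found.setdefault(entry, os.path.join(directory, entry))
    return found

def reexec_if_needed(minimum=(2, 4)):
    """If the running interpreter is too old, re-exec under a better one."""
    if sys.version_info[:2] >= minimum:
        return
    found = find_interpreters()
    best = pick_best_python(found.keys(), minimum)
    if best is None:
        raise SystemExit("No Python >= %d.%d found on PATH" % minimum)
    os.execv(found[best], [found[best]] + sys.argv)
```

A script would call reexec_if_needed() as its very first statement, before importing anything that requires the newer Python.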
{ "language": "en", "url": "https://stackoverflow.com/questions/40705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Is solving the halting problem easier than people think? Although the general case is undecidable, many people still do solve equivalent problems well enough for day-to-day use. In Cohen's PhD thesis on computer viruses, he showed that virus scanning is equivalent to the halting problem, yet we have an entire industry based around this challenge. I have also seen Microsoft's Terminator project - http://research.microsoft.com/Terminator/ Which leads me to ask: is the halting problem overrated - do we need to worry about the general case? Will types become Turing complete over time - dependent types do seem like a good development? Or, to look the other way, will we begin to use non-Turing-complete languages to gain the benefits of static analysis? A: There are plenty of programs for which the halting problem can be solved and plenty of those programs are useful. If you had a compiler that would tell you "Halts", "Doesn't halt", or "Don't know" then it could tell you which part of the program caused the "Halt" or "Don't know" condition. If you really wanted a program that definitely halted or didn't halt then you'd fix those "don't know" units in much the same way we get rid of compiler warnings. I think we would all be surprised at how often trying to solve this generally-impossible problem proved useful. A: As a day-to-day programmer, I'd say it's worthwhile to continue as far down the path to solving halting-style problems, even if you only approach that limit and never reach it. As you pointed out, virus scanning proves valuable. Google search doesn't pretend to be the absolute answer to "find me the best X for Y," but it's also notably useful. If I unleash a novel virus (muwahaha), does that create a bigger solution set, or just cast light on an existing problem area? Regardless of the technical difference, some will pragmatically develop and charge for follow-up "detection and removal" services.
I look forward to real scientific answers for your other questions... A: Is solving the halting problem easier than people think? I think it is exactly as difficult as people think. Will types become Turing complete over time? My dear, they already are! Dependent types do seem like a good development? Very much so. I think there could be a growth in non-Turing-complete-but-provable languages. For quite some time, SQL was in this category (it isn't any more), but this didn't really diminish its utility. There is certainly a place for such systems, I think. A: First: The Halting Problem is not a "problem" in a practical sense, as in "a problem that needs to be solved." It is rather a statement about the nature of mathematics, analogous to Gödel's Incompleteness Theorem. Second: The fact that building a perfect virus scanner is intractable (due to its being equivalent to the Halting Problem) is precisely the reason that there is "an entire industry built around this challenge." If an algorithm for perfect virus scanning could be designed, it would simply be a matter of someone doing it once, and then there's no need for an industry any more. Story over. Third: Working in a Turing-complete language does not eliminate "the benefits of static analysis" -- it merely means that there are limits to the static analysis. That's ok -- there are limits to almost everything we do, anyway. Finally: If the Halting Problem could be "solved" in any way, it would definitely be "easier than people think", as Turing demonstrated that it is unsolvable. The general case is the only relevant case, from a mathematical standpoint. Specific cases are matters of engineering. A: The Halting Problem is really only interesting if you look at it in the general case, since if the Halting problem were decidable, all other undecidable problems would also be decidable via reduction. So, my opinion on this question is, no, it is not easy in the cases that matter.
That said, in the real world, it may not be such a big deal. See also: http://en.wikipedia.org/wiki/Halting_problem#Importance_and_consequences A: Incidentally, I think that the Turing completeness of templates shows that halting is overrated. Most languages guarantee that their compilers will halt; not so C++. Does this diminish C++ as a language? I don't think so; it has many flaws, but compilations that don't always halt aren't one of them. A: I don't know how hard people think it is, so I can't say if it is easier. However, you are right in your observation that the undecidability of a problem (in general) does not mean that all instances of that problem are undecidable. For instance, I can easily tell you that a program like while false do something terminates (assuming the obvious semantics of the while and false). Projects like the Terminator project you mentioned obviously exist (and probably even work in some cases), so it is clear that not all is hopeless. There is also a contest (I believe every year) for tools that try to prove termination for rewrite systems, which are basically a model of computation. But it is the case that termination in many cases is very hard to prove. The easiest way to look at it is perhaps to see the undecidability as a maximum on the complexity of instantiations of a problem. Each instantiation is somewhere on the scale of trivial to this maximum, and with a higher maximum the instantiations are typically harder on average as well.
I agree with DrPizza: the termination problem is exactly as difficult as people think. Moreover, the fact that we do not necessarily have to worry about the general case does not imply that the termination problem is overrated: it is worth looking for partial solutions because we know that the general solution is hard. Finally, the issues about dependent types and subrecursive languages, although partially related, are really different questions, and I am not sure I see the point of mixing them all together. A: 001 int D(int (*x)()) 002 { 003 int Halt_Status = H(x, x); 004 if (Halt_Status) 005 HERE: goto HERE; 006 return Halt_Status; 007 } 008 009 int main() 010 { 011 Output("Input_Halts = ", H(D,D)); 012 } H correctly predicts that D(D) will never stop running unless H aborts its simulation of its input. (a) If simulating halt decider H correctly simulates its input D until H correctly determines that its simulated D could not possibly reach its own "return" statement in a finite number of simulated steps then: (b) H can abort its simulation of D and correctly report that D specifies a non-halting sequence of configurations. When it is understood that (b) is a necessary consequence of (a) and we can see that (a) has been met then we understand that H(D,D) could correctly determine the halt status of its otherwise "impossible" input. Simulating halt deciders applied to the halting theorem The above is fully operational code in the x86utm operating system. Because H correctly detects that D correctly simulated by H would continue to call H(D,D) never reaching its own "return" statement H aborts its simulation of D and returns 0 to main() on line 011. I finally have agreement on this key point: H(D,D) does correctly compute the mapping from its input to its reject state on the basis that H correctly predicts that D correctly simulated by H would never halt. I am the original author of this work and anything that you find on the internet about this was written by me.
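To make the first answer's three-valued compiler ("Halts", "Doesn't halt", "Don't know") concrete, here is a toy sketch in Python, entirely my own illustration and not related to the Terminator project. It decides termination only for trivially easy shapes of code and answers "don't know" for everything else, which is exactly where the undecidability hides:

```python
import ast

def classify_termination(source):
    """Toy three-valued termination check for a single Python function.
    Returns 'halts', "doesn't halt", or "don't know". Deliberately
    conservative: only trivial cases are decided, everything else falls
    into the third answer."""
    tree = ast.parse(source)
    loops = [n for n in ast.walk(tree) if isinstance(n, (ast.While, ast.For))]
    calls = [n for n in ast.walk(tree) if isinstance(n, ast.Call)]
    if not loops and not calls:
        return "halts"  # straight-line code with no calls always terminates
    for loop in loops:
        if (isinstance(loop, ast.While)
                and isinstance(loop.test, ast.Constant)
                and loop.test.value is True):
            # 'while True:' with no break/return/raise inside never ends
            escapes = [n for n in ast.walk(loop)
                       if isinstance(n, (ast.Break, ast.Return, ast.Raise))]
            if not escapes:
                return "doesn't halt"
    return "don't know"
```

A loop like `while n > 0: n -= 1` obviously terminates to a human reader, but this checker reports "don't know"; proving it requires the kind of ranking-function reasoning that termination provers do, and no amount of added cleverness removes the "don't know" bucket entirely.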
{ "language": "en", "url": "https://stackoverflow.com/questions/40716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Should you register new extensions with Apple? Do I need to register new extension types with Apple before I release an application that would create them on OS X? A: No, there's no need to register extensions. A: As a follow up, there is a little more information in the FAQs at the Apple Developer Connection (ADC) website: http://developer.apple.com/faq/datatype.html
{ "language": "en", "url": "https://stackoverflow.com/questions/40719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Server centered vs. client centered architecture For a typical business application, should the focus be on client processing via AJAX i.e. pull the data from the server and process it on the client or would you suggest a more classic ASP.Net approach with the server being responsible for handling most of the UI events? I find it hard to come up with a good 'default architecture' from which to start. Maybe someone has an open source example application which they could recommend. A: It depends greatly on the application and user. In the general case, however, you'll always scale better and the user will have a better experience if as much of the processing as possible happens on the client. Further, with Google Gears and other such frameworks it's possible to separate the client from the network and still have use of the application. If all the UI is on the server it's much harder to roll out a roaming solution. A: It really depends on the application and the situation, but just keep in mind that every hit to the server is costly, both in adding load (perhaps minimally), but also in terms of UI responsiveness. I am of the mind that doing things in JavaScript when possible is a good idea, if it can make your UI feel snappier. Of course, it all depends on what you are trying to do, and whether it matters if the UI is snappy (an internal web app probably doesn't NEED extra development to make the UI more attractive and quicker/easier to use, whereas something that is used by the general public by a mass audience probably needs to be as polished and tuned as possible). A: Do you need to trust the data? If so, be aware that it's trivial to tamper with client-processed data in nasty and malicious ways. If that's the case, you'll want to process info on the server. Also, be aware that it can be a lot harder to code javascript apps so they are stable, reliable, and bug free. Can you lock down your users so they only use one particular browser?
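As a sketch of the last answer's point about never trusting client-processed data, here is a hypothetical server-side check in Python. The product names, prices, and function are invented for illustration; the pattern is simply that the server recomputes anything security-relevant from its own data instead of believing the client's arithmetic:

```python
# Hypothetical example: the client computes an order total for snappy UI
# feedback, but the server recomputes it from a trusted price list before
# accepting the order. Prices are in cents to avoid float arithmetic.
PRICE_LIST = {"widget": 250, "gadget": 999}

def validate_order(items, client_total):
    """items: list of (product, quantity) pairs sent by the client.
    Recompute the total server-side and reject tampered orders."""
    server_total = 0
    for product, quantity in items:
        if product not in PRICE_LIST or quantity <= 0:
            raise ValueError("unknown product or bad quantity: %r" % (product,))
        server_total += PRICE_LIST[product] * quantity
    if server_total != client_total:
        raise ValueError("client total %d does not match server total %d"
                         % (client_total, server_total))
    return server_total
```

The client-side JavaScript still does the same sum for responsiveness; the duplication is the price of keeping the UI snappy without trusting the browser.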
{ "language": "en", "url": "https://stackoverflow.com/questions/40723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What is the best way to give a C# auto-property an initial value? How do you give a C# auto-property an initial value? I either use the constructor, or revert to the old syntax. Using the Constructor: class Person { public Person() { Name = "Initial Name"; } public string Name { get; set; } } Using normal property syntax (with an initial value) private string name = "Initial Name"; public string Name { get { return name; } set { name = value; } } Is there a better way? A: You can simply write it like this: public sealed class Employee { public int Id { get; set; } = 101; } A: Sometimes I use this, if I don't want it to be actually set and persisted in my db: class Person { private string _name; public string Name { get { return string.IsNullOrEmpty(_name) ? "Default Name" : _name; } set { _name = value; } } } Obviously if it's not a string then I might make the object nullable ( double?, int? ) and check if it's null, return a default, or return the value it's set to. Then I can make a check in my repository to see if it's my default and not persist, or make a backdoor check to see the true status of the backing value, before saving. A: In the constructor. The constructor's purpose is to initialize its data members. A: In C# 6.0 this is a breeze! You can do it in the class declaration itself, in the property declaration statements. public class Coordinate { public int X { get; set; } = 34; // get or set auto-property with initializer public int Y { get; } = 89; // read-only auto-property with initializer public int Z { get; } // read-only auto-property with no initializer // so it has to be initialized from constructor public Coordinate() // .ctor() { Z = 42; } } A: private string name; public string Name { get { if(name == null) { name = "Default Name"; } return name; } set { name = value; } } A: Have you tried using the DefaultValueAttribute or ShouldSerialize and Reset methods in conjunction with the constructor?
I feel like one of these two methods is necessary if you're making a class that might show up on the designer surface or in a property grid. A: Starting with C# 6.0, we can assign a default value to auto-implemented properties. public string Name { get; set; } = "Some Name"; We can also create a read-only auto-implemented property like: public string Name { get; } = "Some Name"; See: C# 6: First reactions , Initializers for automatically implemented properties - By Jon Skeet A: Use the constructor because "When the constructor is finished, construction should be finished". Properties are like states your classes hold; if you had to initialize a default state, you would do that in your constructor. A: In C# 6.0 and greater, you can do: For read-only properties public int ReadOnlyProp => 2; For both writable and readable properties public string PropTest { get; set; } = "test"; In the current version of C# (7.0), you can do: (The snippet rather displays how you can use expression-bodied get/set accessors to make it more compact when used with backing fields) private string label = "Default Value"; // Expression-bodied get / set accessors. public string Label { get => label; set => this.label = value; } A: C# 9.0 added support for the init keyword - a very useful and sophisticated way to declare read-only auto-properties: Declare: class Person { public string Name { get; init; } = "Anonymous user"; } ~Enjoy~ Use: // 1. Person with default name var anonymous = new Person(); Console.WriteLine($"Hello, {anonymous.Name}!"); // > Hello, Anonymous user! // 2. Person with assigned value var me = new Person { Name = "@codez0mb1e"}; Console.WriteLine($"Hello, {me.Name}!"); // > Hello, @codez0mb1e! // 3.
Attempt to re-assign Name me.Name = "My fake"; // > Compilation error: Init-only property can only be assigned in an object initializer A: Edited on 1/2/15 C# 6 : With C# 6 you can initialize auto-properties directly (finally!), there are now other answers that describe that. C# 5 and below: Though the intended use of the attribute is not to actually set the values of the properties, you can use reflection to always set them anyway... public class DefaultValuesTest { public DefaultValuesTest() { foreach (PropertyDescriptor property in TypeDescriptor.GetProperties(this)) { DefaultValueAttribute myAttribute = (DefaultValueAttribute)property.Attributes[typeof(DefaultValueAttribute)]; if (myAttribute != null) { property.SetValue(this, myAttribute.Value); } } } public void DoTest() { var db = DefaultValueBool; var ds = DefaultValueString; var di = DefaultValueInt; } [System.ComponentModel.DefaultValue(true)] public bool DefaultValueBool { get; set; } [System.ComponentModel.DefaultValue("Good")] public string DefaultValueString { get; set; } [System.ComponentModel.DefaultValue(27)] public int DefaultValueInt { get; set; } } A: To clarify, yes, you need to set default values in the constructor for class derived objects. You will need to ensure the constructor exists with the proper access modifier for construction where used. If the object is not instantiated, e.g. it has no constructor (e.g. static methods) then the default value can be set by the field. The reasoning here is that the object itself will be created only once and you do not instantiate it. @Darren Kopp - good answer, clean, and correct. And to reiterate, you CAN write constructors for abstract classes.
You just need to access them from the base class when writing the constructor: Constructor at Base Class: public BaseClassAbstract() { this.PropertyName = "Default Name"; } Constructor at Derived / Concrete / Sub-Class: public SubClass() : base() { } The point here is that the instance variable drawn from the base class may bury your base field name. Setting the current instantiated object value using "this." will allow you to correctly form your object with respect to the current instance and required permission levels (access modifiers) where you are instantiating it. A: In C# 5 and earlier, to give auto implemented properties an initial value, you have to do it in a constructor. Since C# 6.0, you can specify initial value in-line. The syntax is: public int X { get; set; } = x; // C# 6 or higher DefaultValueAttribute is intended to be used by the VS designer (or any other consumer) to specify a default value, not an initial value. (Even if in designed object, initial value is the default value). At compile time DefaultValueAttribute will not impact the generated IL and it will not be read to initialize the property to that value (see DefaultValue attribute is not working with my Auto Property). Example of attributes that impact the IL are ThreadStaticAttribute, CallerMemberNameAttribute, ... 
A: In addition to the answer already accepted, for the scenario when you want to define a default property as a function of other properties you can use expression body notation on C#6.0 (and higher) for even more elegant and concise constructs like: public class Person{ public string FullName => $"{First} {Last}"; // expression body notation public string First { get; set; } = "First"; public string Last { get; set; } = "Last"; } You can use the above in the following fashion var p = new Person(); p.FullName; // First Last p.First = "Jon"; p.Last = "Snow"; p.FullName; // Jon Snow In order to be able to use the above "=>" notation, the property must be read only, and you do not use the get accessor keyword. Details on MSDN A: When you inline an initial value for a variable it will be done implicitly in the constructor anyway. I would argue that this syntax was best practice in C# up to 5: class Person { public Person() { //do anything before variable assignment //assign initial values Name = "Default Name"; //do anything after variable assignment } public string Name { get; set; } } As this gives you clear control of the order values are assigned. As of C#6 there is a new way: public string Name { get; set; } = "Default Name"; A: This is old now, and my position has changed. I'm leaving the original answer for posterity only. Personally, I don't see the point of making it a property at all if you're not going to do anything at all beyond the auto-property. Just leave it as a field. The encapsulation benefit for these item are just red herrings, because there's nothing behind them to encapsulate. If you ever need to change the underlying implementation you're still free to refactor them as properties without breaking any dependent code. Hmm... 
maybe this will be the subject of its own question later A: public class ClassName { public int PropName { get; set; } public ClassName() { PropName = 0; // Default value } } A: In C# 6 and above you can simply use the syntax: public object Foo { get; set; } = bar; Note that to have a readonly property simply omit the set, as so: public object Foo { get; } = bar; You can also assign readonly auto-properties from the constructor. Prior to this I responded as below. I'd avoid adding a default to the constructor; leave that for dynamic assignments and avoid having two points at which the variable is assigned (i.e. the type default and in the constructor). Typically I'd simply write a normal property in such cases. One other option is to do what ASP.Net does and define defaults via an attribute: http://msdn.microsoft.com/en-us/library/system.componentmodel.defaultvalueattribute.aspx A: My solution is to use a custom attribute that provides default value property initialization by constant or using property type initializer. [AttributeUsage(AttributeTargets.Property, AllowMultiple = false, Inherited = true)] public class InstanceAttribute : Attribute { public bool IsConstructorCall { get; private set; } public object[] Values { get; private set; } public InstanceAttribute() : this(true) { } public InstanceAttribute(object value) : this(false, value) { } public InstanceAttribute(bool isConstructorCall, params object[] values) { IsConstructorCall = isConstructorCall; Values = values ??
new object[0]; } } To use this attribute it's necessary to inherit a class from a special base class-initializer or use a static helper method: public abstract class DefaultValueInitializer { protected DefaultValueInitializer() { InitializeDefaultValues(this); } public static void InitializeDefaultValues(object obj) { var props = from prop in obj.GetType().GetProperties() let attrs = prop.GetCustomAttributes(typeof(InstanceAttribute), false) where attrs.Any() select new { Property = prop, Attr = ((InstanceAttribute)attrs.First()) }; foreach (var pair in props) { object value = !pair.Attr.IsConstructorCall && pair.Attr.Values.Length > 0 ? pair.Attr.Values[0] : Activator.CreateInstance(pair.Property.PropertyType, pair.Attr.Values); pair.Property.SetValue(obj, value, null); } } } Usage example: public class Simple : DefaultValueInitializer { [Instance("StringValue")] public string StringValue { get; set; } [Instance] public List<string> Items { get; set; } [Instance(true, 3,4)] public Point Point { get; set; } } public static void Main(string[] args) { var obj = new Simple { Items = {"Item1"} }; Console.WriteLine(obj.Items[0]); Console.WriteLine(obj.Point); Console.WriteLine(obj.StringValue); } Output: Item1 (X=3,Y=4) StringValue A: a little complete sample: using System.ComponentModel; private bool bShowGroup; [Description("Show the group table"), Category("Sea"), DefaultValue(true)] public bool ShowGroup { get { return bShowGroup; } set { bShowGroup = value; } } A: class Person { /// Gets/sets a value indicating whether auto /// save of review layer is enabled or not [System.ComponentModel.DefaultValue(true)] public bool AutoSaveReviewLayer { get; set; } } A: I think this would do it for ya, giving SomeFlag a default of false.
private bool _SomeFlag; private bool _SomeFlagSet = false; public bool SomeFlag { get { if (!_SomeFlagSet) return false; // default until explicitly set return _SomeFlag; } set { _SomeFlag = value; _SomeFlagSet = true; } } A: I know this is an old question, but it came up when I was looking for how to have a default value that gets inherited with the option to override, and I came up with //base class public class Car { public virtual string FuelUnits { get { return "gasoline in gallons"; } protected set { } } } //derived public class Tesla : Car { public override string FuelUnits => "ampere hour"; }
{ "language": "en", "url": "https://stackoverflow.com/questions/40730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2275" }
Q: Disable WPF label accelerator key (text underscore is missing) I am setting the .Content value of a Label to a string that contains underscores; the first underscore is being interpreted as an accelerator key. Without changing the underlying string (by replacing all _ with __), is there a way to disable the accelerator for Labels? A: If you use a TextBlock as the Content of the Label, its Text will not absorb underscores. A: You could override the RecognizesAccessKey property of the ContentPresenter that is in the default template for the label. For example: <Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"> <Grid> <Grid.Resources> <Style x:Key="{x:Type Label}" BasedOn="{StaticResource {x:Type Label}}" TargetType="Label"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="Label"> <Border> <ContentPresenter HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" VerticalAlignment="{TemplateBinding VerticalContentAlignment}" RecognizesAccessKey="False" /> </Border> </ControlTemplate> </Setter.Value> </Setter> </Style> </Grid.Resources> <Label>_This is a test</Label> </Grid> </Page> A: Use a <TextBlock> ... </TextBlock> instead of <Label> ... </Label> to print the exact text with underscores. A: Why not like this? public partial class LabelEx : Label { public bool PreventAccessKey { get; set; } = true; public LabelEx() { InitializeComponent(); } public new object Content { get { var content = base.Content; if (content == null || !(content is string)) return content; return PreventAccessKey ? (content as string).Replace("__", "_") : content; } set { if (value == null || !(value is string)) { base.Content = value; return; } base.Content = PreventAccessKey ? (value as string).Replace("_", "__") : value; } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/40733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64" }
Q: Brownfield vs Greenfield development? This is not a question with a precise answer (strictly speaking the answer would be best captured by a poll, but that functionality is not available), but I am genuinely interested in the answer, so I will ask it anyway. Over the course of your career, how much time have you spent on greenfield development compared with brownfield? Over the last 10 years I would estimate that I have spent 20% on greenfield and 80% on brownfield. Is this typical? A: I think it's typical for professionals who deal with customers to spend more time in brownfield development. The reason is that customers typically aren't willing to throw out their existing software to adopt the "latest and greatest" (green) software. Developers in research or academics, however, may be more likely to do greenfield development. Start-ups as well. A: I think that your ratio 20:80 is representative of many/most developers. As to new development: if you are building software incrementally (Scrum, XP, etc) then one could argue that you spend almost all of your time in brownfield development. Except for the initial iteration/exploratory work, prototyping, even when you are building something new, you are already working on an established code base, refactoring and extending. So how much greenfield development is actually green? A: Often the problem doesn't just boil down to brownfield vs greenfield. In some cases there is a valid opportunity for a hybrid greenfield/brownfield approach. I have written an article called "Classic software mistakes: To Greenfield or Refactor Legacy Code" which discusses this exact subject and outlines a range of possible combinations then evaluates the consequences of each. 
http://stepaheadsoftware.blogspot.com.au/2012/09/greenfield-or-refactor-legacy-code-base.html What may surprise some people is that a non technical attribute, company size, will be a big determinant in the choice of strategy and the likelihood of success of that strategy. A: Over the past decade or so, I've always worked on software that was used as the center of my company's business. (Both SaaS and a software product.) And while I've always come into the with an existing system (so brownfield), we've usually put out a ground-up redesign/rewrite (so greenfield.) So, to break to down: * *about 60/40 brown/green for the big projects, in number *about 20/80 brown/green for the big projects, in time spent on them *and nearly 0/100 brown green for little side projects So, that is seems to be the opposite of you. It is the nature of the companies I've sought out, and hence the projects. My software is our company's main product, and that means I work on the same code base for years, usually after having created it from scratch myself/ourselves. And I like it that way.
{ "language": "en", "url": "https://stackoverflow.com/questions/40735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Generate disk usage graphs/charts with CLI only tools in Linux In this question someone asked for ways to display disk usage in Linux. I'd like to take this one step further down the cli-path... how about a shell script that takes the output from something like a reasonable answer to the previous question and generates a graph/chart from it (output in a png file or something)? This may be a bit too much code to ask for in a regular question, but my guess is that someone already has a oneliner lying around somewhere... A: I would recommend munin. It is designed for exactly this sort of thing - graphing CPU usage, memory usage, disc-usage and such. It's sort of like MRTG (but MRTG is primarily aimed at graphing a router's traffic; graphing anything but bandwidth with it is very hackish). Writing Munin plugins is very easy (it was one of the project's goals). They can be written in almost anything (shell script, perl/python/ruby/etc, C, anything that can be executed and produce an output). The plugin output format is basically disc1usage.value 1234. And debugging the plugins is very easy (compared to MRTG). I've set it up on my laptop to monitor disc-usage, bandwidth usage (by pulling data from my ISP's control panel; it graphs my two download "bins", uploads and newsgroup usage), load average and number of processes. Once I got it installed (currently slightly difficult on OS X, but it's trivial on Linux/FreeBSD), I had written a plugin in a few minutes, and it worked first time! I would describe how it's set up, but the munin site will do that far better than I could! There's an example installation here. Some alternatives are nagios and cacti. You could also write something similar using rrdtool. Munin, MRTG and Cacti are basically all far-nicer-to-use systems based around this graphing tool. If you want something really, really simple, you could do:
import os
import time

while True:
    # os.system() only returns the exit status; os.popen() captures the output.
    # Plain "df" (without -h) keeps the value numeric; NR==2 skips the header row.
    disc_usage = os.popen("df / | awk 'NR==2 {print $3}'").read().strip()
    log = open("mylog.txt", "a")  # append, don't truncate
    log.write(disc_usage + "\n")
    log.close()
    time.sleep(60 * 5)

Then:

f = open("mylog.txt")
lines = f.readlines()
f.close()

# Convert each line to a float number
lines = [float(cur_line) for cur_line in lines]

# Get the biggest and smallest
biggest = max(lines)
smallest = min(lines)
span = (biggest - smallest) or 1  # avoid dividing by zero when all values match

for cur_line in lines:
    normalised = (cur_line - smallest) / span      # normalise value between 0 and 1
    line_length = 1 + int(round(normalised * 27))  # make a graph between 1 and 28 characters wide
    print("#" * line_length)

That'll make a simple ascii graph of the disc usage. I really really don't recommend you use something like this. Why? The log file will get bigger, and bigger, and bigger. The graph will get progressively slower to graph. RRDTool uses a rolling-database system to store its data, so the file will never get bigger than about 50-100KB, and it's consistently quick to graph as the file is a fixed length. In short: if you want something to easily graph almost anything, use munin. If you want something smaller and self-contained, write something with RRDTool. A: We rolled our own at work using RRDtool (the data storage back end to tools like MRTG). We run a perl script every 5 minutes that takes a du per partition and stuffs it into an RRD database and then uses RRD's graph function to build graphs. It takes a while to figure out how to set up the .rrd files (for instance, I had to re-learn RPN to do some of the calculations I wanted to do) but if you have some data you want to graph over time, RRD tool's a good bet. A: If some ASCII chars are "graphical" enough for you, I can recommend ncdu. It is a very nice interactive CLI tool, which helps me a lot to step down large directories without doing cd bigdir ; du -hs over and over again. A: I guess there are a couple of options: * *For a pure CLI solution, use something like gnuplot.
See here for example usage. I haven't used gnuplot since my student days :-) *Not really a pure CLI solution, but download something like JFreeChart and write a simple Java app that reads stdin and creates your chart. Hope this helps.
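In the same quick-and-dirty spirit as the Python log-file graph above, the whole ASCII chart can be done as a single awk invocation (a throwaway sketch; mylog.txt and the sample numbers are just stand-ins for whatever you log):

```shell
# Sample data standing in for the mylog.txt produced above
printf '1\n2\n4\n' > mylog.txt

# Pass 1 (NR==FNR) finds the maximum; pass 2 scales each value to up to 28 '#'s
awk 'NR==FNR { if ($1 > max) max = $1; next }
     { n = int($1 / max * 28)
       bar = ""
       for (i = 0; i < n; i++) bar = bar "#"
       print bar }' mylog.txt mylog.txt
```

If you'd rather have the PNG the question asks for, the same log file can be fed straight to gnuplot, e.g. `gnuplot -e "set terminal png; set output 'usage.png'; plot 'mylog.txt' with lines"`.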
{ "language": "en", "url": "https://stackoverflow.com/questions/40737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How should I cast in VB.NET? Are all of these equal? Under what circumstances should I choose each over the others? * *var.ToString() *CStr(var) *CType(var, String) *DirectCast(var, String) EDIT: Suggestion from NotMyself… * *TryCast(var, String) A: I prefer the following syntax:

Dim number As Integer = 1
Dim str As String = String.TryCast(number)
If str IsNot Nothing Then

Hah you can tell I typically write code in C#. 8) The reason I prefer TryCast is you do not have to mess with the overhead of casting exceptions. Your cast either succeeds or your variable is initialized to null and you deal with that accordingly. A: MSDN seems to indicate that the Cxxx casts for specific types can improve performance in VB .NET because they are converted to inline code. For some reason, it also suggests DirectCast as opposed to CType in certain cases (the documentation states it's when there's an inheritance relationship; I believe this means the sanity of the cast is checked at compile time and optimizations can be applied, whereas CType always uses the VB runtime.) When I'm writing VB .NET code, what I use depends on what I'm doing. If it's prototype code I'm going to throw away, I use whatever I happen to type. If it's code I'm serious about, I try to use a Cxxx cast. If one doesn't exist, I use DirectCast if I have a reasonable belief that there's an inheritance relationship. If it's a situation where I have no idea if the cast should succeed (user input -> integers, for example), then I use TryCast so as to do something more friendly than toss an exception at the user. One thing I can't shake is I tend to use ToString instead of CStr, but supposedly CStr is faster. A: User Konrad Rudolph advocates for DirectCast() in Stack Overflow question "Hidden Features of VB.NET". A: Those are all slightly different, and generally have an acceptable usage. * *var.ToString() is going to give you the string representation of an object, regardless of what type it is.
Use this if var is not a string already. *CStr(var) is the VB string cast operator. I'm not a VB guy, so I would suggest avoiding it, but it's not really going to hurt anything. I think it is basically the same as CType. *CType(var, String) will convert the given type into a string, using any provided conversion operators. *DirectCast(var, String) is used to up-cast an object into a string. If you know that an object variable is, in fact, a string, use this. This is the same as (string)var in C#. *TryCast (as mentioned by @NotMyself) is like DirectCast, but it will return Nothing if the variable can't be converted into a string, rather than throwing an exception. This is the same as var as string in C#. The TryCast page on MSDN has a good comparison, too. A: * *CStr() is compiled inline for better performance. *CType allows for casts between types if a conversion operator is defined. *ToString() converts between a base type and String; it throws an exception if the conversion is not possible. *TryParse() converts from String to a base type if possible, otherwise it returns False. *DirectCast is used if the types are related via inheritance or share a common interface; it will throw an exception if the cast is not possible, whereas TryCast will return Nothing in that instance. A: According to the certification exam you should use Convert.ToXXX() whenever possible for simple conversions because it optimizes performance better than CXXX conversions. A: At one time, I remember seeing the MSDN library state to use CStr() because it was faster. I do not know if this is true though.
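To make the distinctions above concrete, here is an illustrative VB.NET sketch (the variables are invented for the example, not taken from any answer):

```vbnet
' Illustrative only - shows how the casts behave, not a recommendation
Dim o As Object = "hello"

Dim s1 As String = CStr(o)                ' VB cast operator, compiled inline
Dim s2 As String = CType(o, String)       ' uses conversion operators when defined
Dim s3 As String = DirectCast(o, String)  ' runtime type must already be a String
Dim s4 As String = TryCast(o, String)     ' Nothing instead of an exception on failure

Dim n As Object = 42
Dim s5 As String = n.ToString()           ' works for any object: "42"
Dim s6 As String = CStr(n)                ' converts: "42"
' DirectCast(n, String) would throw InvalidCastException - an Integer is not a String,
' which is the key difference: CStr/CType convert, DirectCast only casts.
```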
{ "language": "en", "url": "https://stackoverflow.com/questions/40764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "160" }
Q: Path to Program-Files on remote computer How do I determine the (local-) path for the "Program Files" directory on a remote computer? There does not appear to be any version of SHGetFolderPath (or related function) that takes the name of a remote computer as a parameter. I guess I could try to query HKLM\Software\Microsoft\Windows\CurrentVersion\ProgramFilesDir using remote-registry, but I was hoping there would be a "documented" way of doing it. A: Many of the standard paths require a user to be logged in, especially the SH* functions as those are provided by the "shell", that is, Explorer. I suspect the only way you're going to get the right path is through the registry like you already mentioned. A: This is what I ended up doing: (pszComputer must be of the form "\\name". nPath is the size of pszPath (in TCHARs)) DWORD GetProgramFilesDir(PCTSTR pszComputer, PTSTR pszPath, DWORD& nPath) { DWORD n; HKEY hHKLM; if ((n = RegConnectRegistry(pszComputer, HKEY_LOCAL_MACHINE, &hHKLM)) == ERROR_SUCCESS) { HKEY hWin; if ((n = RegOpenKeyEx(hHKLM, _T("Software\\Microsoft\\Windows\\CurrentVersion"), 0, KEY_READ, &hWin)) == ERROR_SUCCESS) { DWORD nType, cbPath = nPath * sizeof(TCHAR); n = RegQueryValueEx(hWin, _T("ProgramFilesDir"), NULL, &nType, reinterpret_cast<PBYTE>(pszPath), &cbPath); nPath = cbPath / sizeof(TCHAR); RegCloseKey(hWin); } RegCloseKey(hHKLM); } return n; }
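As a side note, for ad-hoc lookups the same value can be read remotely with the reg.exe command-line tool, which accepts a \\Machine\ prefix for HKLM keys (assuming the Remote Registry service is running on the target; the machine name below is a placeholder):

```bat
reg query "\\REMOTE-PC\HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v ProgramFilesDir
```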
{ "language": "en", "url": "https://stackoverflow.com/questions/40769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Ticket Tracking Software w/ Good Email Integration and Decent Navigation? I am looking for a simple system to manage inbound emails from a support mailbox for a group with about 3 support people. I've looked at OTRS which seems to have the features that we need. Unfortunately, so far the UI still looks like a confusing mess. Are there any good FOSS tools that would meet this need? I've heard murmurings that something called fooogzeeebugzo might have similar features, but it seems quite expensive for such simple needs. A: Did you try IssueBurner? It was designed for this purpose. You can forward your mailbox (e.g. [email protected]) to a IssueBurner group and you can track the inbound mails until they are closed. Here is a link to their video: http://issueburner.com/a/video A: I have to agree, Fogbugz is probably the best out there. I have used both the hosted version and the purchased version which I hosted. It is top-notch. A: My company recently started using Mojo Helpdesk: www.mojohelpdesk.com. It's a hosted service, not FOSS, but it's pretty cheap and the interface is slick. A: BugTracker.NET is free, open source, and widely used. It has integration with incoming email. In other words, it will accept an incoming email and turn it into a support ticket. A: TicketDesk- C# issue tracking system and support system http://www.codeplex.com/TicketDesk TicketDesk is efficient and designed to do only one thing, facilitate communications between help desk staff and end users. The overriding design goal is to be as simple and frictionless for both users and help desk staff as is possible. TicketDesk is an asp.net web application written in C# targeting the .net 3.5 framework. It includes a simple database with support for SQL 2005 Express or SQL Server 2005. It can leverage SQL server for membership and role based security or integrate with windows authentication and Active Directory groups. A: RT - Request Tracker handles inbound mail. 
I'm working to add inbound mail support to TicketDesk, but it might be a little while before that makes it into a release. A: FogBugz is great as others have mentioned. I use it for my bug/feature tracking system, but I like to separate out my support ticketing system for my support staff to use. Another tool with great email integration is HelpSpot; they have hosted and non-hosted versions for purchase, depending on your budget. It has a lot of great features that make the price worth it. Take the tour and see for yourself. A: We use FogBugz...er, "fooogzeeebugzo"...and while it may be a bit expensive for your needs, it works very well. A: Bugzilla is more of an issue tracker than a request tracker, but it can be configured to handle email-based status tracking. That said, I think Steven has it - RT is the standard recommendation for this that I've seen. A: The on-demand version of FogBugz is a pretty cheap option for just a few people, and works really well. We did that for a while before moving it in-house. A: I've used FogBugz for over 12 months now, and more and more I'm finding one of the most valuable features is the built-in email support. I've got an on-demand account and I'm finding more and more that I don't even check my email in the morning, as all my business correspondence is put straight into FogBugz. A: I realize that FOSS is your primary desire and I definitely agree with this. If I were to limit myself to FOSS, I would go with RT 3.8, http://blog.bestpractical.com/2008/07/today-were-rele.html#screenshots However, if you are willing to entertain commercial solutions and are looking for a Helpdesk-"ish" application, I just deployed WebHelpDesk with great success at my current employment, where I am the primary sysadmin and Corporate IT person. They just released a new version, 9.1.1, and it is very well done.
The email integration is superb and beyond what I have seen with most other FOSS and commercial issue/bug trackers, given that it is built to run a Helpdesk and not be a software or source code issue tracker. It runs on Windows and *nix; they have a great demo, and you can obtain a 30-day trial installer. I have become a big fan of this software and think it has a reasonable price of $250/year/technician (support person). If you want more info on how we deployed it, please email me and I'd be happy to discuss it at length. I have no connection with them other than being a very happy customer. A: Thanks for all the tips. For the moment, I am looking heavily at eTicket as it was trivial to set up and seems to be developing nicely at the moment. I may look at RT as well, though. A: I'll second the suggestion for RT. See my post here for more thoughts and details on our setup. A: From my personal experience I can recommend using Bridgetrak. It works pretty smoothly in our environment and includes rich helpdesk functionality for powerful ticket tracking. I have a lot of experience using these tools - feel free to ask any questions! A: As most of the answers are a little bit outdated, I would definitely recommend OsTicket (http://osticket.com/), a great open-source project that offers lots of customization and a user-friendly interface. I have been using it for the last two years and I would choose OsTicket over OTRS or RT.
{ "language": "en", "url": "https://stackoverflow.com/questions/40773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to change Build Numbering format in Visual Studio I've inherited a .NET application that automatically updates its version number with each release. The problem, as I see it, is the length and number of digits in the version number. An example of the current version number format is 3.5.3167.26981, which is a mouthful for the users to say when they are reporting bugs. What I would like is something more like this: 3.5 (build 3198). I would prefer to manually update the major and minor versions, but have the build number update automatically. Even better, I don't want the build number to increment unless I am compiling in RELEASE mode. Anyone know if there is a way to do this -- and how? A: In one of the project files, probably AssemblyInfo.cs, the assembly version attribute is set to [assembly: AssemblyVersion("3.5.*")] or something similar. The * basically means it lets Visual Studio automatically set the build and revision number. You can change this to a hard-coded value in the format <major version>.<minor version>.<build number>.<revision>. You are allowed to use any or all of the precision. For instance, 3.5 or 3.5.3167 or 3.5.3167.10000. You can also use compiler conditions to change the versioning based on whether you're doing a debug build or a release build. A: At a previous company we did something like this by writing an Ant task to get the current Subversion changeset string, which we used as the build number, appended after the major, minor, and tertiary numbers. You could do something like this with Visual Studio as well. A: Use a '*' wildcard in the AssemblyVersion attribute. Documentation is here. Note that if the application is built from multiple assemblies, the version you care most about is the one for the .exe.
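A sketch of the "only auto-number release builds" idea using the compiler conditions mentioned above (the version numbers are examples):

```csharp
// AssemblyInfo.cs - illustrative fragment, not the only way to do this
using System.Reflection;

#if DEBUG
[assembly: AssemblyVersion("3.5.0.0")]   // fixed version for debug builds
#else
[assembly: AssemblyVersion("3.5.*")]     // compiler fills in build and revision
#endif
```

One caveat: the wildcard build number is not a true incrementing counter - the compiler derives it from the build date (days since January 1, 2000), so it increases between days rather than per compile.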
{ "language": "en", "url": "https://stackoverflow.com/questions/40779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I increment a value in a textfile using the regular Windows command-line? I'd like to keep a "compile-counter" for one of my projects. I figured a quick and dirty way to do this would be to keep a text file with a plain number in it, and then simply call upon a small script to increment this each time I compile. How would I go about doing this using the regular Windows command line? I don't really feel like installing some extra shell to do this, but if you have any other super simple suggestions that would accomplish just this, they're naturally appreciated as well. A: It would be a new shell (but I think it is worth it), but from PowerShell it would be [int](get-content counter.txt) + 1 | out-file counter.txt A: I'd suggest just appending the current datetime of the build to a log file. date >> builddates.txt That way you get a build count via the # of lines, and you may also get some interesting statistics if you can be bothered analysing the dates and times later on. The extra size & time to count the number of lines in the file will be insignificant unless you are doing seriously fast project iterations! A: You can try a plain old batch file.

@echo off
for /f "delims=" %%i in (counter.txt) do set /A temp_counter=%%i+1
>counter.txt echo %temp_counter%

(Putting the redirection before the echo avoids writing a trailing space into the file.) This assumes count.bat and counter.txt are located in the same directory. A: If you don't mind running a Microsoft Windows Script Host script, then this JScript will work OK. Just save it as a .js file and run it from the command line with "wscript c:/script.js". var fso, f, fileCount; var ForReading = 1, ForWriting = 2; var filename = "c:\\testfile.txt"; fso = new ActiveXObject("Scripting.FileSystemObject"); //create file if it's not found if (!
fso.FileExists(filename)) { f = fso.OpenTextFile(filename, ForWriting, true); f.Write("0"); f.Close(); } f = fso.OpenTextFile(filename, ForReading); fileCount = parseInt(f.ReadAll()); //make sure the input is a whole number if (isNaN(fileCount)) { fileCount = 0; } fileCount = fileCount + 1; f = fso.OpenTextFile(filename, ForWriting, true); f.Write(fileCount); f.Close();
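For reference, a slightly more defensive variant of the batch-file answer above, which also creates the counter file on first run (an untested sketch; filenames as in the answer):

```bat
@echo off
rem Create the counter on first use (redirection first, so no trailing space)
if not exist counter.txt >counter.txt echo 0
rem Read the first line of the file into an environment variable
set /p count=<counter.txt
set /a count+=1
>counter.txt echo %count%
```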
{ "language": "en", "url": "https://stackoverflow.com/questions/40787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Execute a large SQL script (with GO commands) I need to execute a large set of SQL statements (creating a bunch of tables, views and stored procedures) from within a C# program. These statements need to be separated by GO statements, but SqlCommand.ExecuteNonQuery() does not like GO statements. My solution, which I suppose I'll post for reference, was to split the SQL string on GO lines, and execute each batch separately. Is there an easier/better way? A: I looked at this a few times and in the end decided to go with the EF implementation, a bit modified for SqlConnection (BatchTerminator is the "GO" string): private const string BatchTerminator = "GO"; public static void ExecuteSqlScript(this SqlConnection sqlConnection, string sqlBatch) { // Handle backslash utility statement (see http://technet.microsoft.com/en-us/library/dd207007.aspx) sqlBatch = Regex.Replace(sqlBatch, @"\\(\r\n|\r|\n)", string.Empty); // Handle batch splitting utility statement (see http://technet.microsoft.com/en-us/library/ms188037.aspx) var batches = Regex.Split( sqlBatch, string.Format(CultureInfo.InvariantCulture, @"^\s*({0}[ \t]+[0-9]+|{0})(?:\s+|$)", BatchTerminator), RegexOptions.IgnoreCase | RegexOptions.Multiline); for (int i = 0; i < batches.Length; ++i) { // Skip batches that merely contain the batch terminator if (batches[i].StartsWith(BatchTerminator, StringComparison.OrdinalIgnoreCase) || (i == batches.Length - 1 && string.IsNullOrWhiteSpace(batches[i]))) { continue; } // Include batch terminator if the next element is a batch terminator if (batches.Length > i + 1 && batches[i + 1].StartsWith(BatchTerminator, StringComparison.OrdinalIgnoreCase)) { int repeatCount = 1; // Handle count parameter on the batch splitting utility statement if (!string.Equals(batches[i + 1], BatchTerminator, StringComparison.OrdinalIgnoreCase)) { repeatCount = int.Parse(Regex.Match(batches[i + 1], @"([0-9]+)").Value, CultureInfo.InvariantCulture); } for (int j = 0; j < repeatCount; ++j) { var command = sqlConnection.CreateCommand(); command.CommandText = batches[i]; command.ExecuteNonQuery(); } }
else { var command = sqlConnection.CreateCommand(); command.CommandText = batches[i]; command.ExecuteNonQuery(); } } } A: The "GO" batch separator keyword is actually used by SQL Management Studio itself, so that it knows where to terminate the batches it is sending to the server, and it is not passed to SQL server. You can even change the keyword in Management Studio, should you so desire. A: Based on Blorgbeard's solution. foreach (var sqlBatch in commandText.Split(new[] { "GO" }, StringSplitOptions.RemoveEmptyEntries)) { sqlCommand.CommandText = sqlBatch; sqlCommand.ExecuteNonQuery(); } A: This is what I knocked together to solve my immediate problem. private void ExecuteBatchNonQuery(string sql, SqlConnection conn) { string sqlBatch = string.Empty; SqlCommand cmd = new SqlCommand(string.Empty, conn); conn.Open(); sql += "\nGO"; // make sure last batch is executed. try { foreach (string line in sql.Split(new string[2] { "\n", "\r" }, StringSplitOptions.RemoveEmptyEntries)) { if (line.ToUpperInvariant().Trim() == "GO") { cmd.CommandText = sqlBatch; cmd.ExecuteNonQuery(); sqlBatch = string.Empty; } else { sqlBatch += line + "\n"; } } } finally { conn.Close(); } } It requires GO commands to be on their own line, and will not detect block-comments, so this sort of thing will get split, and cause an error: ExecuteBatchNonQuery(@" /* GO */", conn); A: If you don't want to install SMO objects you can use gplex tool (see this answer) A: If you don't want to use SMO, for example because you need to be cross-platform, you can also use the ScriptSplitter class from SubText. 
Here's the implementation in C# & VB.NET Usage: string strSQL = @" SELECT * FROM INFORMATION_SCHEMA.columns GO SELECT * FROM INFORMATION_SCHEMA.views "; foreach(string Script in new Subtext.Scripting.ScriptSplitter(strSQL )) { Console.WriteLine(Script); } If you have problems with multiline c-style comments, remove the comments with regex: static string RemoveCstyleComments(string strInput) { string strPattern = @"/[*][\w\d\s]+[*]/"; //strPattern = @"/\*.*?\*/"; // Doesn't work //strPattern = "/\\*.*?\\*/"; // Doesn't work //strPattern = @"/\*([^*]|[\r\n]|(\*+([^*/]|[\r\n])))*\*+/ "; // Doesn't work //strPattern = @"/\*([^*]|[\r\n]|(\*+([^*/]|[\r\n])))*\*+/ "; // Doesn't work // http://stackoverflow.com/questions/462843/improving-fixing-a-regex-for-c-style-block-comments strPattern = @"/\*(?>(?:(?>[^*]+)|\*(?!/))*)\*/"; // Works ! string strOutput = System.Text.RegularExpressions.Regex.Replace(strInput, strPattern, string.Empty, System.Text.RegularExpressions.RegexOptions.Multiline); Console.WriteLine(strOutput); return strOutput; } // End Function RemoveCstyleComments Removing single-line comments is here: https://stackoverflow.com/questions/9842991/regex-to-remove-single-line-sql-comments A: I also faced the same problem, and I could not find any other way but splitting the single SQL operation in separate files, then executing all of them in sequence. Obviously the problem is not with lists of DML commands, they can be executed without GO in between; different story with DDL (create, alter, drop...) A: If you don't want to go the SMO route you can search and replace "GO" for ";" and the query as you would. Note that soly the the last result set will be returned. A: I accomplished this today by loading my SQL from a text file into one string. I then used the string Split function to separate the string into individual commands which were then sent to the server individually. 
Simples :) Just realised that you need to split on \nGO just in case the letters GO appear in any of your table names etc. Guess I was lucky there! A: If you don't want to use SMO (which is better than the solution below, but I want to give an alternative...) you can split your query with this function. It is: * *Comment proof (example --GO or /* GO */) *Only recognizes GO on a new line, just as in SSMS (for example /* test */ GO works, and select 1 as go is not treated as a separator) *String proof (example print 'no go ') private List<string> SplitScriptGo(string script) { var result = new List<string>(); int pos1 = 0; int pos2 = 0; bool whiteSpace = true; bool emptyLine = true; bool inStr = false; bool inComment1 = false; bool inComment2 = false; while (true) { while (pos2 < script.Length && Char.IsWhiteSpace(script[pos2])) { if (script[pos2] == '\r' || script[pos2] == '\n') { emptyLine = true; inComment1 = false; } pos2++; } if (pos2 == script.Length) break; bool min2 = (pos2 + 1) < script.Length; bool min3 = (pos2 + 2) < script.Length; if (!inStr && !inComment2 && min2 && script.Substring(pos2, 2) == "--") inComment1 = true; if (!inStr && !inComment1 && min2 && script.Substring(pos2, 2) == "/*") inComment2 = true; if (!inComment1 && !inComment2 && script[pos2] == '\'') inStr = !inStr; if (!inStr && !inComment1 && !inComment2 && emptyLine && (min2 && script.Substring(pos2, 2).ToLower() == "go") && (!min3 || char.IsWhiteSpace(script[pos2 + 2]) || script.Substring(pos2 + 2, 2) == "--" || script.Substring(pos2 + 2, 2) == "/*")) { if (!whiteSpace) result.Add(script.Substring(pos1, pos2 - pos1)); whiteSpace = true; emptyLine = false; pos2 += 2; pos1 = pos2; } else { pos2++; whiteSpace = false; if (!inComment2) emptyLine = false; } if (!inStr && inComment2 && pos2 > 1 && script.Substring(pos2 - 2, 2) == "*/") inComment2 = false; } if (!whiteSpace) result.Add(script.Substring(pos1)); return result; } A: Use SQL Server Management Objects (SMO) which understands GO separators.
See my blog post here: http://weblogs.asp.net/jongalloway/Handling-_2200_GO_2200_-Separators-in-SQL-Scripts-2D00-the-easy-way Sample code: public static void Main() { string scriptDirectory = "c:\\temp\\sqltest\\"; string sqlConnectionString = "Integrated Security=SSPI;" + "Persist Security Info=True;Initial Catalog=Northwind;Data Source=(local)"; DirectoryInfo di = new DirectoryInfo(scriptDirectory); FileInfo[] rgFiles = di.GetFiles("*.sql"); foreach (FileInfo fi in rgFiles) { FileInfo fileInfo = new FileInfo(fi.FullName); string script = fileInfo.OpenText().ReadToEnd(); using (SqlConnection connection = new SqlConnection(sqlConnectionString)) { Server server = new Server(new ServerConnection(connection)); server.ConnectionContext.ExecuteNonQuery(script); } } } If that won't work for you, see Phil Haack's library which handles that: http://haacked.com/archive/2007/11/04/a-library-for-executing-sql-scripts-with-go-separators-and.aspx A: You can use SQL Management Objects to perform this. These are the same objects that Management Studio uses to execute queries. I believe Server.ConnectionContext.ExecuteNonQuery() will perform what you need. 
A: Use the following method to split the string and execute batch by batch: using System; using System.IO; using System.Text.RegularExpressions; namespace RegExTrial { class Program { static void Main(string[] args) { string sql = String.Empty; string path=@"D:\temp\sample.sql"; using (StreamReader reader = new StreamReader(path)) { sql = reader.ReadToEnd(); } //Select any GO (ignore case) that starts with at least //one white space such as tab, space, new line, vertical tab etc string pattern="[\\s](?i)GO(?-i)"; Regex matcher = new Regex(pattern, RegexOptions.Compiled); int start = 0; int end = 0; Match batch=matcher.Match(sql); while (batch.Success) { end = batch.Index; string batchQuery = sql.Substring(start, end - start).Trim(); //execute the batch ExecuteBatch(batchQuery); start = end + batch.Length; batch = matcher.Match(sql,start); } } private static void ExecuteBatch(string command) { //execute your query here } } } A: To avoid third parties, regexes and memory overheads, and to work fast with large scripts, I created my own stream-based parser. It: * *checks syntax beforehand *can recognize comments with -- or /**/ -- some commented text /* drop table Users; GO */ *can recognize string literals with ' or " set @s = 'create table foo(...); GO create index ...'; *preserves LF and CR formatting *preserves comment blocks in object bodies (stored procedures, views etc.) *handles other constructions such as gO -- commented text How to use: try { using (SqlConnection connection = new SqlConnection("Integrated Security=SSPI;Persist Security Info=True;Initial Catalog=DATABASE-NAME;Data Source=SERVER-NAME")) { connection.Open(); int rowsAffected = SqlStatementReader.ExecuteSqlFile( "C:\\target-sql-script.sql", connection, // Don't forget to use the correct file encoding!!!
Encoding.Default, // Indefinitely (sec) 0 ); } } // implement your handlers catch (SqlStatementReader.SqlBadSyntaxException) { } catch (SqlException) { } catch (Exception) { } Stream-based SQL script reader class SqlStatementReader { public class SqlBadSyntaxException : Exception { public SqlBadSyntaxException(string description) : base(description) { } public SqlBadSyntaxException(string description, int line) : base(OnBase(description, line, null)) { } public SqlBadSyntaxException(string description, int line, string filePath) : base(OnBase(description, line, filePath)) { } private static string OnBase(string description, int line, string filePath) { if (filePath == null) return string.Format("Line: {0}. {1}", line, description); else return string.Format("File: {0}\r\nLine: {1}. {2}", filePath, line, description); } } enum SqlScriptChunkTypes { InstructionOrUnquotedIdentifier = 0, BracketIdentifier = 1, QuotIdentifierOrLiteral = 2, DblQuotIdentifierOrLiteral = 3, CommentLine = 4, CommentMultiline = 5, } StreamReader _sr = null; string _filePath = null; int _lineStart = 1; int _lineEnd = 1; bool _isNextChar = false; char _nextChar = '\0'; public SqlStatementReader(StreamReader sr) { if (sr == null) throw new ArgumentNullException("StreamReader can't be null."); if (sr.BaseStream is FileStream) _filePath = ((FileStream)sr.BaseStream).Name; _sr = sr; } public SqlStatementReader(StreamReader sr, string filePath) { if (sr == null) throw new ArgumentNullException("StreamReader can't be null."); _sr = sr; _filePath = filePath; } public int LineStart { get { return _lineStart; } } public int LineEnd { get { return _lineEnd == 1 ? 
_lineEnd : _lineEnd - 1; } } public void LightSyntaxCheck() { while (ReadStatementInternal(true) != null) ; } public string ReadStatement() { for (string s = ReadStatementInternal(false); s != null; s = ReadStatementInternal(false)) { // skip empty for (int i = 0; i < s.Length; i++) { switch (s[i]) { case ' ': continue; case '\t': continue; case '\r': continue; case '\n': continue; default: return s; } } } return null; } string ReadStatementInternal(bool syntaxCheck) { if (_isNextChar == false && _sr.EndOfStream) return null; StringBuilder allLines = new StringBuilder(); StringBuilder line = new StringBuilder(); SqlScriptChunkTypes nextChunk = SqlScriptChunkTypes.InstructionOrUnquotedIdentifier; SqlScriptChunkTypes currentChunk = SqlScriptChunkTypes.InstructionOrUnquotedIdentifier; char ch = '\0'; int lineCounter = 0; int nextLine = 0; int currentLine = 0; bool nextCharHandled = false; bool foundGO; int go = 1; while (ReadChar(out ch)) { if (nextCharHandled == false) { currentChunk = nextChunk; currentLine = nextLine; switch (currentChunk) { case SqlScriptChunkTypes.InstructionOrUnquotedIdentifier: if (ch == '[') { currentChunk = nextChunk = SqlScriptChunkTypes.BracketIdentifier; currentLine = nextLine = lineCounter; } else if (ch == '"') { currentChunk = nextChunk = SqlScriptChunkTypes.DblQuotIdentifierOrLiteral; currentLine = nextLine = lineCounter; } else if (ch == '\'') { currentChunk = nextChunk = SqlScriptChunkTypes.QuotIdentifierOrLiteral; currentLine = nextLine = lineCounter; } else if (ch == '-' && (_isNextChar && _nextChar == '-')) { nextCharHandled = true; currentChunk = nextChunk = SqlScriptChunkTypes.CommentLine; currentLine = nextLine = lineCounter; } else if (ch == '/' && (_isNextChar && _nextChar == '*')) { nextCharHandled = true; currentChunk = nextChunk = SqlScriptChunkTypes.CommentMultiline; currentLine = nextLine = lineCounter; } else if (ch == ']') { throw new SqlBadSyntaxException("Incorrect syntax near ']'.", _lineEnd + lineCounter, 
_filePath); } else if (ch == '*' && (_isNextChar && _nextChar == '/')) { throw new SqlBadSyntaxException("Incorrect syntax near '*'.", _lineEnd + lineCounter, _filePath); } break; case SqlScriptChunkTypes.CommentLine: if (ch == '\r' && (_isNextChar && _nextChar == '\n')) { nextCharHandled = true; currentChunk = nextChunk = SqlScriptChunkTypes.InstructionOrUnquotedIdentifier; currentLine = nextLine = lineCounter; } else if (ch == '\n' || ch == '\r') { currentChunk = nextChunk = SqlScriptChunkTypes.InstructionOrUnquotedIdentifier; currentLine = nextLine = lineCounter; } break; case SqlScriptChunkTypes.CommentMultiline: if (ch == '*' && (_isNextChar && _nextChar == '/')) { nextCharHandled = true; nextChunk = SqlScriptChunkTypes.InstructionOrUnquotedIdentifier; nextLine = lineCounter; } else if (ch == '/' && (_isNextChar && _nextChar == '*')) { throw new SqlBadSyntaxException("Missing end comment mark '*/'.", _lineEnd + currentLine, _filePath); } break; case SqlScriptChunkTypes.BracketIdentifier: if (ch == ']') { nextChunk = SqlScriptChunkTypes.InstructionOrUnquotedIdentifier; nextLine = lineCounter; } break; case SqlScriptChunkTypes.DblQuotIdentifierOrLiteral: if (ch == '"') { if (_isNextChar && _nextChar == '"') { nextCharHandled = true; } else { nextChunk = SqlScriptChunkTypes.InstructionOrUnquotedIdentifier; nextLine = lineCounter; } } break; case SqlScriptChunkTypes.QuotIdentifierOrLiteral: if (ch == '\'') { if (_isNextChar && _nextChar == '\'') { nextCharHandled = true; } else { nextChunk = SqlScriptChunkTypes.InstructionOrUnquotedIdentifier; nextLine = lineCounter; } } break; } } else nextCharHandled = false; foundGO = false; if (currentChunk == SqlScriptChunkTypes.InstructionOrUnquotedIdentifier || go >= 5 || (go == 4 && currentChunk == SqlScriptChunkTypes.CommentLine)) { // go = 0 - break, 1 - begin of the string, 2 - spaces after begin of the string, 3 - G or g, 4 - O or o, 5 - spaces after GO, 6 - line comment after valid GO switch (go) { case 0: if (ch == 
'\r' || ch == '\n') go = 1; break; case 1: if (ch == ' ' || ch == '\t') go = 2; else if (ch == 'G' || ch == 'g') go = 3; else if (ch != '\n' && ch != '\r') go = 0; break; case 2: if (ch == 'G' || ch == 'g') go = 3; else if (ch == '\n' || ch == '\r') go = 1; else if (ch != ' ' && ch != '\t') go = 0; break; case 3: if (ch == 'O' || ch == 'o') go = 4; else if (ch == '\n' || ch == '\r') go = 1; else go = 0; break; case 4: if (ch == '\r' && (_isNextChar && _nextChar == '\n')) go = 5; else if (ch == '\n' || ch == '\r') foundGO = true; else if (ch == ' ' || ch == '\t') go = 5; else if (ch == '-' && (_isNextChar && _nextChar == '-')) go = 6; else go = 0; break; case 5: if (ch == '\r' && (_isNextChar && _nextChar == '\n')) go = 5; else if (ch == '\n' || ch == '\r') foundGO = true; else if (ch == '-' && (_isNextChar && _nextChar == '-')) go = 6; else if (ch != ' ' && ch != '\t') throw new SqlBadSyntaxException("Incorrect syntax was encountered while parsing go.", _lineEnd + lineCounter, _filePath); break; case 6: if (ch == '\r' && (_isNextChar && _nextChar == '\n')) go = 6; else if (ch == '\n' || ch == '\r') foundGO = true; break; default: go = 0; break; } } else go = 0; if (foundGO) { if (ch == '\r' || ch == '\n') { ++lineCounter; } // clear GO string s = line.Append(ch).ToString(); for (int i = 0; i < s.Length; i++) { switch (s[i]) { case ' ': continue; case '\t': continue; case '\r': continue; case '\n': continue; default: _lineStart = _lineEnd; _lineEnd += lineCounter; return allLines.Append(s.Substring(0, i)).ToString(); } } return string.Empty; } // accumulate by string if (ch == '\r' && (_isNextChar == false || _nextChar != '\n')) { ++lineCounter; if (syntaxCheck == false) allLines.Append(line.Append('\r').ToString()); line.Clear(); } else if (ch == '\n') { ++lineCounter; if (syntaxCheck == false) allLines.Append(line.Append('\n').ToString()); line.Clear(); } else { if (syntaxCheck == false) line.Append(ch); } } // this is the end of the stream, return it without GO, 
if GO exists switch (currentChunk) { case SqlScriptChunkTypes.InstructionOrUnquotedIdentifier: case SqlScriptChunkTypes.CommentLine: break; case SqlScriptChunkTypes.CommentMultiline: if (nextChunk != SqlScriptChunkTypes.InstructionOrUnquotedIdentifier) throw new SqlBadSyntaxException("Missing end comment mark '*/'.", _lineEnd + currentLine, _filePath); break; case SqlScriptChunkTypes.BracketIdentifier: if (nextChunk != SqlScriptChunkTypes.InstructionOrUnquotedIdentifier) throw new SqlBadSyntaxException("Unclosed quotation mark [.", _lineEnd + currentLine, _filePath); break; case SqlScriptChunkTypes.DblQuotIdentifierOrLiteral: if (nextChunk != SqlScriptChunkTypes.InstructionOrUnquotedIdentifier) throw new SqlBadSyntaxException("Unclosed quotation mark \".", _lineEnd + currentLine, _filePath); break; case SqlScriptChunkTypes.QuotIdentifierOrLiteral: if (nextChunk != SqlScriptChunkTypes.InstructionOrUnquotedIdentifier) throw new SqlBadSyntaxException("Unclosed quotation mark '.", _lineEnd + currentLine, _filePath); break; } if (go >= 4) { string s = line.ToString(); for (int i = 0; i < s.Length; i++) { switch (s[i]) { case ' ': continue; case '\t': continue; case '\r': continue; case '\n': continue; default: _lineStart = _lineEnd; _lineEnd += lineCounter + 1; return allLines.Append(s.Substring(0, i)).ToString(); } } } _lineStart = _lineEnd; _lineEnd += lineCounter + 1; return allLines.Append(line.ToString()).ToString(); } bool ReadChar(out char ch) { if (_isNextChar) { ch = _nextChar; if (_sr.EndOfStream) _isNextChar = false; else _nextChar = Convert.ToChar(_sr.Read()); return true; } else if (_sr.EndOfStream == false) { ch = Convert.ToChar(_sr.Read()); if (_sr.EndOfStream == false) { _isNextChar = true; _nextChar = Convert.ToChar(_sr.Read()); } return true; } else { ch = '\0'; return false; } } public static int ExecuteSqlFile(string filePath, SqlConnection connection, Encoding fileEncoding, int commandTimeout) { int rowsAffected = 0; using (FileStream fs = new 
FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.Read)) { // Simple syntax check (you can comment out these two lines below) new SqlStatementReader(new StreamReader(fs, fileEncoding)).LightSyntaxCheck(); fs.Seek(0L, SeekOrigin.Begin); // Read statements without GO SqlStatementReader rd = new SqlStatementReader(new StreamReader(fs, fileEncoding)); string stmt; while ((stmt = rd.ReadStatement()) != null) { using (SqlCommand cmd = connection.CreateCommand()) { cmd.CommandText = stmt; cmd.CommandTimeout = commandTimeout; int i = cmd.ExecuteNonQuery(); if (i > 0) rowsAffected += i; } } } return rowsAffected; } } A: I had the same problem in Java and I solved it with a bit of logic and regex. I believe the same logic can be applied. First I read the SQL file into memory. Then I apply the following logic. It's pretty much what has been said before; however, I believe that using a regex word boundary is safer than expecting a newline character. String pattern = "\\bGO\\b|\\bgo\\b"; String[] splitedSql = sql.split(pattern); for (String chunk : splitedSql) { getJdbcTemplate().update(chunk); } This basically splits the SQL string into an array of SQL strings. The regex detects full 'GO' words, either lower case or upper case. Then you execute the different queries sequentially.
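For a quick, standalone illustration of this split-on-GO idea, here is a sketch in Python, with an in-memory SQLite database standing in for the real SQL Server connection (the sample script and names are made up for the example):

```python
import re
import sqlite3

script = """CREATE TABLE t (id INTEGER);
GO
INSERT INTO t VALUES (1);
go
INSERT INTO t VALUES (2);"""

# Split on lines consisting only of GO (any case), the way the batch
# separator is treated by the client tools.  Like the simple regex answers
# above, this does NOT guard against GO appearing inside string literals or
# comments -- that is what the full stream-reader class earlier is for.
batches = [b.strip() for b in re.split(r"(?im)^\s*GO\s*$", script) if b.strip()]

conn = sqlite3.connect(":memory:")  # stand-in for the real server connection
for batch in batches:
    conn.execute(batch)  # each batch executes as its own statement

row_count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

The same split-then-execute loop is what the C#, Java, and PowerShell answers in this thread all do in their own way.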
A: I hit this same issue and eventually just solved it by a simple string replace, replacing the word GO with a semi-colon (;) All seems to be working fine while executing scripts with in-line comments, block comments, and GO commands public static bool ExecuteExternalScript(string filePath) { using (StreamReader file = new StreamReader(filePath)) using (SqlConnection conn = new SqlConnection(dbConnStr)) { StringBuilder sql = new StringBuilder(); string line; while ((line = file.ReadLine()) != null) { // replace GO with semi-colon if (line == "GO") sql.Append(";"); // remove inline comments else if (line.IndexOf("--") > -1) sql.AppendFormat(" {0} ", line.Split(new string[] { "--" }, StringSplitOptions.None)[0]); // just the line as it is else sql.AppendFormat(" {0} ", line); } conn.Open(); SqlCommand cmd = new SqlCommand(sql.ToString(), conn); cmd.ExecuteNonQuery(); } return true; } A: You can just use ; at the end of each statement as it worked for me. Really don't know if there are any drawbacks to it. A: Here is the most elegant solution I could find. Let's say your SQL script containing GO statements is in a file script.sql. You can do: $query = ((Get-Content -Raw "script.sql") -replace '([\s\n]*)GO([\s\n]+)','$1$2') This will remove all GO statements without removing other occurences of the string go anywhere else (according to my tests). You can then do: $SqlConnection = New-Object System.Data.SqlClient.SqlConnection $SqlCmd = New-Object System.Data.SqlClient.SqlCommand $SqlConnection.ConnectionString = <your_connection_string> $SqlConnection.Open() $SqlCmd.Connection = $SqlConnection $SqlCmd.CommandText = $query $SqlCmd.ExecuteNonQuery() which should normally work without issues. 
A: Too difficult :) Create array of strings str[] replacing GO with ",@" : string[] str ={ @" USE master; ",@" CREATE DATABASE " +con_str_initdir+ @"; ",@" -- Verify the database files and sizes --SELECT name, size, size*1.0/128 AS [Size in MBs] --SELECT name --FROM sys.master_files --WHERE name = N'" + con_str_initdir + @"'; --GO USE " + con_str_initdir + @"; ",@" SET ANSI_NULLS ON ",@" SET QUOTED_IDENTIFIER ON ",@" IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[Customers]') AND type in (N'U')) BEGIN CREATE TABLE [dbo].[Customers]( [CustomerID] [int] IDENTITY(1,1) NOT NULL, [CustomerName] [nvarchar](50) NULL, CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED ( [CustomerID] ASC )WITH (PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY] ) ON [PRIMARY] END ",@" SET ANSI_NULLS ON ",@" SET QUOTED_IDENTIFIER ON ",@" IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[GOODS]') AND type in (N'U')) BEGIN CREATE TABLE [dbo].[GOODS]( [GoodsID] [int] IDENTITY(1,1) NOT NULL, [GoodsName] [nvarchar](50) NOT NULL, [GoodsPrice] [float] NOT NULL, CONSTRAINT [PK_GOODS] PRIMARY KEY CLUSTERED ( [GoodsID] ASC )WITH (PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY] ) ON [PRIMARY] END ",@" SET ANSI_NULLS ON ",@" SET QUOTED_IDENTIFIER ON ",@" IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[Orders]') AND type in (N'U')) BEGIN CREATE TABLE [dbo].[Orders]( [OrderID] [int] IDENTITY(1,1) NOT NULL, [CustomerID] [int] NOT NULL, [Date] [smalldatetime] NOT NULL, CONSTRAINT [PK_Orders] PRIMARY KEY CLUSTERED ( [OrderID] ASC )WITH (PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY] ) ON [PRIMARY] END ",@" SET ANSI_NULLS ON ",@" SET QUOTED_IDENTIFIER ON ",@" IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[OrderDetails]') AND type in (N'U')) BEGIN CREATE TABLE [dbo].[OrderDetails]( [OrderID] [int] NOT NULL, [GoodsID] [int] NOT NULL, [Qty] [int] NOT NULL, [Price] [float] NOT NULL, 
CONSTRAINT [PK_OrderDetails] PRIMARY KEY CLUSTERED ( [OrderID] ASC, [GoodsID] ASC )WITH (PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY] ) ON [PRIMARY] END ",@" SET ANSI_NULLS ON ",@" SET QUOTED_IDENTIFIER ON ",@" IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[InsertCustomers]') AND type in (N'P', N'PC')) BEGIN EXEC dbo.sp_executesql @statement = N'-- ============================================= -- Author: <Author,,Name> -- Create date: <Create Date,,> -- Description: <Description,,> -- ============================================= create PROCEDURE [dbo].[InsertCustomers] @CustomerName nvarchar(50), @Identity int OUT AS INSERT INTO Customers (CustomerName) VALUES(@CustomerName) SET @Identity = SCOPE_IDENTITY() ' END ",@" IF NOT EXISTS (SELECT * FROM sys.foreign_keys WHERE object_id = OBJECT_ID(N'[dbo].[FK_Orders_Customers]') AND parent_object_id = OBJECT_ID(N'[dbo].[Orders]')) ALTER TABLE [dbo].[Orders] WITH CHECK ADD CONSTRAINT [FK_Orders_Customers] FOREIGN KEY([CustomerID]) REFERENCES [dbo].[Customers] ([CustomerID]) ON UPDATE CASCADE ",@" ALTER TABLE [dbo].[Orders] CHECK CONSTRAINT [FK_Orders_Customers] ",@" IF NOT EXISTS (SELECT * FROM sys.foreign_keys WHERE object_id = OBJECT_ID(N'[dbo].[FK_OrderDetails_GOODS]') AND parent_object_id = OBJECT_ID(N'[dbo].[OrderDetails]')) ALTER TABLE [dbo].[OrderDetails] WITH CHECK ADD CONSTRAINT [FK_OrderDetails_GOODS] FOREIGN KEY([GoodsID]) REFERENCES [dbo].[GOODS] ([GoodsID]) ON UPDATE CASCADE ",@" ALTER TABLE [dbo].[OrderDetails] CHECK CONSTRAINT [FK_OrderDetails_GOODS] ",@" IF NOT EXISTS (SELECT * FROM sys.foreign_keys WHERE object_id = OBJECT_ID(N'[dbo].[FK_OrderDetails_Orders]') AND parent_object_id = OBJECT_ID(N'[dbo].[OrderDetails]')) ALTER TABLE [dbo].[OrderDetails] WITH CHECK ADD CONSTRAINT [FK_OrderDetails_Orders] FOREIGN KEY([OrderID]) REFERENCES [dbo].[Orders] ([OrderID]) ON UPDATE CASCADE ON DELETE CASCADE ",@" ALTER TABLE [dbo].[OrderDetails] CHECK CONSTRAINT 
[FK_OrderDetails_Orders] "}; for (int i = 0; i < str.Length; i++) { myCommand.CommandText = str[i]; try { myCommand.ExecuteNonQuery(); } catch (SystemException ee) { MessageBox.Show("Error " + ee.ToString()); } } That's all, enjoy. A: For anyone still having this problem: you can use the official Microsoft SMO library https://learn.microsoft.com/en-us/sql/relational-databases/server-management-objects-smo/overview-smo?view=sql-server-2017 using (var connection = new SqlConnection(connectionString)) { var server = new Server(new ServerConnection(connection)); server.ConnectionContext.ExecuteNonQuery(sql); }
{ "language": "en", "url": "https://stackoverflow.com/questions/40814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "103" }
Q: Styling HTML helpers ASP.NET MVC If I have an HTML helper like so: Name:<br /> <%=Html.TextBox("txtName",20) %><br /> How do I apply a CSS class to it? Do I have to wrap it in a span? Or do I need to somehow utilize the HtmlAttributes property of the helper? A: I did some research and came across this article that seems to have a solution to your question. Ajax Control Toolkit with ASP.NET MVC# source: jimzimmerman ARTICLE LINK http://www.ajaxprojects.com/ajax/tutorialdetails.php?itemid=330 QUOTE So basically if you put the class name TextboxWatermark on any textbox input with the title you'd like to show as the watermark, like this: <input type="text" class="TextboxWatermark" name="username" id="username" title="Must be at least 6 chars" /> or <%= Html.TextBox("username", new { @class = "TextboxWatermark", @title = "Must be at least 6 chars" }) %> What is nice about the second option is that you get the added benefit of the View Engine filling out the value of the textbox if there is an item in ViewData or the ViewData.Model that has a var named 'username'. A: You can pass it into the TextBox call as a parameter. Name:<br/> <%= Html.TextBox("txtName", "20", new { @class = "hello" }) %> This line will create a text box with the value 20 and assign the class attribute the value hello. I put the @ character in front of class because class is a reserved keyword. If you want to add other attributes, just separate the key/value pairs with commas. A: Use the htmlAttributes parameter with an anonymous type, like this: <%=Html.TextBox("txtName","20", new { @class = "test"}) %> A: This is how to add a class and a style on the same element...
"x" being the model passed to the view with a property of TextBoxID @Html.TextBoxFor(x => x.TextBoxID, new { @class = "SearchBarSelect", style = "width: 20px; background-color: green;" }) A: the helper implementation public static class LabelExtensions { public static MvcHtmlString Alarm(this HtmlHelper helper, string target, string text) { return MvcHtmlString.Create(string.Format("<p class='alert' style='background-color: #b8f89d;border-radius: 5px;width: 100%;'><b>{0}</b><br /><i>{1}</i></p>", target, text)); } } the usage in the view section @Html.Alarm("Title", "please ensure your card no is invisible in your authorized information") the result A: Is it that much more work? A: There's no need to use a span, because it's not dynamic. CSS: .testClass { color: #1600d3; } View (Index): @Html.TextBox("expression", "Text to show.", new { @class = "testClass" }) If you need dynamic options, you can use, for example: CSS: .testClass { background: #ffffff; } Controller (Index for test): [HttpGet] public ActionResult Index() { ViewBag.vbColor = "#000000"; return View(); } View (Index): <div> <span> @Html.TextBox("expression", "Text to show.", new { @class = "testClass", @style="color: " + @ViewBag.vbColor }) </span> </div> Hope it helps.
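To make concrete what the htmlAttributes overloads above boil down to, here is a rough Python sketch of the tag-building step. This is an illustration only, not the actual MVC helper source -- the real helper also handles HTML encoding, model state, and ViewData lookups:

```python
def text_box(name, value, html_attributes=None):
    # Hypothetical stand-in for Html.TextBox: merge the caller's attribute
    # dictionary (the anonymous object in C#) with the helper's defaults.
    attrs = {"type": "text", "name": name, "id": name, "value": str(value)}
    attrs.update(html_attributes or {})
    rendered = " ".join('%s="%s"' % (k, v) for k, v in sorted(attrs.items()))
    return "<input %s />" % rendered

# The C# call Html.TextBox("txtName", "20", new { @class = "hello" })
# corresponds roughly to:
tag = text_box("txtName", 20, {"class": "hello"})
```

This is why the `@class` property on the anonymous object ends up as a plain `class="..."` attribute on the rendered input.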
{ "language": "en", "url": "https://stackoverflow.com/questions/40816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Cannot create an environment variable in the registry I have a custom installer action that updates the PATH environment, and creates an additional environment variable. Appending a directory to the existing path variable is working fine, but for some reason my attempts to create a new environment variable have been unsuccessful. The code I am using is: using (RegistryKey reg = Registry.LocalMachine.OpenSubKey(@"SYSTEM\CurrentControlSet\Control\Session Manager\Environment", true)) { reg.SetValue("MYVAR", "SomeVal", RegistryValueKind.ExpandString); } Edit: The OS is 32-bit XP, and as far as I can tell it is failing silently. A: What OS is this? Is it on a 64-bit system? What is the nature of the failure: silent or is an exception thrown? You could try running ProcessMonitor and seeing if it sees the attempt to set the value. A: Is there any reason that you have to do it through the registry? If not, you can use Environment.SetEnvironmentVariable() since .NET 2.0. It allows you to set on a machine, process or user basis. A: Why are you using a CustomAction for this? The Windows Installer supports updating environment variables natively. A: It turns out there was another problem that was preventing the code in my question from being called. However, I was using the Win32 assembly because the example code I was following was written before the Environment assembly became available. So Thanks Peter for pointing out the Environment API.
{ "language": "en", "url": "https://stackoverflow.com/questions/40840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do F# units of measure work? Has anyone had a chance to dig into how F# Units of Measure work? Is it just type-based chicanery, or are there CLR types hiding underneath that could (potentially) be used from other .net languages? Will it work for any numerical unit, or is it limited to floating point values (which is what all the examples use)? A: The best (and I think official) place to find out about this is on Andrew Kennedy's blog. Here are the (current) relevant posts:
* Units of Measure in F#: Part One, Introducing Units
* Units of Measure in F#: Part Two, Unit Conversions
* Units of Measure in F#: Part Three, Generic Units
* Units of Measure in F#: Part Four, Parameterized Types
As I said in the post that your answerer referred to, this is most definitely something that you CAN'T do in C# (though I wish you could). A: According to a response on the next related blog post, they are a purely static mechanism in the F# compiler, so there is no CLR representation of the units data. It's not entirely clear whether it currently works with non-float types, but from the perspective of the type system it is theoretically possible.
{ "language": "en", "url": "https://stackoverflow.com/questions/40845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: How to store passwords in Winforms application? I have some code like this in a WinForms app I was writing to query a user's mailbox storage quota. DirectoryEntry mbstore = new DirectoryEntry( @"LDAP://" + strhome, m_serviceaccount, m_pwd, AuthenticationTypes.Secure); No matter what approach I tried (like SecureString), I am easily able to see the password (m_pwd) either using Reflector or using the strings tab of Process Explorer for the executable. I know I could put this code on the server or tighten up the security using mechanisms like delegation and giving only the required privileges to the service account. Can somebody suggest a reasonably secure way to store the password in the local application without revealing the password to hackers? Hashing is not possible since I need to know the exact password (not just the hash for matching purposes). Encryption/decryption mechanisms are not working since they are machine dependent. A: I found this book by Keith Brown, The .NET Developer's Guide to Windows Security. It has some good samples covering all kinds of security scenarios. A free online version is also available. A: The sanctified method is to use CryptoAPI and the Data Protection APIs.
To encrypt, use something like this (C++): DATA_BLOB blobIn, blobOut; blobIn.pbData=(BYTE*)data; blobIn.cbData=wcslen(data)*sizeof(WCHAR); CryptProtectData(&blobIn, description, NULL, NULL, NULL, CRYPTPROTECT_LOCAL_MACHINE | CRYPTPROTECT_UI_FORBIDDEN, &blobOut); _encrypted=blobOut.pbData; _length=blobOut.cbData; Decryption is the opposite: DATA_BLOB blobIn, blobOut; blobIn.pbData=const_cast<BYTE*>(data); blobIn.cbData=length; CryptUnprotectData(&blobIn, NULL, NULL, NULL, NULL, CRYPTPROTECT_UI_FORBIDDEN, &blobOut); std::wstring _decrypted; _decrypted.assign((LPCWSTR)blobOut.pbData,(LPCWSTR)blobOut.pbData+blobOut.cbData/sizeof(WCHAR)); If you don't specify CRYPTPROTECT_LOCAL_MACHINE then the encrypted password can be securely stored in the registry or config file and only you can decrypt it. If you specify LOCAL_MACHINE, then anyone with access to the machine can get it. A: If you store it as a secure string and save the secure string to a file (possibly using Isolated Storage, the only time you will have a plain text password is when you decrypt it to create your mbstore. Unfortunately, the constructor does not take a SecureString or a Credential object. A: As mentioned, the Data Protection API is a good way to do this. Note that if you're using .NET 2.0 or greater, you don't need to use P/Invoke to invoke the DPAPI. The framework wraps the calls with the System.Security.Cryptography.ProtectedData class.
{ "language": "en", "url": "https://stackoverflow.com/questions/40853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: IDE for use on a Java-enabled smart phone? Is there an IDE that I can load on a Blackberry, E71, or an iPhone? A: Apple released the iPhone SDK for Xcode a while back; check out developer.apple.com. Nokia also released their own SDK; check out forum.nokia.com. But for pure Java MIDlet goodness, I would recommend NetBeans (netbeans.org); their mobile application editor is a gem, second to none. To answer your question, I don't think any phone is powerful enough to compile and test the code on itself, so no ... A: Not that I know of; typically you'll develop apps on a desktop machine (PC/Mac/whatever) and download/control the application on the phone. Also, I don't think Java is available on a standard (non-cracked) iPhone. A: There was a Palm-based C compiler. I had some trouble finding it, but it's called OnBoard-C. It didn't exactly have an IDE; it compiled notes. Considering there's a lack of embedded compilers, I'd be surprised to find full embedded IDEs. Oh... I recall there being a Scheme or Lisp too. This may be premature but, congrats, you just found a market niche.
{ "language": "en", "url": "https://stackoverflow.com/questions/40859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Business Application UI Design Basically I'm going to go a bit broad here and ask a few questions to get a bit of a picture of how people are handling UI these days. Lately I've found it pretty easy to do some fancy things with UI design, and with WPF specifically we're finding new ways to do layouts that are better looking and more functional for the user; but in contrast, one of the business-focused guys at our local .NET User Group wouldn't even think of using WPF until it had a datagrid that he could use to make Excel-like input forms.
* So basically, have you rethought the design of your business apps as you move to Web/WPF/Silverlight designs, because for us at least - in WinForms we kept things fairly functional and uniform - or are you trying to keep that "known" UI?
* Would a dedicated design guy (for larger teams), or a dev with more design chops, rank higher when looking at hiring these days? (Check out what a designer did for Scott Hanselman's BabySmash and Microsoft's Prism demo)
* Are there any design hints/tips/guidelines you use for your UI - especially for WPF?
* What sites would you recommend for design?
A: Here's a great screencast where Billy Hollis goes into many of these issues: http://www.dnrtv.com/default.aspx?showNum=115 A: I think WPF can greatly improve user experience. However, there are not many business-oriented controls out there, which means you need to do a lot yourself. As for designers, I think it's really hard to find a WPF designer nowadays; it would still be a dedicated programmer rather than a design-only guy. I hope that this situation will change in the near future. I think it's worth at least starting to experiment with WPF to be able to compete with upcoming solutions.
Just the mere act of using WPF isn't going to make great UIs appear out of nowhere. A great carpenter may use the best wood working tools, but that doesn't mean that if you picked up his tools you'd all of a sudden be popping out fine furniture. Using WPF over HTML/Flash/WinForms/etc just increases your potential . If that's potential for ugliness or potential for beauty is up to you. A: The whole concept of re-thinking a UI of an existing application is dependent on the target audience. For a boring business application, like accounting or budgeting, it may even be counter-productive. For one, users of those kinds of apps may have used a similar looking and feeling UI for years and years, and second, looking too "cute" and colorful can even bring a perception of toy-ishness (is that a word?) with it. We have done several new projects with the latest & greatest UI gadgets, and for the most part for new applications it seems to be a good chance to get some feedback from a live audience. Then it gets easier to translate that feedback into existing applications. We also have some apps which are still actively developed (and used obviously), where the UI looks almost like in Windows 3.1. They're awful, gray, clunky, and our only real designer is always trying to get a permission to bring it to the current centrury - but the biggest customer actively refuses this. They say it's just fine, people know how to use it, and it works even in their oldest computers. A: I recommend that you read Steve Krug's Don't Make Me Think first. The book has a great checklist of things that you have to take into consideration when designing your UIs. While it's focused on web usability, a lot of the lessons therein are valuable even to desktop application designers. 
That being said, whether you use Windows Forms or WPF or Flash or whatever new and shiny thing that comes around, it is of utmost importance to hire either a) a real designer, or b) a development guy with a lot of UI design experience, either of whom can provide a serious URL for their design portfolio. It will help a lot not only in improving the design of your application but also in unburdening your developers from thinking about UI design, and allow them to focus on the back-end code. As for "business focused" guys -- it would be really great if you would get the opinion of actual customers and stakeholders, and have them do some usability testing for your application. It's their opinion that would matter most. I think it would not be difficult to get a good designer up to speed on Microsoft Expression Blend to whip up some good XAML designs that your team could use to come up with a really good product. A: @David H Aust That's part of the reason for asking the question - with these newer tools like WPF that lend themselves to newer, more intricate, and at the same time simpler-for-the-user interfaces, we might need to adapt to new ways of doing things. And trying to find out who else is adapting/interested and what they are doing, and where they get some inspiration, knowledge or help :) IE: This is me being proactive about change in possibly the slackest manner ever, short of actively googling :) ^ That was a joke, to make it clear, I'm actually pretty active about learning new stuff, I'm just finding some of the crowdsourcing stackoverflow vs googling pretty interesting :) A: Microsoft is building a DataGrid for WPF. A CTP can be found here. A: @Lars Truijens - Thanks, but I think for 99% of cases that's a horrible idea, and sure, there are uses - but I've found that with WPF there's typically a much better way to do it. Plus you can use textboxes, and use an Enter-as-Tab override to move through them easily and swiftly.
{ "language": "en", "url": "https://stackoverflow.com/questions/40863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Searching for phone numbers in mysql I have a table which is full of arbitrarily formatted phone numbers, like this:
027 123 5644
021 393-5593
(07) 123 456
042123456
I need to search for a phone number in a similarly arbitrary format (e.g. 07123456 should find the entry (07) 123 456). The way I'd do this in a normal programming language is to strip all the non-digit characters out of the 'needle', then go through each number in the haystack, strip all non-digit characters out of it, then compare against the needle, e.g. (in Ruby):
digits_only = lambda{ |n| n.gsub /[^\d]/, '' }
needle = digits_only[input_phone_number]
haystack.map(&digits_only).include?(needle)
The catch is, I need to do this in MySQL. It has a host of string functions, none of which really seem to do what I want. Currently I can think of 2 'solutions':
* Hack together a franken-query of CONCAT and SUBSTR
* Insert a % between every character of the needle (so it's like this: %0%7%1%2%3%4%5%6%)
However, neither of these seems like a particularly elegant solution. Hopefully someone can help or I might be forced to use the %%%%%% solution. Update: This is operating over a relatively fixed set of data, with maybe a few hundred rows. I just didn't want to do something ridiculously bad that future programmers would cry over. If the dataset grows I'll take the 'phoneStripped' approach. Thanks for all the feedback! Could you use a "replace" function to strip out any instances of "(", "-" and " "? I'm not concerned about the result being numeric. The main characters I need to consider are +, -, (, ) and space. So would that solution look like this?
SELECT * FROM people
WHERE REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(phonenumber, '(', ''), ')', ''), '-', ''), ' ', ''), '+', '') LIKE '123456'
Wouldn't that be terribly slow? A: I know this is ancient history, but I found it while looking for a similar solution.
A simple REGEXP may work: select * from phone_table where phone1 REGEXP "07[^0-9]*123[^0-9]*456" This would match the phonenumber column with or without any separating characters. A: As John Dyer said, you should consider fixing the data in the DB and storing only numbers. However, if you are facing the same situation as mine (I cannot run an update query), the workaround I found was combining 2 queries. The "inside" query will retrieve all the phone numbers and format them, removing the non-numeric characters. SELECT REGEXP_REPLACE(column_name, '[^0-9]', '') phone_formatted FROM table_name The result of it will be all phone numbers without any special character. After that the "outside" query just needs to get the entry you are looking for. The 2 queries will be: SELECT phone_formatted FROM ( SELECT REGEXP_REPLACE(column_name, '[^0-9]', '') phone_formatted FROM table_name ) AS result WHERE phone_formatted = 9999999999 Important: the AS result is not used, but it should be there to avoid errors. A: An out-of-the-box idea, but could you use a "replace" function to strip out any instances of "(", "-" and " ", and then use an "isnumeric" function to test whether the resulting string is a number? Then you could do the same to the phone number string you're searching for and compare them as integers. Of course, this won't work for numbers like 1800-MATT-ROCKS. :) A: This is a problem with MySQL - the regex function can match, but it can't replace. See this post for a possible solution. A: Is it possible to run a query to reformat the data to match a desired format and then just run a simple query? That way, even if the initial reformatting is slow, it doesn't really matter. A: My solution would be something along the lines of what John Dyer said. I'd add a second column (e.g. phoneStripped) that gets stripped on insert and update. Index this column and search on it (after stripping your search term, of course).
You could also add a trigger to automatically update the column, although I've not worked with triggers. But like you said, it's really difficult to write the MySQL code to strip the strings, so it's probably easier to just do it in your client code. (I know this is late, but I just started looking around here :) A: See http://www.mfs-erp.org/community/blog/find-phone-number-in-database-format-independent It is not really an issue that the regular expression would become visually appalling, since only mysql "sees" it. Note that instead of '+' (cf. the post with [\D] from the OP) you should use '*' in the regular expression. Some users are concerned about performance (non-indexed search), but in a table with 100000 customers, this query, when issued from a user interface, returns immediately, without noticeable delay. A: I suggest using PHP functions, not MySQL patterns, so you will have some code like this: $tmp_phone = ''; for ($i=0; $i < strlen($phone); $i++) if (is_numeric($phone[$i])) $tmp_phone .= '%'.$phone[$i]; $tmp_phone .= '%'; $search_condition .= " and phone LIKE '" . $tmp_phone . "' "; A: This looks like a problem from the start. Any kind of searching you do will require a table scan and we all know that's bad. How about adding a column with a hash of the current phone numbers after stripping out all formatting characters? Then you can at least index the hash values and avoid a full blown table scan. Or is the amount of data small and not expected to grow much? Then maybe just sucking all the numbers into the client and running a search there. A: Here is a working solution for PHP users. This uses a loop in PHP to build the regular expression, then searches the database in MySQL with the RLIKE operator. $phone = '(456) 584-5874'; // can be any format $phone = preg_replace('/[^0-9]/', '', $phone); // strip non-numeric characters $len = strlen($phone); // get length of phone number for ($i = 0; $i < $len - 1; $i++) { $regex .= $phone[$i] .
"[^[:digit:]]*"; } $regex .= $phone[$len - 1]; This creates a regular expression that looks like this: 4[^[:digit:]]*5[^[:digit:]]*6[^[:digit:]]*5[^[:digit:]]*8[^[:digit:]]*4[^[:digit:]]*5[^[:digit:]]*8[^[:digit:]]*7[^[:digit:]]*4 Now formulate your MySQL something like this: $sql = "SELECT Client FROM tb_clients WHERE Phone RLIKE '$regex'"; NOTE: I tried several of the other posted answers but found performance issues. For example, on our large database, it took 16 seconds to run the IsNumeric example. But this solution ran instantly. And this solution is compatible with older MySQL versions. A: MySQL can search based on regular expressions. Sure, but given the arbitrary formatting, if my haystack contained "(027) 123 456" (bear in mind the position of spaces can change; it could just as easily be 027 12 3456) and I wanted to match it with 027123456, would my regex therefore need to be this? "^[\D]+0[\D]+2[\D]+7[\D]+1[\D]+2[\D]+3[\D]+4[\D]+5[\D]+6$" (actually it'd be worse, as the mysql manual doesn't seem to indicate it supports \D) If that is the case, isn't it more or less the same as my %%%%% idea? A: Just an idea, but couldn't you use Regex to quickly strip out the characters and then compare against that like @Matt Hamilton suggested? Maybe even set up a view (not sure of mysql on views) that would hold all phone numbers stripped by regex to a plain phone number? A: Woe is me. I ended up doing this: mre = mobile_number && ('%' + mobile_number.gsub(/\D/, '').scan(/./m).join('%')) find(:first, :conditions => ['trim(mobile_phone) like ?', mre]) A: If this is something that is going to happen on a regular basis, perhaps modifying the data to be all one format and then setting up the search form to strip out any non-alphanumerics (if you allow numbers like 310-BELL) would be a good idea. Having data in an easily searched format is half the battle.
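To see the strip-and-compare idea from the question end to end, here is a rough sketch in Python (used only for illustration, since the thread mixes Ruby, PHP, and SQL; the helper names are made up). It normalizes both needle and haystack, and also builds the %-separated LIKE pattern proposed in the question:

```python
import re

def digits_only(s):
    # Strip every non-digit character, mirroring the Ruby lambda in the question
    return re.sub(r"\D", "", s)

def like_needle(s):
    # Build the %-separated LIKE pattern the question proposes (%0%7%1%...%)
    return "%" + "%".join(digits_only(s)) + "%"

haystack = ["027 123 5644", "021 393-5593", "(07) 123 456", "042123456"]
needle = "07123456"
matches = [n for n in haystack if digits_only(n) == digits_only(needle)]
```

The same digits_only normalization is what the suggested phoneStripped column would store, which is why that approach can be indexed while the LIKE pattern cannot.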
A: A possible solution can be found at http://udf-regexp.php-baustelle.de/trac/ An additional package needs to be installed; then you can play with REGEXP_REPLACE. A: Create a user defined function that dynamically creates the regex. DELIMITER // CREATE FUNCTION udfn_GetPhoneRegex ( var_Input VARCHAR(25) ) RETURNS VARCHAR(200) BEGIN DECLARE iterator INT DEFAULT 1; DECLARE phoneregex VARCHAR(200) DEFAULT ''; DECLARE output VARCHAR(25) DEFAULT ''; WHILE iterator < (LENGTH(var_Input) + 1) DO IF SUBSTRING(var_Input, iterator, 1) IN ( '0', '1', '2', '3', '4', '5', '6', '7', '8', '9' ) THEN SET output = CONCAT(output, SUBSTRING(var_Input, iterator, 1)); END IF; SET iterator = iterator + 1; END WHILE; SET output = RIGHT(output,10); SET iterator = 1; WHILE iterator < (LENGTH(output) + 1) DO SET phoneregex = CONCAT(phoneregex,'[^0-9]*',SUBSTRING(output, iterator, 1)); SET iterator = iterator + 1; END WHILE; SET phoneregex = CONCAT(phoneregex,'$'); RETURN phoneregex; END// DELIMITER ; Call that user defined function in your stored procedure. DECLARE var_PhoneNumberRegex VARCHAR(200); SET var_PhoneNumberRegex = udfn_GetPhoneRegex('+ 123 555 7890'); SELECT * FROM Customer WHERE phonenumber REGEXP var_PhoneNumberRegex; A: I would use Google's libPhoneNumber to format a number to E164 format. I would add a second column called "e164_number" to store the e164 formatted number and add an index on it. A: In my case, I needed to identify Swiss (CH) mobile phone numbers in the phone column and move them into the mobile column.
As all mobile phone numbers start with 07x or +417x, here is the regex to use: /^(\+[0-9][0-9]\s*|0|)7.*/mgix It finds all numbers like the following: * *+41 79 123 456 78 *+417612345678 *076 123 456 78 *07812345678 *7712345678 and ignores all others like these: * *+41 47 123 456 78 *+413212345678 *021 123 456 78 *02212345678 *3412345678 In MySQL it gives the following code: UPDATE `contact` SET `mobile` = `phone`, `phone` = '' WHERE `phone` REGEXP '^(\\+[\D+][0-9]\\s*|0|)(7.*)$' You'll need to clean your number of special chars like -/.() before. https://regex101.com/r/AiWFX8/1
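The prefix test in this answer can be sanity-checked outside MySQL. A quick sketch using Python's re module (which, unlike older MySQL, supports \s) against the example numbers listed above:

```python
import re

# The pattern from the answer above, translated for Python's re module
swiss_mobile = re.compile(r"^(\+[0-9][0-9]\s*|0|)7.*")

mobiles = ["+41 79 123 456 78", "+417612345678", "076 123 456 78",
           "07812345678", "7712345678"]
landlines = ["+41 47 123 456 78", "+413212345678", "021 123 456 78",
             "02212345678", "3412345678"]
```

Both lists behave as the answer describes: every mobile example matches and every non-mobile example is rejected.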
{ "language": "en", "url": "https://stackoverflow.com/questions/40873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: vb.net object persisted in database How can I go about storing a vb.net user defined object in a sql database? I am not trying to replicate the properties with columns. I mean something along the lines of converting or encoding my object to a byte array and then storing that in a field in the db. Like when you store an instance of an object in session, but I need the info to persist past the current session. @Orion Edwards It's not a matter of stances. It's because one day, you will change your code. Then you will try to de-serialize the old object, and YOUR PROGRAM WILL CRASH. My Program will not "CRASH", it will throw an exception. Lucky for me .net has a whole set of classes dedicated for such an occasion. At which time I will refresh my stale data and put it back in the db. That is the point of this one field (or stance, as the case may be). A: You can use serialization - it allows you to store your object at least in 3 forms: binary (suitable for BLOBs), XML (take advantage of MSSQL's XML data type) or just plain text (store in varchar or text column) A: Before you head down this road towards your own eventual insanity, you should take a look at this (or one day repeat it): http://thedailywtf.com/Articles/The-Mythical-Business-Layer.aspx Persisting objects in a database is not a good idea. It kills all the good things that a database is designed to do. A: You could use the BinaryFormatter class to serialize your object to a binary format, then save the resulting string in your database. A: The XmlSerializer or the DataContractSerializer in .net 3.x will do the job for you. A: @aku, lomaxx and bdukes - your solutions are what I was looking for. @1800 INFORMATION - while I appreciate your stance on the matter, this is a special case of data that I get from a webservice that gets refreshed only about once a month. I don't need the data persisted in db form because that's what the webservice is for. Below is the code I finally got to work.
Serialize ' res is my object to serialize Dim xml_serializer As System.Xml.Serialization.XmlSerializer Dim string_writer As New System.IO.StringWriter() xml_serializer = New System.Xml.Serialization.XmlSerializer(res.GetType) xml_serializer.Serialize(string_writer, res) Deserialize ' string_writer and xml_serializer from above Dim serialization As String = string_writer.ToString Dim string_reader As System.IO.StringReader string_reader = New System.IO.StringReader(serialization) Dim res2 As testsedie.EligibilityResponse res2 = xml_serializer.Deserialize(string_reader) A: What you want to do is called "Serializing" your object, and .Net has a few different ways to go about it. One is the XmlSerializer class in the System.Xml.Serialization namespace. Another is in the System.Runtime.Serialization namespace. This has support for a SOAP formatter, a binary formatter, and a base class you can inherit from that all implement a common interface. For what you are talking about, the BinaryFormatter suggested earlier will probably have the best performance. A: I'm backing @1800 Information on this one. Serializing objects for long-term storage is never a good idea. "While I appreciate your stance on the matter, this is a special case of data that I get from a webservice that gets refreshed only about once a month." It's not a matter of stances. It's because one day, you will change your code. Then you will try to de-serialize the old object, and YOUR PROGRAM WILL CRASH. A: If it crashes (or throws an exception) all you are left with is a bunch of binary data to try and sift through to recreate your objects. If you are only persisting binary, why not just save straight to disk? You also might want to look at using something like xml as, as has been mentioned, if you alter your object definition you may not be able to unserialise it without some hard work.
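The "it will throw an exception, and I will refresh my stale data" plan can be illustrated in miniature. This Python/JSON sketch stands in for the VB.NET/XmlSerializer code (class and field names are invented): an object serialized under an old class definition fails to round-trip into the new definition, and the failure is caught so the caller can re-fetch from the webservice:

```python
import json

class EligibilityV1:
    def __init__(self, status):
        self.status = status

def serialize(obj):
    # Persist the object's state as text, analogous to XmlSerializer + StringWriter
    return json.dumps(obj.__dict__)

class EligibilityV2:
    # A later version of the class: a new required field was added
    def __init__(self, status, region):
        self.status = status
        self.region = region

def deserialize_v2(text):
    data = json.loads(text)
    try:
        return EligibilityV2(**data)
    except TypeError:
        # Stale data from an older version: signal the caller to refresh it
        return None

stored = serialize(EligibilityV1("eligible"))
restored = deserialize_v2(stored)
```

The point of the thread stands either way: the failure mode is an exception, not a crash, but you still need a recovery path for old serialized blobs.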
{ "language": "en", "url": "https://stackoverflow.com/questions/40884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Does anyone know of a web based IDE? Does anyone know of a web based IDE (Like VS, Eclipse, IDEA)? Besides ECCO? A: Bespin by Mozilla seems to be the right answer. A: ShiftEdit Web Based IDE A: Heroku - Ruby on Rails (RoR) AppJet - Javascript CodeIDE - Multi-Language A: Try PHPAnywhere.net
{ "language": "en", "url": "https://stackoverflow.com/questions/40907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How to check if page is postback within reserved function pageLoad on ASP.NET AJAX I'm looking for a way to check within pageLoad() if this method is raised during the load event because of a postback/async postback or because of being loaded and accessed for the first time. This is similar to the Page.IsPostback property within the code behind page. TIA, Ricky A: One way you could do that is to wire up an Application.Load handler in Application.Init, then have that handler unbind itself after running: Sys.Application.add_init(AppInit); function AppInit() { Sys.Application.add_load(RunOnce); } function RunOnce() { // This will only happen once per GET request to the page. Sys.Application.remove_load(RunOnce); } That will execute after Application.Init. It should be the last thing before pageLoad is called. A: @Darren: Thanks for the answer. I had tried to create pageLoad with the event argument ApplicationLoadEventArgs as a parameter (see below). However, according to this: The load event is raised for all postbacks to the server, which includes asynchronous postbacks. As you have indicated, the isPartialLoad property does not cover all postback scenarios. It'd be nice if the event argument also contained an isPostback property. function pageLoad(sender, arg) { if (!arg.get_isPartialLoad()) { //code to be executed only on the first load } } @mmattax: I'm looking for a property that can be called from client-side (javascript). A: You could have a hidden input that you set to a known value on the server side if it's a postback/callback - and your javascript could check that value. That said, I really hope that there's a client-only solution for this. Edit: @mmattax - I believe he's looking for a client-side solution - the JavaScript equivalent of that.
To know if you are in a postback, you'll have to handle that in server-side code and emit that to the client. A: You can still use Page.IsPostback during an async call. A: Application.Init is probably a more appropriate event to use, if you only want the code to execute on the first load. A: @Dave Ward: This normally would work. However, the code attaches an event on a behavior object. Because the creation of the behavior object happens during Application.Init, attaching to that event will lead to unpredictable behavior. It would be nice if there were a PostInit event. A: @Dave Ward: The RunOnce method works perfectly. This solves my problem without the workaround of checking first whether a handler already exists before attaching to an event. I'll mark your answer as the accepted answer. Thanks again. A: Here's our Ajax equivalent to isPostback which we've been using for a while. public static bool isAjaxRequest(System.Web.HttpRequest request) { //Checks to see if the request is an Ajax request if (request.ServerVariables["HTTP_X_MICROSOFTAJAX"] != null || request.Form["__CALLBACKID"] != null) return true; else return false; }
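The self-unbinding handler in the accepted answer is a general pattern, not something specific to ASP.NET AJAX. A minimal Python model of it (the Application class here is a stand-in for Sys.Application, not a real API):

```python
class Application:
    # A tiny stand-in for Sys.Application's load event (illustrative only)
    def __init__(self):
        self._load_handlers = []

    def add_load(self, handler):
        self._load_handlers.append(handler)

    def remove_load(self, handler):
        self._load_handlers.remove(handler)

    def fire_load(self):
        for handler in list(self._load_handlers):
            handler(self)

calls = []

def run_once(app):
    calls.append("first load only")
    app.remove_load(run_once)  # unbind after the first load, like RunOnce above

app = Application()
app.add_load(run_once)
app.fire_load()  # initial GET
app.fire_load()  # simulated async postback; the handler no longer fires
```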
{ "language": "en", "url": "https://stackoverflow.com/questions/40912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: How do I loop through result objects in Flex? I am having problems manually looping through xml data that is received via an HTTPService call; the xml looks something like this: <DataTable> <Row> <text>foo</text> </Row> <Row> <text>bar</text> </Row> </DataTable> When the webservice result event is fired I do something like this: for(var i:int=0;i<event.result.DataTable.Row.length;i++) { if(event.result.DataTable.Row[i].text == "foo") mx.controls.Alert.show('foo found!'); } This code works when there is more than one "Row" node returned. However, it seems that if there is only one "Row" node then the event.result.DataTable.Row object is not an array and the code subsequently breaks. What is the proper way to loop through the HTTPService result object? Do I need to convert it to some type of XMLList collection or an ArrayCollection? I have tried setting the resultFormat to e4x and that has yet to fix the problem... Thanks. A: The problem lies in this statement: event.result.DataTable.Row.length length is not a property of XMLList, but a method: event.result.DataTable.Row.length() it's confusing, but that's the way it is. Addition: actually, the safest thing to do is to always use a for each loop when iterating over XMLLists, that way you never make the mistake, it's less code, and easier to read: for each ( var node : XML in event.result.DataTable.Row ) A: Row isn't an array unless there are multiple Row elements. It is annoying. You have to do something like this, but I haven't written AS3 in a while so I forget if there's an exists function.
if (exists(event.result.DataTable) && exists(event.result.DataTable.Row)){ if (exists(event.result.DataTable.Row.length)) { for(var i:int=0;i<event.result.DataTable.Row.length;i++) { if (exists(event.result.DataTable.Row[i].text) && "foo" == event.result.DataTable.Row[i].text) mx.controls.Alert.show('foo found!'); } } if (exists(event.result.DataTable.Row.text) && "foo" == event.result.DataTable.Row.text) mx.controls.Alert.show('foo found!'); } A: I would store it in an Xml object and then use its methods to search for the node value you need. var returnedXml:Xml = new Xml(event.result.toString());
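The single-node-versus-list pitfall is specific to E4X, but the fix — iterate a collection instead of indexing blindly — generalizes. A Python sketch using the question's XML shows the uniform-iteration behavior the "for each" advice relies on:

```python
import xml.etree.ElementTree as ET

def rows_with_text(xml_string, wanted):
    root = ET.fromstring(xml_string)
    # findall always returns a list, even when there is a single <Row>,
    # which is exactly the uniformity the "for each" advice above relies on
    return [row for row in root.findall("Row") if row.findtext("text") == wanted]

many = "<DataTable><Row><text>foo</text></Row><Row><text>bar</text></Row></DataTable>"
one = "<DataTable><Row><text>foo</text></Row></DataTable>"
```

The same loop handles both documents; there is no special case for the single-Row result.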
{ "language": "en", "url": "https://stackoverflow.com/questions/40913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Saving Perl Windows Environment Keys UPCASES them I have a framework written in Perl that sets a bunch of environment variables to support interprocess (typically it is sub process) communication. We keep sets of key/value pairs in XML-ish files. We tried to make the key names camel-case somethingLikeThis. This all works well. Recently we have had occasion to pass control (chain) processes from Windows to UNIX. When we spit out the %ENV hash to a file from Windows the somethingLikeThis key becomes SOMETHINGLIKETHIS. When the Unix process picks up the file and reloads the environment and looks up the value of $ENV{somethingLikeThis} it does not exist since UNIX is case sensitive (from the Windows side the same code works fine). We have since gone back and changed all the keys to UPPERCASE and solved the problem, but that was tedious and caused pain to the users. Is there a way to make Perl on Windows preserve the character case of the keys of the environment hash? A: I believe that you'll find the Windows environment variables are actually case insensitive, thus the keys are uppercase in order to avoid confusion. This way Windows scripts which don't have any concept of case sensitivity can use the same variables as everything else. A: As far as I remember, using ALL_CAPS for environment variables is the recommended practice in both Windows and *NIX worlds. My guess is Perl is just using some kind of legacy API to access the environment, and thus only retrieves the upper-case-only name for the variable. In any case, you should never rely on something like that, even more so if you are asking your users to set up the variables, just imagine how much aggravation and confusion a simple misspelt variable would produce! You have to remember that some OSes that will remain nameless still have not learned how to do case sensitive files...
On my Windows system, this script worked just fine. my %env = map {/(.*?)=(.*)/;} `set`; print join(' ', sort keys %env); In the camel book, the advice in Chapter 25: Portable Perl, the System Interaction section is "Don't depend on a specific environment variable existing in %ENV, and don't assume that anything in %ENV will be case sensitive or case preserving. Don't assume Unix inheritance semantics for environment variables; on some systems, they may be visible to all other processes." A: Jack M.: Agreed, it is not a problem on Windows. If I create an environment variable Foo I can reference it in Perl as $ENV{FOO} or $ENV{fOO} or $ENV{foo}. The problem is: I create it as Foo and dump the entire %ENV to a file and then read in the file from *NX to recreate the Environment hash and use the same script to reference $ENV{Foo}, that hash value does not exist (the $ENV{FOO} does exist). We had adopted the all UPPERCASE workaround that davidg suggested. I was just wondering if there was ANY way to "preserve case" when writing out the keys to the %ENV hash from Perl on Windows. A: To the best of my knowledge, there is not. It seems that you may be better off using another hash instead of %ENV. If you are calling many outside modules and want to track the same variables across them, a Factory pattern may work so that you're not breaking DRY, and are able to use a case-sensitive hash across multiple modules. The only trick would then be to keep these variables updated across all objects from the Factory, but I'm sure you can work that out.
{ "language": "en", "url": "https://stackoverflow.com/questions/40923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: VB6 NegotiateMenus I have a vb6 form that I've put an ocx control on. Setting NegotiateMenus on the form displays the ocx's menu (which is what I want). I then add my own control to the form. When that control has focus, the menu from the ocx disappears. How can I always keep the menu from the ocx displayed, regardless of who has focus? A: Dan, I remember trying to do something similar many years ago, and could not achieve it. What I ended up doing was adding an empty top level menu with the same caption as the menu on the OCX control, and having it always be disabled. Then, when the OCX got focus I would hide my disabled menu item, making it look as if clicking on the OCX had enabled the menu item. It saves all the ugly jumping around as menus appear and disappear (obviously, once the OCX lost focus I would show the disabled menu again). If you still want the menu item enabled, you would have to replicate it exactly in your form, and hide your version of it when the OCX is active (you would also have to wire all your own events to replicate the functionality available on the OCX). There is no easier way of doing this that I know of. Apologies for not being more helpful.
{ "language": "en", "url": "https://stackoverflow.com/questions/40935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to Automatically Start a Download in PHP? What code do you need to add in PHP to automatically have the browser download a file to the local machine when a link is visited? I am specifically thinking of functionality similar to that of download sites that prompt the user to save a file to disk once you click on the name of the software? A: Send the following headers before outputting the file: header("Content-Disposition: attachment; filename=\"" . basename($File) . "\""); header("Content-Type: application/octet-stream"); header("Content-Length: " . filesize($File)); header("Connection: close"); @grom: Interesting about the 'application/octet-stream' MIME type. I wasn't aware of that, have always just used 'application/force-download' :) A: Here is an example of sending back a pdf. header('Content-type: application/pdf'); header('Content-Disposition: attachment; filename="' . basename($filename) . '"'); header('Content-Transfer-Encoding: binary'); readfile($filename); @Swish I didn't find application/force-download content type to do anything different (tested in IE and Firefox). Is there a reason for not sending back the actual MIME type? Also in the PHP manual Hayley Watson posted: If you wish to force a file to be downloaded and saved, instead of being rendered, remember that there is no such MIME type as "application/force-download". The correct type to use in this situation is "application/octet-stream", and using anything else is merely relying on the fact that clients are supposed to ignore unrecognised MIME types and use "application/octet-stream" instead (reference: Sections 4.1.4 and 4.5.1 of RFC 2046). Also, according to IANA, there is no registered application/force-download type. A: None of the above worked for me! Working in 2021 for WordPress and PHP: <?php $file = ABSPATH .
'pdf.pdf'; // Where ABSPATH is the absolute server path, not url //echo $file; //Be sure you are echoing the absolute path and file name $filename = 'Custom file name for the.pdf'; /* Note: Always use .pdf at the end. */ header('Content-type: application/pdf'); header('Content-Disposition: inline; filename="' . $filename . '"'); header('Content-Transfer-Encoding: binary'); header('Content-Length: ' . filesize($file)); header('Accept-Ranges: bytes'); @readfile($file); Thanks to: https://qastack.mx/programming/4679756/show-a-pdf-files-in-users-browser-via-php-perl A: A clean example. <?php header('Content-Type: application/download'); header('Content-Disposition: attachment; filename="example.txt"'); header("Content-Length: " . filesize("example.txt")); $fp = fopen("example.txt", "r"); fpassthru($fp); fclose($fp); ?> A: My code works for txt, doc, docx, pdf, ppt, pptx, jpg, png, and zip extensions, and I think it's better to use the actual MIME types explicitly. $file_name = "a.txt"; // extracting the extension: $ext = substr($file_name, strpos($file_name,'.')+1); header('Content-disposition: attachment; filename='.$file_name); if(strtolower($ext) == "txt") { header('Content-type: text/plain'); // works for txt only } else { header('Content-type: application/'.$ext); // works for all extensions except txt } readfile($decrypted_file_path);
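Stripped of PHP specifics, the recipe in this thread reduces to three headers. A hedged Python sketch (the function name is invented; a real PHP handler would just emit the header() calls shown above):

```python
def download_headers(filename, size):
    # Build the force-download headers discussed above;
    # application/octet-stream is the registered "save this" type
    return {
        "Content-Disposition": 'attachment; filename="%s"' % filename,
        "Content-Type": "application/octet-stream",
        "Content-Length": str(size),
    }

headers = download_headers("example.txt", 1024)
```

Whatever the language, it is the Content-Disposition: attachment header that triggers the save dialog; the octet-stream type just stops the browser from trying to render the body.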
{ "language": "en", "url": "https://stackoverflow.com/questions/40943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "55" }
Q: Keeping development databases in multiple environments in sync I'm early in development on a web application built in VS2008. I have both a desktop PC (where most of the work gets done) and a laptop (for occasional portability) on which I use AnkhSVN to keep the project code synced. What's the best way to keep my development database (SQL Server Express) synced up as well? I have a VS database project in SVN containing create scripts which I re-generate when the schema changes. The original idea was to recreate the DB whenever something changed, but it's quickly becoming a pain. Also, I'd lose all the sample rows I entered to make sure data is being displayed properly. I'm considering putting the .MDF and .LDF files under source control, but I doubt SQL Server Express will handle it gracefully if I do an SVN Update and the files get yanked out from under it, replaced with newer copies. Sticking a couple big binary files into source control doesn't seem like an elegant solution either, even if it is just a throwaway development database. Any suggestions? A: There are obviously a number of ways to approach this, so I am going to list a number of links that should provide a better foundation to build on. These are the links that I've referenced in the past when trying to get others on the bandwagon. * *Database Projects in Visual Studio .NET *Data Schema - How Changes are to be Implemented *Is Your Database Under Version Control? *Get Your Database Under Version Control *Also look for MSDN Webcast: Visual Studio 2005 Team Edition for Database Professionals (Part 4 of 4): Schema Source and Version Control However, with all of that said, if you don't think that you are committed enough to implement some type of version control (either manual or semi-automated), then I HIGHLY recommend you check out the following: * *Red Gate SQL Compare *Red Gate SQL Data Compare Holy cow! Talk about making life easy! 
I had a project get away from me and had multiple people making schema changes and had to keep multiple environments in sync. It was trivial to point the Red Gate products at two databases and see the differences and then sync them up. A: You can store a backup (.bak file) of your database rather than the .MDF & .LDF files. You can restore your db easily using the following script: use master go if exists (select * from master.dbo.sysdatabases where name = 'your_db') begin alter database your_db set SINGLE_USER with rollback IMMEDIATE drop database your_db end restore database your_db from disk = 'path\to\your\bak\file' with move 'Name of dat file' to 'path\to\mdf\file', move 'Name of log file' to 'path\to\ldf\file' go You can put the above-mentioned script in a text file restore.sql and call it from a batch file using the following command: osql -E -i restore.sql That way you can create a script file to automate the whole process: * *Get the latest db backup from the SVN repository or any suitable storage *Restore the current db using the bak file A: In addition to your database CREATE script, why don't you maintain a default data or sample data script as well? This is an approach that we've taken for incremental versions of an application we have been maintaining for more than 2 years now, and it works very well. Having a default data script also allows your QA testers to recreate bugs using the data that you also have. You might also want to take a look at a question I posted some time ago: Best tool for auto-generating SQL change scripts A: We use a combo of taking backups from higher environments down, as well as using ApexSql to handle initial setup of the schema. Recently we've been using Subsonic migrations, as a coded, source controlled, run-through-CI way to get change scripts in; there is also the "tarantino" project developed by headspring out of texas. Most of these approaches, especially the latter, are safe to use on top of most test data.
I particularly like the last 2 automated approaches because I can make a change, and the next time someone gets latest, they just run the "updater" and they are ushered to the latest.
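The "CREATE script plus default/sample data script" advice is easy to automate. A sketch using SQLite as a lightweight stand-in for SQL Server Express (the scripts and names are illustrative):

```python
import sqlite3

CREATE_SCRIPT = "CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);"
SAMPLE_DATA_SCRIPT = "INSERT INTO person (name) VALUES ('Alice'), ('Bob');"

def rebuild_dev_db(path=":memory:"):
    # Recreate the schema, then load the versioned sample data,
    # so every environment can be rebuilt straight from source control
    conn = sqlite3.connect(path)
    conn.executescript(CREATE_SCRIPT)
    conn.executescript(SAMPLE_DATA_SCRIPT)
    return conn

conn = rebuild_dev_db()
count = conn.execute("SELECT COUNT(*) FROM person").fetchone()[0]
```

Both scripts live in SVN next to the code, so a desktop and a laptop stay in sync the same way the source does: update, rebuild, done.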
{ "language": "en", "url": "https://stackoverflow.com/questions/40957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I close a parent Form from child form in Windows Forms 2.0? I have a need to close a parent form from within a child form in a Windows application. What would be the best way to do this? A: I ran across this blog entry that looks like it will work and it uses the Event Handler concept from D2VIANT Answer http://www.dotnetcurry.com/ShowArticle.aspx?ID=125 Summary: Step 1: Create a new Windows application. Open Visual Studio 2005 or 2008. Go to File > New > Project > Choose Visual Basic or Visual C# in the ‘Project Types’ > Windows Application. Give the project a name and location > OK. Step 2: Add a new form to the project. Right click the project > Add > Windows Forms > Form2.cs > Add. Step 3: Now in the Form1, drag and drop a button ‘btnOpenForm’ and double click it to generate an event handler. Write the following code in it. Also add the frm2_FormClosed event handler as shown below: private void btnOpenForm_Click(object sender, EventArgs e) { Form2 frm2 = new Form2(); frm2.FormClosed += new FormClosedEventHandler(frm2_FormClosed); frm2.Show(); this.Hide(); } private void frm2_FormClosed(object sender, FormClosedEventArgs e) { this.Close(); } A: When you close a form in WinForms it disposes all of its children. So it's not a good idea. You need to do it asynchronously, for example you can send a message to the parent form. A: The Form class doesn't provide any kind of reference to the 'parent' Form, so there's no direct way to access the parent (unless it happens to be the MDI parent as well, in which case you could access it through the MDIParent property). You'd have to pass a reference to the parent in the constructor of the child, or a property and then remember to set it, and then use that reference to force the parent to close. A: Perhaps consider having the parent subscribe to an event on the child, and the child can fire that event whenever it wants to close the parent. The parent can then handle its own closing (along with the child's).
A: You are clearly not using the correct way to open and close forms. If you use any form of MVC or MVP this problem would not arise. So use a form of MVP or MVC to solve this problem. A: I agree with davidg; you can add a reference to the parent form to the child form's constructor, and then close the parent form as you need: private Form pForm; public ChildForm(Form parentForm) { pForm = parentForm; } private void closeParent() { if (this.pForm != null) this.pForm.Close(); this.pForm = null; } A: There's a very easy solution to that. Problem (To be sure) : Starts App(Main Form) > Open Child Form of Main Form VIA Button or Any Event > Closes (Main Form) But Child Form also Closes. Solution : Use : Process.Start("Your_App's_EXE_Full_Path.exe"); Example : Try this to get full path: * *string FullPath = Environment.CurrentDirectory + "\\YourAppName.exe"; *Process.Start(FullPath); *this.Close(); * *this way you'll get to keep every form you want to keep open.
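The event-subscription pattern from the FormClosed example can be reduced to language-neutral form. A Python sketch (no WinForms API here, just stand-in classes) of a child raising a "closed" event that the parent handles by closing itself:

```python
class ChildForm:
    # The child exposes a "closed" event the parent can subscribe to
    def __init__(self):
        self.closed_handlers = []

    def close(self):
        for handler in self.closed_handlers:
            handler(self)

class ParentForm:
    def __init__(self):
        self.is_open = True

    def open_child(self):
        child = ChildForm()
        # Equivalent of frm2.FormClosed += frm2_FormClosed in the summary above
        child.closed_handlers.append(self.on_child_closed)
        return child

    def on_child_closed(self, child):
        self.is_open = False  # the parent decides to close itself

parent = ParentForm()
child = parent.open_child()
child.close()
```

The child never needs a reference to the parent, which is why this is cleaner than passing the parent into the child's constructor.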
{ "language": "en", "url": "https://stackoverflow.com/questions/40962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Should I use window.onload or script block? I have a javascript function that manipulates the DOM when it is called (adds CSS classes, etc). This is invoked when the user changes some values in a form. When the document is first loading, I want to invoke this function to prepare the initial state (which is simpler in this case than setting up the DOM from the server side to the correct initial state). Is it better to use window.onload to do this functionality or have a script block after the DOM elements I need to modify? For either case, why is it better? For example: function updateDOM(id) { // updates the id element based on form state } should I invoke it via: window.onload = function() { updateDOM("myElement"); }; or: <div id="myElement">...</div> <script language="javascript"> updateDOM("myElement"); </script> The former seems to be the standard way to do it, but the latter seems to be just as good, perhaps better since it will update the element as soon as the script is hit, and as long as it is placed after the element, I don't see a problem with it. Any thoughts? Is one version really better than the other? A: Definitely use onload. Keep your scripts separate from your page, or you'll go mad trying to disentangle them later. A: Some JavaScript frameworks, such as mootools, give you access to a special event named "domready": Contains the window Event 'domready', which will execute when the DOM has loaded. To ensure that DOM elements exist when the code attempting to access them is executed, they should be placed within the 'domready' event. window.addEvent('domready', function() { alert("The DOM is ready."); }); A: window.onload on IE waits for the binary information to load also. It isn't a strict definition of "when the DOM is loaded". So there can be significant lag between when the page is perceived to be loaded and when your script gets fired. 
Because of this I would recommend looking into one of the plentiful JS frameworks (prototype/jQuery) to handle the heavy lifting for you. A: The onload event is considered the proper way to do it, but if you don't mind using a javascript library, jQuery's $(document).ready() is even better. $(document).ready(function(){ // manipulate the DOM all you want here }); The advantages are: * *Call $(document).ready() as many times as you want to register additional code to run - you can only set window.onload once. *$(document).ready() actions happen as soon as the DOM is complete - window.onload has to wait for images and such. I hope I'm not becoming The Guy Who Suggests jQuery On Every JavaScript Question, but it really is great. A: I've written lots of Javascript and window.onload is a terrible way to do it. It is brittle and waits until every asset of the page has loaded. So if one image takes forever or a resource doesn't time out for 30 seconds, your code will not run before the user can see/manipulate the page. Also, if another piece of Javascript decides to use window.onload = function() {}, your code will be blown away. The proper way to run your code when the page is ready is to wait until the element you need to change is ready/available. Many JS libraries have this as built-in functionality. Check out: * *http://docs.jquery.com/Events/ready#fn *http://developer.yahoo.com/yui/event/#onavailable A: While I agree with the others about using window.onload if possible for clean code, I'm pretty sure that window.onload will be called again when a user hits the back button in IE, but doesn't get called again in Firefox. (Unless they changed it recently). Edit: I could have that backwards. In some cases, it's necessary to use inline script when you want your script to be evaluated when the user hits the back button from another page, back to your page. Any corrections or additions to this answer are welcome... I'm not a javascript expert. 
A: @The Geek I'm pretty sure that window.onload will be called again when a user hits the back button in IE, but doesn't get called again in Firefox. (Unless they changed it recently). In Firefox, onload is called when the DOM has finished loading regardless of how you navigated to a page. A: My take is the former, because you can only have 1 window.onload function, while with inline script blocks you can have any number. A: onLoad, because it is far easier to tell what code runs when the page loads up than having to read down through scads of html looking for script tags that might execute.
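A recurring warning in these answers is that a second window.onload assignment silently replaces the first. The chaining workaround can be sketched in a few lines; the addLoadHandler helper is hypothetical, and a plain object stands in for window so the logic can be shown outside a browser:

```javascript
// Hypothetical helper: chain a new load handler onto whatever
// handler is already assigned, instead of overwriting it.
function addLoadHandler(target, fn) {
  var prev = target.onload;
  target.onload = function () {
    if (typeof prev === "function") prev(); // run the earlier handler first
    fn();
  };
}

// A plain object stands in for `window` here.
var fakeWindow = {};
var calls = [];
addLoadHandler(fakeWindow, function () { calls.push("first"); });
addLoadHandler(fakeWindow, function () { calls.push("second"); });

fakeWindow.onload(); // the browser would fire this when the page loads
console.log(calls.join(",")); // → first,second
```

Libraries like jQuery's $(document).ready() do essentially this bookkeeping for you (and fire on DOM readiness rather than full page load), which is why the answers above lean on them.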
{ "language": "en", "url": "https://stackoverflow.com/questions/40966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Is it possible to build an application for the LinkedIn platform? Do you know if it's possible to build an application for the LinkedIn platform? A: Yes, they have API at http://developer.linkedin.com/index.jspa, allowing access to the profile, connections, messaging and more. A: While LinkedIn has promised a public API for a very long time now, they have yet to deliver. No, there is no public LinkedIn API yet. IMO, their widgets (which there are only two of at the moment, which are very limited) don't count. They say that they are open to being contacted with specific uses for their API and they may give access to parts as needed - but that is if they accept your ideas for integration. They have been very picky with this - and have not accepted my attempts to integrate with LinkedIn yet, they tell me I have to wait with everyone else, apparently my applications are not "high-profile" enough. Sure, you'll find many Google results talking about their "promised" API, but they are empty promises and won't be of much help. A: Yes, Linkedin has an API: * *http://www.programmableweb.com/api/linkedin *http://blog.linkedin.com/blog/2007/12/the-intelligent.html So you could build an application that uses it. Update: (from second link) We’ll be phasing all of this in over the coming months and to get involved with the Intelligent Application Platform either for APIs, widgets, or hosted applications, send us an e-mail to [email protected] telling us what you want to build and what you need to build it. Since there are published Mashups using LinkedIn I would assume that means you can use the API even if the documentation isn't readily available. As a tip, in the future include links to what you found that didn't work, so we know not to give it to you again. I poked around a bit more and I found some more on their widgets which appears to be the main focus of their API.
{ "language": "en", "url": "https://stackoverflow.com/questions/40969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to use chrome to login to same site twice with different credentials? Even though chrome runs tabs as different processes it appears to not support this... any ideas or documentation I might have missed? A: Try two different incognito windows. A: If you want to stay logged in for a period of time, the best thing to do is to create a new profile for each set of credentials. Click the face icon in the top right-hand corner and select "new user." If you just want to log-in with different credentials once, it might be easier to use incognito mode, as rmmh said.
{ "language": "en", "url": "https://stackoverflow.com/questions/40992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is Google Chrome's V8 engine really that good? Does anyone have time to take a look at it? I've read a bit and it promises a lot, if it's half what they say, it'll change web development a lot A: Perhaps a bit anecdotal but comparing runs between Firefox and Chrome showed a significant difference in benchmarks. http://www2.webkit.org/perf/sunspider-0.9/sunspider.html Try for yourself. A: Meanwhile, at Microsoft: Consuming twice as much RAM as Firefox and saturating the CPU with nearly six times as many execution threads, Microsoft's latest beta release of Internet Explorer 8 is in fact more demanding on your PC than Windows XP itself, research firm Devil Mountain Software found in performance tests. According to the firm, which operates a community-based testing network, IE8 Beta 2 consumed 380MB of RAM and spawned 171 concurrent threads during a multi-tab browsing test of popular Web destinations Slashdot I wonder how @rjrapson came to that conclusion. Every blog post I see claims it's faster. A: I have compared Mozilla Firefox 3.0.1 and Google Chrome 0.2.149.27 on SunSpider JavaScript Benchmark with the following results: * *Firefox - total: 2900.0ms +/- 1.8% *Chrome - total: 1549.2ms +/- 1.7% and on V8 Benchmark Suite with the following results (higher score is better): * *Firefox - score: 212 *Chrome - score: 1842 and on Web Browser Javascript Benchmark with the following results: * *Firefox - total duration: 362 ms *Chrome - total duration: 349 ms Machine: Windows XP SP2, Intel Core2 DUO T7500 @ 2.2 GHz, 2 GB RAM All blog posts and articles that I've read so far also claim that V8 is clearly the fastest JavaScript engine out there. See for example - V8, TraceMonkey, SquirrelFish, IE8 BenchMarks "... Needless to say, Chrome's V8 blows away all the current builds of the next-generation of JavaScript VMs. 
Just to be clear, WebKit and FireFox engines haven't even hit beta, but it looks like the performance bar has just been set to an astronomical height by the V8 Team." A: The speed initially seemed substantially improved. One interesting thing is that it keeps locking up the Google Reader tab; it's gotten the sad-face at least 5 times this morning... A: It's really speedy. Visibly so. I was pretty impressed with its performance compared with Firefox 3. Already made it my default browser. A: The browser is incredibly fast in general, and Javascript is very fast in particular. Edit: The benchmark showed Chrome to be 1.73x faster on average than FF3, and 14.8x faster on average than IE 7. String manipulation is IE 7's weak point, which I'm told has been improved greatly in IE 8. A: Yes, V8 is extremely fast on Vista x86 -- up to 50 times as fast as IE 7 for most benchmarks I tried. More impressively, GMail running under Chrome had one-quarter the memory footprint of GMail running under IE 7. This can probably be attributed in large part to V8. A: I am finding it visibly much faster on Vista x64 than IE8 and FF3. A: It's two times faster than Firefox 3 on my Windows XP box. FWIW, the updates in Fx3.1 are supposed to make it an order of magnitude faster. A: I've compared it to Firefox and Internet Explorer using this link: http://celtickane.com/2009/07/javascript-speed-test-2009-browsers/ (was http://celtickane.com/webdesign/jsspeed.php) The difference is impressive. 212ms in Chrome, 341ms in Firefox 3, and 2188ms for Internet Explorer 7. A: I ran the aforementioned SunSpider JavaScript benchmark on FF3 and Chrome and got over a 2x speed increase moving from FF3 to Chrome (on a Vista 64 system - Core 2 duo 6600 2.4GHz, 2GB RAM). The links above show you my results - I'm very interested to see what, if any, difference the underlying OS makes. 
That being said, I agree with Google that Javascript is becoming more and more important, and that the other browser makers should spend some time on optimizing it. I love being able to drag and drop tabs - that's something I've needed for over 2 years now... -Adam A: It's definitely fast. Gmail, Google Reader and Yahoo mail all load instantly. Can't say that for FF or Opera. A: Yes, I have seen the benchmarks and V8 does appear to be objectively faster, but as for whether it'll change web programming a lot, I personally do not think the bottleneck is currently in JavaScript, but rather in bandwidth
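For the curious, suites like SunSpider boil down to timing tight loops of JavaScript work. A toy sketch of the idea (the workload here is arbitrary; absolute timings vary wildly between engines, which is exactly what the suites measure):

```javascript
// Time a function and return both its result and the elapsed milliseconds.
function bench(fn) {
  var start = Date.now();
  var result = fn();
  return { ms: Date.now() - start, result: result };
}

// Arbitrary string-manipulation workload, the kind IE 7 was weakest at.
var run = bench(function () {
  var s = "";
  for (var i = 0; i < 20000; i++) s += "x";
  return s.length;
});

console.log(run.result);  // → 20000
console.log(run.ms >= 0); // → true (the actual figure is engine-dependent)
```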
{ "language": "en", "url": "https://stackoverflow.com/questions/40994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to Convert a StreamReader into an XMLReader object in .Net 2.0/C# Here's a quick question I've been banging my head against today. I'm trying to convert a .Net dataset into an XML stream, transform it with an xsl file in memory, then output the result to a new XML file. Here's the current solution: string transformXML = @"pathToXslDocument"; XmlDocument originalXml = new XmlDocument(); XmlDocument transformedXml = new XmlDocument(); XslCompiledTransform transformer = new XslCompiledTransform(); DataSet ds = new DataSet(); string filepath; originalXml.LoadXml(ds.GetXml()); //data loaded prior StringBuilder sb = new StringBuilder(); XmlWriter writer = XmlWriter.Create(sb); transformer.Load(transformXML); transformer.Transform(originalXml, writer); //no need to select the node transformedXml.LoadXml(sb.ToString()); transformedXml.Save(filepath); writer.Close(); Here's the original code: BufferedStream stream = new BufferedStream(new MemoryStream()); DataSet ds = new DataSet(); da.Fill(ds); ds.WriteXml(stream); StreamReader sr = new StreamReader(stream, true); stream.Position = 0; //I'm not certain if this is necessary, but for the StreamReader to read the text the position must be reset. XmlReader reader = XmlReader.Create(sr, null); //Problem is created here, the XmlReader is created with none of the data from the StreamReader XslCompiledTransform transformer = new XslCompiledTransform(); transformer.Load(@"<path to xsl file>"); transformer.Transform(reader, null, writer); //Exception is thrown here, though the problem originates from the XmlReader.Create(sr, null) For some reason in the transformer.Transform method, the reader has no root node; in fact, the reader isn't reading anything from the StreamReader. My question is: what is wrong with this code? Secondly, is there a better way to convert/transform/store a dataset into XML? Edit: Both answers were helpful and technically aku's was closer. 
However I am leaning towards a solution that more closely resembles Longhorn's after trying both solutions. A: I'm not sure, but it seems that you didn't reset the position in the stream before passing it to the XmlReader. Try to seek to the beginning of your stream before trying to read from it. Also it may be necessary to close/flush the stream after you have written some data to it. EDIT: Just tried the following code and it worked perfectly: BufferedStream stream = new BufferedStream(new MemoryStream()); stream.Write(Encoding.ASCII.GetBytes("<xml>foo</xml>"), 0, "<xml>foo</xml>".Length); stream.Seek(0, SeekOrigin.Begin); StreamReader sr = new StreamReader(stream); XmlReader reader = XmlReader.Create(sr); while (reader.Read()) { Console.WriteLine(reader.Value); } stream.Close(); A: You must select the root node. This doesn't use Datasets, but I use this function every day and it works great. System.Xml.XmlDocument orgDoc = new System.Xml.XmlDocument(); orgDoc.LoadXml(orgXML); // MUST SELECT THE ROOT NODE XmlNode transNode = orgDoc.SelectSingleNode("/"); System.Text.StringBuilder sb = new System.Text.StringBuilder(); XmlWriter writer = XmlWriter.Create(sb); System.IO.StringReader stream = new System.IO.StringReader(transformXML); XmlReader reader = XmlReader.Create(stream); System.Xml.Xsl.XslCompiledTransform trans = new System.Xml.Xsl.XslCompiledTransform(); trans.Load(reader); trans.Transform(transNode, writer); XmlDocument doc = new XmlDocument(); doc.LoadXml(sb.ToString()); return doc; A: Please take a look at this and use it: using (MemoryStream memStream = new MemoryStream()) { memStream.Write(Encoding.UTF8.GetBytes(xmlBody), 0, xmlBody.Length); memStream.Seek(0, SeekOrigin.Begin); using (StreamReader reader = new StreamReader(memStream)) { // xml reader setting. XmlReaderSettings xmlReaderSettings = new XmlReaderSettings() { IgnoreComments = true, IgnoreWhitespace = true, }; // xml reader create. 
using (XmlReader xmlReader = XmlReader.Create(reader, xmlReaderSettings)) { XmlSerializer xmlSerializer = new XmlSerializer(typeof(LoginInfo)); myObject = (LoginInfo)xmlSerializer.Deserialize(xmlReader); } } }
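The diagnosis that settled this thread is not C#-specific: after writing, a stream's cursor sits at the end, so a reader attached to it sees nothing until the position is rewound. A toy JavaScript model of that pitfall (this mimics the MemoryStream behaviour; it is not the .NET API):

```javascript
// Minimal in-memory "stream" with a single read/write cursor,
// mimicking why the C# code above needed stream.Position = 0.
function MemoryStream() {
  this.data = "";
  this.position = 0;
}
MemoryStream.prototype.write = function (s) {
  this.data += s;
  this.position = this.data.length; // writing leaves the cursor at the end
};
MemoryStream.prototype.readAll = function () {
  var out = this.data.slice(this.position);
  this.position = this.data.length;
  return out;
};

var stream = new MemoryStream();
stream.write("<xml>foo</xml>");
console.log(JSON.stringify(stream.readAll())); // → "" (reader starts at the end)
stream.position = 0; // the Seek(0, SeekOrigin.Begin) step
console.log(stream.readAll()); // → <xml>foo</xml>
```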
{ "language": "en", "url": "https://stackoverflow.com/questions/40999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: IRAPIStream COM Interface in .NET I'm trying to use the OpenNETCF RAPI class to interact with a Windows Mobile device using the RAPI.Invoke() method. According to the following article: http://blog.opennetcf.com/ncowburn/2007/07/27/HOWTORetrieveTheDeviceIDFromTheDesktop.aspx You can do the communication in either block or stream mode. I have used block mode before, but now I need to do something a bit more complicated with a lot more data and continuous communication, and therefore need to use the stream mode. Unfortunately in that article, and basically everywhere else, there is no explanation of how to use IRAPIStream in .NET. I have found C/C++ documentation, but my desktop app needs to be written in C#. Does anyone know how to properly implement the IRAPIStream COM interface in .NET? And better yet, has anyone actually used RAPI.Invoke() with IRAPIStream before? Examples would be much appreciated. Edit: Upon a closer look at the RAPI class documentation, I realized that the Invoke() method doesn't support the stream interface.... so OpenNETCF is likely out, but maybe there is still a way to do it? A: I have found that generally the most performant and stable way to push/pull large amounts of data off a device over ActiveSync is to use a socket. Early on we used CeRapiInvoke and a stream to pull data down off the device, but ditched this early on in favour of using TCP/IP over a socket.
{ "language": "en", "url": "https://stackoverflow.com/questions/41009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Smarty integration into the CodeIgniter framework A Little Background Information: I've been looking at a few PHP frameworks recently, and it came down to two. The Zend Framework or CodeIgniter. I prefer CodeIgniter, because of its simple design. It's very bare-bones, and it is just kept simple. The thing I don't like though is the weak template system. The template system is important for me, because I will be working with another designer. Being able to give him a good template system is a big plus. Zend was the second choice, because of the better template system that is built in. Zend is a different beast though compared to CodeIgniter. It emphasizes "loose coupling between modules", but is a bigger framework. I don't like to feel like I have many things running under the hood that I never use. That is unnecessary overhead in my opinion, so I thought about putting a template system into CodeIgniter: Smarty. Question(s): How easy/hard is the process to integrate Smarty into CodeIgniter? From my initial scan of the CodeIgniter documentation, I can see that the layout of the framework is easy enough to understand, and I anticipate no problems. I want to know if anyone has used it before, and is therefore aware of any "gotchas" you may have experienced that are going to make this harder than it should be or impossible to pull off. I also want to know if this is a good thing to do at all. Is the template system in CodeIgniter enough for normal use? Are there any other template modules that are good for CodeIgniter aside from Smarty? Am I better off with Zend Framework? Is any wheel being invented here? A: Sorry to resurrect an old question - but none of the answers have been flagged as "accepted" yet. There's a library called "template" that does a great job of allowing you to use just about any template parser you want: Template CI Library - V1.4.1 The syntax is pretty easy for integrating into your CI application and the Smarty integration is spot on. 
A: Slightly OT, hope you don't mind... I'm a Zend Framework user and I think it's worth saying that the loose coupling means you don't need to include any files you're not actively using. Hopefully this negates your concern about unnecessary overhead. With the layouts stuff added in a recent release of ZF, its templating is really hard to fault... and it's completely pluggable as Favio mentions. The more I use ZF, the more I like it; they do things the way I would do them! A: I did a quick google search and found the following: http://devcha.blogspot.com/2007/12/smarty-as-template-engine-in-code.html http://codeigniter.com/forums/viewthread/67127/ If the designer is not familiar with Smarty, I think it's almost the same as if you use the existing CodeIgniter templating system (which leaves everything to PHP actually). It also depends on the complexity of the project at hand. You can also hook Smarty with Zend Framework. It's more complex than with CodeIgniter, but there's already a primer on how to do exactly that in the ZF documentation. http://framework.zend.com/manual/en/zend.view.scripts.html Plus lots of tutorials on the net. In my opinion it's almost the same, you can use pure PHP or Smarty as your template "engine", so it depends on the project. Also, compare a developer who has extensive experience and already has a library of view helpers so she uses pure PHP, versus a designer who doesn't know anything about PHP, but has extensive experience with Smarty. Sometimes decisions have to be based on who is going to do what. A: Check out this custom CodeIgniter templating library. I've already used it on several projects and it is easy to use. I know this post is late but it's worth checking out. A: It doesn't appear there has been an answer selected for this question nor has an up-to-date solution been given to work with the latest version of Codeigniter (2.0) and the latest version of Smarty (3.0.5). 
This library allows you to use Smarty 3 with CodeIgniter 2.0 so you can use Smarty 3 specific features like template inheritance. http://ilikekillnerds.com/2010/11/using-smarty-3-in-codeigniter-2-a-really-tiny-ci-library/ A: Integrating Smarty in CodeIgniter? It is a breeze! The template system in CodeIgniter is very basic. Follow these steps for Smarty 3 in CI 3: Download CodeIgniter 3 Download Smarty 3 and put its content in the 'application/third_party/smarty' folder Create a 'Custom_smarty.php' file in 'application/libraries' and add this code: <?php if ( ! defined('BASEPATH')) exit('No direct script access allowed'); require_once(APPPATH.'third_party/smarty/Smarty.class.php'); class Custom_smarty extends Smarty { function __construct() { parent::__construct(); $this->setTemplateDir(APPPATH.'views/templates/'); $this->setCompileDir(APPPATH.'views/templates_c/'); } } ?> Create 'templates' & 'templates_c' folders inside the 'application/views' folder Create a simple 'test.tpl' file in the 'application/views/templates' folder Open 'autoload.php' in the 'application/config' folder and add: $autoload['libraries'] = array('custom_smarty'); And inside a controller: $this->custom_smarty->display('test.tpl'); If you catch the error Unable to write file, first be sure the templates_c folder exists. If you are working on localhost, set the permissions: sudo chmod -R 777 templates_c. Otherwise contact your hosting service. You can also use another template engine like Twig.
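For anyone weighing up what Smarty actually gives the designer, its core job is placeholder substitution; a minimal sketch of that idea (real Smarty compiles templates to PHP and adds caching, loops, modifiers and template inheritance on top):

```javascript
// Replace Smarty-style {$name} placeholders with supplied values;
// unknown placeholders are left untouched.
function renderTemplate(tpl, vars) {
  return tpl.replace(/\{\$(\w+)\}/g, function (match, name) {
    return Object.prototype.hasOwnProperty.call(vars, name) ? String(vars[name]) : match;
  });
}

var out = renderTemplate("Hello {$user}, you have {$count} messages.",
                         { user: "Ana", count: 3 });
console.log(out); // → Hello Ana, you have 3 messages.
```

The appeal for a designer is that the template file contains only markup and {$placeholders}, never application logic.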
{ "language": "en", "url": "https://stackoverflow.com/questions/41010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: How do you access two databases in Grails Grails makes it very easy to configure datasources for different environments (development, test, production) in its DataSources.groovy file, but there seems to be no facility for configuring multiple datasources in one environment. What do I do if I need to access several databases from the same Grails application? A: There is now a Grails plugin that enables the use of multiple datasources directly with Grails' GORM layer: http://burtbeckwith.com/blog/?p=70 A: Connecting different databases in different domain classes is very easy in Grails 2.x.x. For example: development { dataSource {//DEFAULT data source . . } dataSource_admin { //Convention is dataSource_name url = "//db url" driverClassName = "oracle.jdbc.driver.OracleDriver" username = "test" password = 'test123' } dataSource_users { } } You can use any datasource in your domain classes by class Role{ static mapping = { datasource 'users' } } class Product{ static mapping = { datasource 'admin' } } For more details look at this A: If using Grails 2.0 or higher, there is no need for the plugin; it is supported natively. 
http://www.grails.org/doc/latest/guide/single.html#multipleDatasources A: Grails 2.0 can handle multiple data sources without a plugin: Example with a different datasource for the dev(h2 dataSource) and test(mysql dataSource_mysql) environments: DataSource.groovy: dataSource { pooled = true driverClassName = "org.h2.Driver" username = "sa" password = "" } dataSource_mysql { dialect = org.hibernate.dialect.MySQLInnoDBDialect driverClassName = 'com.mysql.jdbc.Driver' username = "user" password = "pass" url = "jdbc:mysql://mysqldb.com/DBNAME" } hibernate { cache.use_second_level_cache = true cache.use_query_cache = false cache.region.factory_class = 'net.sf.ehcache.hibernate.EhCacheRegionFactory' } // environment specific settings environments { development { dataSource { configClass = HibernateFilterDomainConfiguration.class dbCreate = "update" // one of 'create', 'create-drop', 'update', 'validate', '' url = "jdbc:h2:file:../devDb;MVCC=TRUE" sqlLogging = true } } test { dataSource_mysql { configClass = HibernateFilterDomainConfiguration.class dbCreate = "create" // one of 'create', 'create-drop', 'update', 'validate', '' sqlLogging = true } } production { dataSource { dbCreate = "update" url = "jdbc:h2:prodDb;MVCC=TRUE;LOCK_TIMEOUT=10000" pooled = true properties { maxActive = -1 minEvictableIdleTimeMillis=1800000 timeBetweenEvictionRunsMillis=1800000 numTestsPerEvictionRun=3 testOnBorrow=true testWhileIdle=true testOnReturn=true validationQuery="SELECT 1" } } } } A: Do you really want to do this? In my experience, the usual scenario here is: * *An application manages its own data in its own database schema *Often, the application will require data from other sources (for example, so reference data doesn't get copied and pasted) I've normally always had the luxury of all the schemas residing on the one database instance. 
Therefore, my application: * *only has one database connection - which is to the schema it owns and has read/write access *the other applications 'export' their data via views *my application has read access to those views, and has a synonym for that view making it appear local The reason behind using views is so that the application that is exposing the data * *knows explicitly that it is being exported and what is being exported *does not expose the internal structure of the schema (so if the internal structure changes, as long as the view is correct the consuming apps don't know) I haven't actually had to do this with a Grails application, but the approach should work. Another approach to sharing data across applications is to create a web service to expose the data. Grails makes this easy. Hope that helps, but this approach may not be applicable for all situations. A: The following post seems to be the best source of information on the subject: How to get mutli-dataSource in grails It boils down to: * *Define datasource1 in DevelopmentDataSource *Define datasource2 in resources.xml *Write a DAO for CRUD of the domain objects using datasource2 *In hibernate.cfg.xml, list all domain objects. Only the first datasource will have dynamic finder methods. If it's a really simple query you are after and don't mind not having the ORM features, you could use Groovy SQL or the native SQL features of Hibernate.
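Stripped of GORM specifics, the per-domain-class datasource mapping in the answers above is a name-to-configuration lookup with a default fallback. A minimal sketch (the connection details and the dataSourceFor helper are illustrative placeholders, not a Grails API):

```javascript
// Named datasource configs, mirroring dataSource / dataSource_admin /
// dataSource_users in DataSources.groovy (values are placeholders).
var dataSources = {
  "default": { url: "jdbc:h2:devDb", username: "sa" },
  admin:     { url: "jdbc:mysql://db/admin", username: "test" },
  users:     { url: "jdbc:mysql://db/users", username: "test" }
};

// Each "domain class" may declare a datasource; otherwise the default is used.
function dataSourceFor(domainClass) {
  return dataSources[domainClass.datasource || "default"];
}

var Role    = { name: "Role",    datasource: "users" };
var Product = { name: "Product", datasource: "admin" };
var Book    = { name: "Book" }; // no mapping block, falls back to default

console.log(dataSourceFor(Role).url);      // → jdbc:mysql://db/users
console.log(dataSourceFor(Book).username); // → sa
```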
{ "language": "en", "url": "https://stackoverflow.com/questions/41018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: FXRuby FXFileDialog box default directory In FXRuby; how do I set the FXFileDialog to be at the home directory when it opens? A: Here's an exceedingly lazy way to do it: #!/usr/bin/ruby require 'rubygems' require 'fox16' include Fox theApp = FXApp.new theMainWindow = FXMainWindow.new(theApp, "Hello") theButton = FXButton.new(theMainWindow, "Hello, World!") theButton.tipText = "Push Me!" iconFile = File.open("icon.jpg", "rb") theButton.icon = FXJPGIcon.new(theApp, iconFile.read) theButton.iconPosition = ICON_ABOVE_TEXT iconFile.close theButton.connect(SEL_COMMAND) { fileToOpen = FXFileDialog.getOpenFilename(theMainWindow, "window name goes here", `echo $HOME`.chomp + "/") } FXToolTip.new(theApp) theApp.create theMainWindow.show theApp.run This relies on you being on a *nix box (or having the $HOME environment variable set). The lines that specifically answer your question are: theButton.connect(SEL_COMMAND) { fileToOpen = FXFileDialog.getOpenFilename(theMainWindow, "window name goes here", `echo $HOME`.chomp + "/") } Here, the first argument is the window that owns the dialog box, the second is the title of the window, and the third is the default path to start at (you need the "/" at the end otherwise it'll start a directory higher with the user's home folder selected). Check out this link for more info on FXFileDialog.
{ "language": "en", "url": "https://stackoverflow.com/questions/41024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What's a good bit of JS or JQuery for horizontally scrolling news ticker I am looking for a little bit of JQuery or JS that allows me to produce a horizontally scrolling "news ticker" list. The produced HTML needs to be standards compliant as well. I have tried liScroll but this has a habit of breaking (some content ends up on a second line at the start of the scroll), especially with longer lists. I have also tried this News Ticker but when a DOCTYPE is included the scrolling will jolt rather than cycle smoothly at the end of each cycle. Any suggestions are appreciated. Edit So thanks to Matt Hinze's suggestion I realised I could do what I wanted to do with JQuery animate (I require continuous scrolling not discrete scrolling like the example). However, I quickly ran into similar problems to those I was having with liScroll and after all that realised a CSS issue (as always) was responsible. Solution: liScroll - change the default 'var stripWidth = 0' to something like 100, to give a little space and avoid new line wrapping. A: Smooth Div Scroll can also be used as a news ticker/stock ticker. It can pause on mouse over or mouse down and it can loop endlessly if you want it to. Here's the example with a running ticker. A: http://www.emrecamdere.com/news_scroller_jquery.html A: Here are 2 other solutions that seem a bit simpler to implement: * *newsticker *News ticker (BBC style) A: An alternative solution would be the jQuery webTicker; it's very similar to liScroll but resolves the problem of the ticker stopping after the whole list completes, while also adding some new features like direction of movement, speed, and the ability to use multiple tickers per page. A: The second-line bug arising in liScroll can be "fixed" by adding a list item containing a non-breakable space entity <li>&nbsp;</li> at the end of each list ;)
{ "language": "en", "url": "https://stackoverflow.com/questions/41027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Find in Files: Search all code in Team Foundation Server Is there a way to search the latest version of every file in TFS for a specific string or regex? This is probably the only thing I miss from Visual Source Safe... Currently I perform a Get Latest on the entire codebase and use Windows Search, but this gets quite painful with over 1GB of code in 75,000 files. EDIT: Tried the powertools mentioned, but the "Wildcard Search" option appears to only search filenames and not contents. UPDATE: We have implemented a customised search option in an existing MOSS (Search Server) installation. A: Team Foundation Server 2015 (on-premises) and Visual Studio Team Services (cloud version) include built-in support for searching across all your code and work items. You can do simple string searches like foo, boolean operations like foo OR bar or more complex language-specific things like class:WebRequest You can read more about it here: https://www.visualstudio.com/en-us/docs/search/overview A: We have set up a solution for Team Foundation Server Source Control (not SourceSafe as you mention) similar to what Grant suggests; scheduled TF Get, Search Server Express. However the IFilter used for C# files (text) was not giving the results we wanted, so we convert source files to .htm files. We can now add additional meta-data to the files such as: * *Author (we define it as the person that last checked in the file) *Color coding (on our todo-list) *Number of changes indicating potential design problems (on our todo-list) *Integrate with the VSTS IDE like Koders SmartSearch feature *etc. We would however prefer a protocolhandler for TFS Source Control, and a dedicated source code IFilter for a much more targeted solution. A: Okay, * *TFS2008 Power Tools do not have a find-in-files function. "The Find in Source Control tools provide the ability to locate files and folders in source control by the item’s status or with a wildcard expression." 
*There is a Windows program with this functionality posted on CodePlex. I just installed and tested this and it works well. A: This is now possible as of TFS 2015 by using the Code Search plugin. https://marketplace.visualstudio.com/items?itemName=ms.vss-code-search The search is done via the web interface, and does not require you to download the code to your local machine which is nice. A: Another solution is to use "ctrl+shift+F". You can change the search location to a local directory rather than a solution or project. This will just take the place of the desktop search and you'll still need to get the latest code, but it will allow you to remain within Visual Studio to do your searching. A: In my case, writing a small utility in C# helped. Links that helped me - http://pascallaurin42.blogspot.com/2012/05/tfs-queries-searching-in-all-files-of.html How to list files of a team project using tfs api? using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.TeamFoundation.Client; using Microsoft.TeamFoundation.VersionControl.Client; using Microsoft.TeamFoundation.Framework.Client; using System.IO; namespace TFSSearch { class Program { static string[] textPatterns = new[] { "void main(", "exception", "RegisterScript" }; //Text to search static string[] filePatterns = new[] { "*.cs", "*.xml", "*.config", "*.asp", "*.aspx", "*.js", "*.htm", "*.html", "*.vb", "*.asax", "*.ashx", "*.asmx", "*.ascx", "*.master", "*.svc"}; //file extensions static void Main(string[] args) { try { var tfs = TfsTeamProjectCollectionFactory .GetTeamProjectCollection(new Uri("http://{tfsserver}:8080/tfs/}")); // one some servers you also need to add collection path (if it not the default collection) tfs.EnsureAuthenticated(); var versionControl = tfs.GetService<VersionControlServer>(); StreamWriter outputFile = new StreamWriter(@"C:\Find.txt"); var allProjs = versionControl.GetAllTeamProjects(true); foreach (var teamProj in allProjs) { foreach (var 
filePattern in filePatterns) { var items = versionControl.GetItems(teamProj.ServerItem + "/" + filePattern, RecursionType.Full).Items .Where(i => !i.ServerItem.Contains("_ReSharper")); //skipping resharper stuff foreach (var item in items) { List<string> lines = SearchInFile(item); if (lines.Count > 0) { outputFile.WriteLine("FILE:" + item.ServerItem); outputFile.WriteLine(lines.Count.ToString() + " occurrence(s) found."); outputFile.WriteLine(); } foreach (string line in lines) { outputFile.WriteLine(line); } if (lines.Count > 0) { outputFile.WriteLine(); } } } outputFile.Flush(); } } catch (Exception e) { string ex = e.Message; Console.WriteLine("!!EXCEPTION: " + e.Message); Console.WriteLine("Continuing... "); } Console.WriteLine("========"); Console.Read(); } // Define other methods and classes here private static List<string> SearchInFile(Item file) { var result = new List<string>(); try { var stream = new StreamReader(file.DownloadFile(), Encoding.Default); var line = stream.ReadLine(); var lineIndex = 0; while (!stream.EndOfStream) { if (textPatterns.Any(p => line.IndexOf(p, StringComparison.OrdinalIgnoreCase) >= 0)) result.Add("=== Line " + lineIndex + ": " + line.Trim()); line = stream.ReadLine(); lineIndex++; } } catch (Exception e) { string ex = e.Message; Console.WriteLine("!!EXCEPTION: " + e.Message); Console.WriteLine("Continuing... "); } return result; } } } A: There is another alternative solution that seems more attractive. * *Set up a search server - could be any Windows machine/server *Set up a TFS notification service* (Bissubscribe) to get, delete, and update files every time a check-in happens. So this is a web service that acts like a listener on the TFS server, and updates/syncs the files and folders on the Search server.
- this will dramatically improve the accuracy (live search), and avoid the one-time load of making periodic gets *Set up an indexing service/Windows indexed search on the Search server for the root folder *Expose a web service to return search results Now with all the above set up, you have a few options for the client: * *Set up a web page to call the search service and format the results to show on the webpage - you can also integrate this webpage inside Visual Studio (through a macro or an add-in) *Create a Windows client interface (WinForms/WPF) to call the search service and format the results and show them on the UI - you can also integrate this client tool inside Visual Studio via VSPackages or an add-in Update: I did go this route, and it has been working nicely. Just wanted to add to this. Reference links: * *Use this tool instead of bissubscribe.exe *Handling TFS events *Team System Notifications A: If you install the TFS 2008 Power Tools you will get a "Find in Source Control" action in the Team Explorer right-click menu. TFS2008 Power Tools A: There is currently no way to do this out of the box, but there is a User Voice suggestion for adding it: http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/2037649-implement-indexed-full-text-search-of-work-items While I doubt it is as simple as flipping a switch, if everyone that has viewed this question voted for it, MS would probably implement something. Update: Just read Brian Harry's blog, which shows this request as being on their radar, and the online version of Visual Studio has limited support for searching where Git is used as the VCS: http://blogs.msdn.com/b/visualstudioalm/archive/2015/02/13/announcing-limited-preview-for-visual-studio-online-code-search.aspx. From this I think it's fair to say it is just a matter of time... Update 2: There is now a Microsoft-provided extension, Code Search, which enables searching in code as well as in work items.
A: This "search for a file" link explains how to find a file; I did have to muck around with the advice to make it work. * *cd "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE" *tf dir "$/*.sql" /recursive /server:http://mytfsserver:8080/tfs In the case of the cd command, I performed it because I was looking for the tf.exe file. It was easier to just start from that directory versus adding the whole path. Now that I understand how to make this work, I'd use the absolute path in quotes. In the case of the tf search, I started at the root of the server with $/ and I searched for all files that end with sql, i.e. *.sql. If you don't want to start at the root, then use "$/myproject/*.sql" instead. Oh! This does not solve the search-in-file part of the question, but my Google search brought me here to find files among other links. A: Assuming you have Notepad++, an often-missed feature is 'Find in files', which is extremely fast and comes with filters, regular expressions, replace and all the N++ goodies. A: This add-in claims to have the functionality that I believe you seek: Team Foundation Sidekicks
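A: For completeness, the "write a small utility" route above doesn't have to go through the TFS API at all: once you have a local workspace copy on disk, a plain recursive text scan does the job. A minimal Java sketch of that idea (the class and method names are made up for illustration):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class WorkspaceGrep {
    // Returns the files under root whose contents contain term (case-insensitive).
    public static List<Path> filesContaining(Path root, String term) {
        String needle = term.toLowerCase();
        try (Stream<Path> paths = Files.walk(root)) {
            return paths.filter(Files::isRegularFile)
                        .filter(p -> contains(p, needle))
                        .collect(Collectors.toList());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    private static boolean contains(Path file, String needle) {
        try {
            return new String(Files.readAllBytes(file)).toLowerCase().contains(needle);
        } catch (IOException e) {
            return false; // unreadable file: just skip it
        }
    }
}
```

Point it at the root of your workspace mapping and it returns the files that mention the search term; it is slower than an indexed search but needs no server-side setup.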
Q: XML dataset in Crystal Reports I am trying to print a report from within an InfoPath template. So my dataset is an XML DOM that I will load into the Crystal Report at runtime. But how do I define the dataset off which the Crystal Report is developed? Crystal Reports has a great tool to build a dataset from an SQL database. Is there something similar for XML schema that I am missing? A: Right now I don't have Crystal Reports installed on my machine, but if I remember correctly you can select an XML file as the source of your report. I believe that you can also select the XSD with the data definition for your XML file. In my case, since I was working with a dataset, I would run my application and save the XML representation of the dataset with dataset.WriteXml(true) so that I would end up with an XML file that includes the data definition. A: I see the ability to pick XML in articles about CR on the web, but I don't have it available on mine. Could this be because I am using the CR that is bundled with Visual Studio 2008?
Q: Can I have polymorphic containers with value semantics in C++? As a general rule, I prefer using value rather than pointer semantics in C++ (ie using vector<Class> instead of vector<Class*>). Usually the slight loss in performance is more than made up for by not having to remember to delete dynamically allocated objects. Unfortunately, value collections don't work when you want to store a variety of object types that all derive from a common base. See the example below. #include <iostream> #include <vector> using namespace std; class Parent { public: Parent() : parent_mem(1) {} virtual void write() { cout << "Parent: " << parent_mem << endl; } int parent_mem; }; class Child : public Parent { public: Child() : child_mem(2) { parent_mem = 2; } void write() { cout << "Child: " << parent_mem << ", " << child_mem << endl; } int child_mem; }; int main(int, char**) { // I can have a polymorphic container with pointer semantics vector<Parent*> pointerVec; pointerVec.push_back(new Parent()); pointerVec.push_back(new Child()); pointerVec[0]->write(); pointerVec[1]->write(); // Output: // // Parent: 1 // Child: 2, 2 // But I can't do it with value semantics vector<Parent> valueVec; valueVec.push_back(Parent()); valueVec.push_back(Child()); // gets turned into a Parent object :( valueVec[0].write(); valueVec[1].write(); // Output: // // Parent: 1 // Parent: 2 } My question is: Can I have my cake (value semantics) and eat it too (polymorphic containers)? Or do I have to use pointers? A: You might also consider boost::any. I've used it for heterogeneous containers. When reading the value back, you need to perform an any_cast. It will throw a bad_any_cast if it fails. If that happens, you can catch and move on to the next type. I believe it will throw a bad_any_cast if you try to any_cast a derived class to its base. I tried it: // But you sort of can do it with boost::any.
vector<any> valueVec; valueVec.push_back(any(Parent())); valueVec.push_back(any(Child())); // remains a Child, wrapped in an Any. Parent p = any_cast<Parent>(valueVec[0]); Child c = any_cast<Child>(valueVec[1]); p.write(); c.write(); // Output: // // Parent: 1 // Child: 2, 2 // Now try casting the child as a parent. try { Parent p2 = any_cast<Parent>(valueVec[1]); p2.write(); } catch (const boost::bad_any_cast &e) { cout << e.what() << endl; } // Output: // boost::bad_any_cast: failed conversion using boost::any_cast All that being said, I would also go the shared_ptr route first! Just thought this might be of some interest. A: While searching for an answer to this problem, I came across both this and a similar question. In the answers to the other question you will find two suggested solutions: * *Use std::variant or boost::variant and a visitor pattern. This solution makes it hard to add new types, but easy to add new functionality. *Use a wrapper class similar to what Sean Parent presents in his talk. This solution makes it hard to add new functionality, but easy to add new types. The wrapper defines the interface you need for your classes and holds a pointer to one such object. The implementation of the interface is done with free functions. Here is an example implementation of this pattern: class Shape { public: template<typename T> Shape(T t) : container(std::make_shared<Model<T>>(std::move(t))) {} friend void draw(const Shape &shape) { shape.container->drawImpl(); } // add more functions similar to draw() here if you wish // remember also to add a wrapper in the Concept and Model below private: struct Concept { virtual ~Concept() = default; virtual void drawImpl() const = 0; }; template<typename T> struct Model : public Concept { Model(T x) : m_data(move(x)) { } void drawImpl() const override { draw(m_data); } T m_data; }; std::shared_ptr<const Concept> container; }; Different shapes are then implemented as regular structs/classes.
You are free to choose if you want to use member functions or free functions (but you will have to update the above implementation to use member functions). I prefer free functions: struct Circle { const double radius = 4.0; }; struct Rectangle { const double width = 2.0; const double height = 3.0; }; void draw(const Circle &circle) { cout << "Drew circle with radius " << circle.radius << endl; } void draw(const Rectangle &rectangle) { cout << "Drew rectangle with width " << rectangle.width << endl; } You can now add both Circle and Rectangle objects to the same std::vector<Shape>: int main() { std::vector<Shape> shapes; shapes.emplace_back(Circle()); shapes.emplace_back(Rectangle()); for (const auto &shape : shapes) { draw(shape); } return 0; } The downside of this pattern is that it requires a large amount of boilerplate in the interface, since each function needs to be defined three times. The upside is that you get copy-semantics: int main() { Shape a = Circle(); Shape b = Rectangle(); b = a; draw(a); draw(b); return 0; } This produces: Drew rectangle with width 2 Drew rectangle with width 2 If you are concerned about the shared_ptr, you can replace it with a unique_ptr. However, it will no longer be copyable and you will have to either move all objects or implement copying manually. Sean Parent discusses this in detail in his talk and an implementation is shown in the above mentioned answer. A: Since the objects of different classes will have different sizes, you would end up running into the slicing problem if you store them as values. One reasonable solution is to store container safe smart pointers. I normally use boost::shared_ptr which is safe to store in a container. Note that std::auto_ptr is not. vector<shared_ptr<Parent>> vec; vec.push_back(shared_ptr<Parent>(new Child())); shared_ptr uses reference counting so it will not delete the underlying instance until all references are removed. 
A: Take a look at static_cast and reinterpret_cast In C++ Programming Language, 3rd ed, Bjarne Stroustrup describes it on page 130. There's a whole section on this in Chapter 6. You can recast your Parent class to Child class. This requires you to know when each one is which. In the book, Dr. Stroustrup talks about different techniques to avoid this situation. Do not do this. This negates the polymorphism that you're trying to achieve in the first place! A: Most container types want to abstract the particular storage strategy, be it linked list, vector, tree-based or what have you. For this reason, you're going to have trouble with both possessing and consuming the aforementioned cake (i.e., the cake is a lie (NB: someone had to make this joke)). So what to do? Well there are a few cute options, but most will reduce to variants on one of a few themes or combinations of them: picking or inventing a suitable smart pointer, playing with templates or template templates in some clever way, using a common interface for containees that provides a hook for implementing per-containee double-dispatch. There's basic tension between your two stated goals, so you should decide what you want, then try to design something that gets you basically what you want. It is possible to do some nice and unexpected tricks to get pointers to look like values with clever enough reference counting and clever enough implementations of a factory. The basic idea is to use reference counting and copy-on-demand and constness and (for the factory) a combination of the preprocessor, templates, and C++'s static initialization rules to get something that is as smart as possible about automating pointer conversions. I have, in the past, spent some time trying to envision how to use Virtual Proxy / Envelope-Letter / that cute trick with reference counted pointers to accomplish something like a basis for value semantic programming in C++.
And I think it could be done, but you'd have to provide a fairly closed, C#-managed-code-like world within C++ (though one from which you could break through to underlying C++ when needed). So I have a lot of sympathy for your line of thought. A: Just to add one thing to all that 1800 INFORMATION already said. You might want to take a look at "More Effective C++" by Scott Meyers "Item 3: Never treat arrays polymorphically" in order to better understand this issue. A: I just wanted to point out that vector<Foo> is usually more efficient than vector<Foo*>. In a vector<Foo>, all the Foos will be adjacent to each other in memory. Assuming a cold TLB and cache, the first read will add the page to the TLB and pull a chunk of the vector into the L# caches; subsequent reads will use the warm cache and loaded TLB, with occasional cache misses and less frequent TLB faults. Contrast this with a vector<Foo*>: As you fill the vector, you obtain Foo*'s from your memory allocator. Assuming your allocator is not extremely smart (tcmalloc?) or you fill the vector slowly over time, the location of each Foo is likely to be far apart from the other Foos: maybe just by hundreds of bytes, maybe megabytes apart. In the worst case, as you scan through a vector<Foo*> and dereference each pointer you will incur a TLB fault and cache miss -- this will end up being a lot slower than if you had a vector<Foo>. (Well, in the really worst case, each Foo has been paged out to disk, and every read incurs a disk seek() and read() to move the page back into RAM.) So, keep on using vector<Foo> whenever appropriate. :-) A: Yes, you can. The boost.ptr_container library provides polymorphic value semantic versions of the standard containers.
You only have to pass in a pointer to a heap-allocated object, and the container will take ownership and all further operations will provide value semantics, except for reclaiming ownership, which gives you almost all the benefits of value semantics by using a smart pointer. A: I'm using my own templated collection class with exposed value type semantics, but internally it stores pointers. It's using a custom iterator class that when dereferenced gets a value reference instead of a pointer. Copying the collection makes deep item copies, instead of duplicated pointers, and this is where most overhead lies (a really minor issue, considering what I get instead). That's an idea that could suit your needs.
Q: Where do you store your database connectionstring? I usually store my connectionstring in web.config or in the application settings of my Visual Studio project. The application I'm currently working on makes a lot of trips to the database which means it will look up the connectionstring every time. Should I be putting the connectionstring in the cache or should I be looking at storing the whole SqlConnection object in the cache to eliminate the need to open and close them all the time? Update: Seems like the consensus is to store the connection string in a configuration file and leave the caching in the trusting hands of ADO.NET A: I wouldn't cache the connection object, that will defeat the built-in connection pooling -- ADO.NET will handle connections (assuming you instantiate and close them) efficiently by itself. As far as the connection string itself, you shouldn't need to cache it if you load it from config -- the connection manager object in the .NET 2.0 framework loads the config into memory when you first access it, so there are no repeat trips to the file system. A: The web.config is cached. But even if it wasn't, don't forget that ADO.NET maintains a connection pool - it's not opening a new connection every time you make a call to the db. A: I usually cache the connection string in a global configuration object in my application. This value is loaded up at the beginning of program execution from wherever it is stored -- file, encrypted file, config file, etc. ADO.NET is very good at caching connection objects to the database so I would not cache the SqlConnection object. A: Keep it in a configuration file. Use a robust data access strategy provided by tools like NHibernate or Linq to Sql. A: From what I can recall the contents of the .config file are held in memory anyway... I'll get back to you.
Edit: What HE said A: A possible solution: Store the initial encrypted connection string (in Web.Config or App.Config) for a login allowed to run only one stored procedure for authentication. Then switch the login dynamically from encrypted values stored in a config table in the db.
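A: To illustrate the "read it once, cache it in memory" behavior several answers describe, here is a hypothetical sketch (in Java, since the pattern itself is runtime-agnostic; the file name and the db.connectionString property key are invented for the example):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class DbConfig {
    private static String connectionString; // cached after the first read

    // Reads the connection string from the config file on first access only;
    // every later call is served from the in-memory cache.
    public static synchronized String getConnectionString(Path configFile) {
        if (connectionString == null) {
            Properties props = new Properties();
            try (InputStream in = Files.newInputStream(configFile)) {
                props.load(in);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            connectionString = props.getProperty("db.connectionString");
        }
        return connectionString;
    }
}
```

The first call pays the file-system trip; every later call is served from memory, which is essentially what ASP.NET already does with web.config, so an extra caching layer in your own code is usually unnecessary.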
Q: Best java tools for emacs I'm a long-time emacs user, and I'm now working about 1/2 time in Java. What are the best emacs libraries for * *Debugging Java *Code Completion/Intellisense *Javadoc browsing ? A: For javadoc I found http://javadochelp.sourceforge.net/index.html to be the best. Exuberant ctags is your best friend when it comes to navigation. A: I've used JDEE on several projects. It handles Code Completion. I've never used it for debugging or browsing docs, but it's a big step up from a basic text editor. A: I've had good success with jdibug for debugging Java code with Emacs.
Q: .NET Mass Downloader with VS.NET 2005? After downloading all .NET framework symbols and sources using NetMassDownloader, is it possible to setup the VS.NET 2005 for debugging into .NET 2.0 source files? A: It looks like you can download the symbols, though they're not available for browsing.
Q: How do I click a button on a vb6 form? I have a vb6 form with an ocx control on it. The ocx control has a button on it that I want to press from code. How do I do this? I have: Dim b As CommandButton Set b = ocx.GetButton("btnPrint") SendMessage ocx.hwnd, WM_COMMAND, GetWindowLong(b.hwnd, GWL_ID), b.hwnd but it doesn't seem to work. A: I believe the following will work: Dim b As CommandButton Set b = ocx.GetButton("btnPrint") b = True CommandButtons actually have two functions. One is the usual click button and the other is a toggle button that acts similar to a CheckBox. The default property of the CommandButton is actually the Value property that indicates whether a button is toggled. By setting the property, the Click event is generated. This is done even if the button is not styled as a ToggleButton and therefore doesn't change its state. A: If you have access to the OCX code, you could expose the associated event handler and invoke it directly. Don't know if an equivalent of .Net Button's Click() method existed back in VB6 days A: Do you have access to the OCX code? You shouldn't really be directly invoking the click of a button. You should refactor the code so that the OCX button click code calls a function, e.g. CMyWindow::OnLButtonDown() { this->FooBar(); } Then from your VB6 app, directly call the FooBar method. If you can't directly call functions from VB6 you can wrap the FooBar() method with a windows message proc function, e.g. #define WM_FOOBAR WM_APP + 1 Then use SendMessage in the VB6, like SendMessage(WM_FOOBAR, ...) A: This: Dim b As CommandButton Set b = ocx.GetButton("btnPrint") b = True does work. Completely unintuitive. I'd expect it to throw an error since a bool is not a valid CommandButton, but it is because of the default property thing. WM_LBUTTONDOWN would be a mouse click, what I want is a button click (button as in a hwnd button, not a mouse button). I don't have access to the source of the ocx (it's a 3rd party control). 
If I did, I would expose the function that I wanted to call (the original writer of the ocx should have exposed it). A: For a keypress you can also use SendMessage, sending both WM_KEYDOWN and WM_KEYUP: Private Declare Function SendMessage Lib "user32" Alias "SendMessageA" (ByVal hWnd As Long, ByVal wMsg As Long, ByVal wParam As Long, lParam As Long) As Long Const WM_KEYDOWN As Integer = &H100 Const WM_KEYUP As Integer = &H101 Const VK_SPACE = &H20 Private Sub cmdCommand1_Click() Dim b As CommandButton Set b = ocx.GetButton("btnPrint") SendMessage b.hWnd, WM_KEYDOWN, VK_SPACE, 0& SendMessage b.hWnd, WM_KEYUP, VK_SPACE, 0& End Sub
Q: What are some compact algorithms for generating interesting time series data? The question sort of says it all. Whether it's for code testing purposes, or you're modeling a real-world process, or you're trying to impress a loved one, what are some algorithms that folks use to generate interesting time series data? Are there any good resources out there with a consolidated list? No constraints on values (except plus or minus infinity) or dimensions, but I'm looking for examples that people have found useful or exciting in practice. Bonus points for parsimonious and readable code samples. A: Don't have an answer for the algorithm part but you can see how "realistic" your data is with Benford's law A: There are a ton of PRN generators out there, and you can always get free random bits, or even buy them on CD or DVD. I've used simple sine wave generators mixed together with some phase and amplitude noise thrown in to get signals that sound and look interesting to humans when put through speakers or lights, but I don't know what you mean by interesting. There are ways to generate data that looks interesting in a chart form, but that would be different than data used on a stock chart, and neither would make a nice "static" image such as produced by an analog television tuned to a null channel. You can use Conway's game of life as a PRN, and "listen" to cells (or run all the cells through a logic circuit) to get some interesting time based signals. It would be interesting to look at the graph of DB updates/inserts for Stackoverflow over time, and you could mine that data. There really are infinite ways to generate an "interesting" time series data. Can you narrow the scope of your question? A: Try the kind of recurrences that can give variously simple or chaotic series based on the part of their phase spaces you explore: the simplest I can think of is the logistic map x(n+1) = r * x(n) * ( 1 - x(n) ). With r approx. 
3.57 you get chaotic results that depend on the initial point. If you graph this versus time you can get lots of different series just by manipulating that parameter r. If you were to graph it as x(n+1) v. x(n) without connecting dots, you see a simple parabola take shape over time. This is one of the most basic functions from chaos theory and trying more interesting polynomials, graphing them as x(n+1) v. x(n) and watching a shape form, and then graphing x(n) v. n is a fun and interesting way to create series. Graphing x(n+1) v. x(n) makes it quickly obvious if you're only visiting a small number of points. Deeper recurrences become more interesting as well, and using different values of x(0) to check on sensitivity to initial conditions is also of interest. But for simplicity, control by a single parameter, and ability to find something to read about your recurrence, it'll be hard to beat the logistic map. I recommend: http://en.wikipedia.org/wiki/Logistic_map. It has a nice description of what to expect from different values of r.
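The recurrence above is straightforward to turn into code. A small sketch (in Java; the r = 3.9 and x0 = 0.5 values below are arbitrary picks inside the chaotic regime, chosen for illustration):

```java
// Logistic map time series: x(n+1) = r * x(n) * (1 - x(n)).
// With 0 < x0 < 1 and r up to 4.0 the orbit stays in (0, 1);
// r near 3.57 and above is where the chaotic behavior lives.
public class LogisticMap {
    public static double[] generate(double r, double x0, int n) {
        double[] series = new double[n];
        double x = x0;
        for (int i = 0; i < n; i++) {
            x = r * x * (1 - x); // one step of the recurrence
            series[i] = x;
        }
        return series;
    }

    public static void main(String[] args) {
        for (double v : generate(3.9, 0.5, 20)) {
            System.out.printf("%.6f%n", v);
        }
    }
}
```

Plot series[i] against i for a chaotic-looking time series, or series[i+1] against series[i] (unconnected dots) to watch the parabola take shape as described above.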
Q: How to generate a random alpha-numeric string I've been looking for a simple Java algorithm to generate a pseudo-random alpha-numeric string. In my situation it would be used as a unique session/key identifier that would "likely" be unique over 500K+ generation (my needs don't really require anything much more sophisticated). Ideally, I would be able to specify a length depending on my uniqueness needs. For example, a generated string of length 12 might look something like "AEYGF7K0DM1X". A: I found this solution that generates a random hex encoded string. The provided unit test seems to hold up to my primary use case. Although, it is slightly more complex than some of the other answers provided. /** * Generate a random hex encoded string token of the specified length * * @param length * @return random hex string */ public static synchronized String generateUniqueToken(Integer length){ byte random[] = new byte[length]; Random randomGenerator = new Random(); StringBuffer buffer = new StringBuffer(); randomGenerator.nextBytes(random); for (int j = 0; j < random.length; j++) { byte b1 = (byte) ((random[j] & 0xf0) >> 4); byte b2 = (byte) (random[j] & 0x0f); if (b1 < 10) buffer.append((char) ('0' + b1)); else buffer.append((char) ('A' + (b1 - 10))); if (b2 < 10) buffer.append((char) ('0' + b2)); else buffer.append((char) ('A' + (b2 - 10))); } return (buffer.toString()); } @Test public void testGenerateUniqueToken(){ Set set = new HashSet(); String token = null; int size = 16; /* Seems like we should be able to generate 500K tokens * without a duplicate */ for (int i=0; i<500000; i++){ token = Utility.generateUniqueToken(size); if (token.length() != size * 2){ fail("Incorrect length"); } else if (set.contains(token)) { fail("Duplicate token generated"); } else{ set.add(token); } } } A: This is easily achievable without any external libraries. 1. Cryptographic Pseudo Random Data Generation (PRNG) First you need a cryptographic PRNG. 
Java has SecureRandom for that and typically uses the best entropy source on the machine (e.g. /dev/random). Read more here. SecureRandom rnd = new SecureRandom(); byte[] token = new byte[byteLength]; rnd.nextBytes(token); Note: SecureRandom is the slowest, but most secure way in Java of generating random bytes. I do however recommend not considering performance here since it usually has no real impact on your application unless you have to generate millions of tokens per second. 2. Required Space of Possible Values Next you have to decide "how unique" your token needs to be. The whole and only point of considering entropy is to make sure that the system can resist brute force attacks: the space of possible values must be so large that any attacker could only try a negligible proportion of the values in non-ludicrous time. Unique identifiers such as random UUID have 122 bits of entropy (i.e., 2^122 = 5.3x10^36) - the chance of collision is "(...) for there to be a one in a billion chance of duplication, 103 trillion version 4 UUIDs must be generated". We will choose 128 bits since it fits exactly into 16 bytes and is seen as highly sufficient for being unique for basically every use case but the most extreme ones, and you don't have to think about duplicates. Here is a simple comparison table of entropy including a simple analysis of the birthday problem. For simple requirements, 8 or 12 bytes might suffice, but with 16 bytes you are on the "safe side". And that's basically it. The last thing is to think about encoding so it can be represented as printable text (read, a String). 3. Binary to Text Encoding Typical encodings include: * *Base64 every character encodes 6 bits, creating a 33% overhead. Fortunately there are standard implementations in Java 8+ and Android. With older Java you can use any of the numerous third-party libraries.
If you want your tokens to be URL-safe, use the URL-safe version of RFC4648 (which is usually supported by most implementations). Example encoding 16 bytes with padding: XfJhfv3C0P6ag7y9VQxSbw== *Base32 every character encodes 5 bits, creating a 40% overhead. This will use A-Z and 2-7, making it reasonably space efficient while being case-insensitive alpha-numeric. There isn't any standard implementation in the JDK. Example encoding 16 bytes without padding: WUPIL5DQTZGMF4D3NX5L7LNFOY *Base16 (hexadecimal) every character encodes four bits, requiring two characters per byte (i.e., 16 bytes create a string of length 32). Therefore hexadecimal is less space efficient than Base32, but it is safe to use in most cases (URL) since it only uses 0-9 and A to F. Example encoding 16 bytes: 4fa3dd0f57cb3bf331441ed285b27735. See a Stack Overflow discussion about converting to hexadecimal here. Additional encodings like Base85 and the exotic Base122 exist with better/worse space efficiency. You can create your own encoding (which basically most answers in this thread do), but I would advise against it if you don't have very specific requirements. See more encoding schemes in the Wikipedia article. 4. Summary and Example * *Use SecureRandom *Use at least 16 bytes (2^128) of possible values *Encode according to your requirements (usually hex or base32 if you need it to be alpha-numeric) Don't * *... use your home-brew encoding: better maintainable and readable for others if they see what standard encoding you use instead of weird for loops creating characters at a time. *... 
use UUID: it has no guarantees on randomness; you are wasting 6 bits of entropy and have a verbose string representation Example: Hexadecimal Token Generator public static String generateRandomHexToken(int byteLength) { SecureRandom secureRandom = new SecureRandom(); byte[] token = new byte[byteLength]; secureRandom.nextBytes(token); return new BigInteger(1, token).toString(16); // Hexadecimal encoding } //generateRandomHexToken(16) -> 2189df7475e96aa3982dbeab266497cd Example: Base64 Token Generator (URL Safe) public static String generateRandomBase64Token(int byteLength) { SecureRandom secureRandom = new SecureRandom(); byte[] token = new byte[byteLength]; secureRandom.nextBytes(token); return Base64.getUrlEncoder().withoutPadding().encodeToString(token); //base64 encoding } //generateRandomBase64Token(16) -> EEcCCAYuUcQk7IuzdaPzrg Example: Java CLI Tool If you want a ready-to-use CLI tool you may use dice: Example: Related issue - Protect Your Current Ids If you already have an id you can use (e.g., a synthetic long in your entity), but don't want to publish the internal value, you can use this library to encrypt it and obfuscate it: https://github.com/patrickfav/id-mask IdMask<Long> idMask = IdMasks.forLongIds(Config.builder(key).build()); String maskedId = idMask.mask(id); // Example: NPSBolhMyabUBdTyanrbqT8 long originalId = idMask.unmask(maskedId); A: Java supplies a way of doing this directly. If you don't want the dashes, they are easy to strip out. Just use uuid.replace("-", "") import java.util.UUID; public class randomStringGenerator { public static void main(String[] args) { System.out.println(generateString()); } public static String generateString() { String uuid = UUID.randomUUID().toString(); return "uuid = " + uuid; } } Output uuid = 2d7428a6-b58c-4008-8575-f05549f16316 A: * *Change String characters as per your requirements. *String is immutable. Here StringBuilder.append is more efficient than string concatenation.
public static String getRandomString(int length) { final String characters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890!@#$%^&*()_+"; StringBuilder result = new StringBuilder(); Random rand = new Random(); // Create the generator once, outside the loop while(length > 0) { result.append(characters.charAt(rand.nextInt(characters.length()))); length--; } return result.toString(); } A: import java.util.Date; import java.util.Random; public class RandomGenerator { private static Random random = new Random((new Date()).getTime()); public static String generateRandomString(int length) { char[] values = {'a','b','c','d','e','f','g','h','i','j', 'k','l','m','n','o','p','q','r','s','t', 'u','v','w','x','y','z','0','1','2','3', '4','5','6','7','8','9'}; String out = ""; for (int i=0;i<length;i++) { int idx=random.nextInt(values.length); out += values[idx]; } return out; } } A: I don't really like any of these answers regarding a "simple" solution :S I would go for a simple ;), pure Java, one-liner (entropy is based on the random string length and the given character set): public String randomString(int length, String characterSet) { return IntStream.range(0, length).map(i -> new SecureRandom().nextInt(characterSet.length())).mapToObj(randomInt -> characterSet.substring(randomInt, randomInt + 1)).collect(Collectors.joining()); } @Test public void buildFiveRandomStrings() { for (int q = 0; q < 5; q++) { System.out.println(randomString(10, "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")); // The character set can basically be anything } } Or (a bit more readable old way) public String randomString(int length, String characterSet) { StringBuilder sb = new StringBuilder(); // Consider using StringBuffer if needed for (int i = 0; i < length; i++) { int randomInt = new SecureRandom().nextInt(characterSet.length()); sb.append(characterSet.substring(randomInt, randomInt + 1)); } return sb.toString(); } @Test public void buildFiveRandomStrings() { for (int q = 0; q < 5; q++) { System.out.println(randomString(10, 
"ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")); // The character set can basically be anything } } But on the other hand you could also go with UUID which has a pretty good entropy: UUID.randomUUID().toString().replace("-", "") A: I'm using a library from Apache Commons to generate an alphanumeric string: import org.apache.commons.lang3.RandomStringUtils; int keyLength = 20; RandomStringUtils.randomAlphanumeric(keyLength); It's fast and simple! A: static final String AB = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"; static SecureRandom rnd = new SecureRandom(); String randomString(int len){ StringBuilder sb = new StringBuilder(len); for(int i = 0; i < len; i++) sb.append(AB.charAt(rnd.nextInt(AB.length()))); return sb.toString(); } A: import java.util.*; import javax.swing.*; public class alphanumeric { public static void main(String args[]) { String nval, lenval; int n, len; nval = JOptionPane.showInputDialog("Enter number of codes you require: "); n = Integer.parseInt(nval); lenval = JOptionPane.showInputDialog("Enter code length you require: "); len = Integer.parseInt(lenval); find(n, len); } public static void find(int n, int length) { String str1 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"; StringBuilder sb = new StringBuilder(length); Random r = new Random(); System.out.println("\n\t Unique codes are \n\n"); for(int i=0; i<n; i++) { for(int j=0; j<length; j++) { sb.append(str1.charAt(r.nextInt(str1.length()))); } System.out.println(" " + sb.toString()); sb.delete(0, length); } } } A: You mention "simple", but just in case anyone else is looking for something that meets more stringent security requirements, you might want to take a look at jpwgen. jpwgen is modeled after pwgen in Unix, and is very configurable. A: If you're happy to use Apache classes, you could use org.apache.commons.text.RandomStringGenerator (Apache Commons Text). 
Example: RandomStringGenerator randomStringGenerator = new RandomStringGenerator.Builder() .withinRange('0', 'z') .filteredBy(CharacterPredicates.LETTERS, CharacterPredicates.DIGITS) .build(); randomStringGenerator.generate(12); // toUpperCase() if you want Since Apache Commons Lang 3.6, RandomStringUtils is deprecated. A: Here is the one-liner by abacus-common: String.valueOf(CharStream.random('0', 'z').filter(c -> N.isLetterOrDigit(c)).limit(12).toArray()) Random doesn't mean it must be unique. To get unique strings, use: N.uuid() // E.g.: "e812e749-cf4c-4959-8ee1-57829a69a80f". length is 36. N.guid() // E.g.: "0678ce04e18945559ba82ddeccaabfcd". length is 32 without '-' A: Using Dollar should be as simple as: // "0123456789" + "ABCDE...Z" String validCharacters = $('0', '9').join() + $('A', 'Z').join(); String randomString(int length) { return $(validCharacters).shuffle().slice(length).toString(); } @Test public void buildFiveRandomStrings() { for (int i : $(5)) { System.out.println(randomString(12)); } } It outputs something like this: DKL1SBH9UJWC JH7P0IT21EA5 5DTI72EO6SFU HQUMJTEBNF7Y 1HCR6SKYWGT7 A: You can use the following code if your password must contain numbers, letters, and special characters: private static final String NUMBERS = "0123456789"; private static final String UPPER_ALPHABETS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"; private static final String LOWER_ALPHABETS = "abcdefghijklmnopqrstuvwxyz"; private static final String SPECIALCHARACTERS = "@#$%&*"; private static final int MINLENGTHOFPASSWORD = 8; public static String getRandomPassword() { StringBuilder password = new StringBuilder(); int j = 0; for (int i = 0; i < MINLENGTHOFPASSWORD; i++) { password.append(getRandomPasswordCharacters(j)); j++; if (j == 4) { j = 0; } // Cycle through all four character classes (with j == 3 the lowercase case was unreachable) } return password.toString(); } private static String getRandomPasswordCharacters(int pos) { Random randomNum = new Random(); StringBuilder randomChar = new StringBuilder(); switch (pos) { case 0: 
randomChar.append(NUMBERS.charAt(randomNum.nextInt(NUMBERS.length()))); break; case 1: randomChar.append(UPPER_ALPHABETS.charAt(randomNum.nextInt(UPPER_ALPHABETS.length()))); break; case 2: randomChar.append(SPECIALCHARACTERS.charAt(randomNum.nextInt(SPECIALCHARACTERS.length()))); break; case 3: randomChar.append(LOWER_ALPHABETS.charAt(randomNum.nextInt(LOWER_ALPHABETS.length()))); break; } return randomChar.toString(); } A: You can use the UUID class with its getLeastSignificantBits() message to get 64 bits of random data, and then convert it to a radix 36 number (i.e., a string consisting of 0-9, a-z): Long.toString(Math.abs(UUID.randomUUID().getLeastSignificantBits()), 36); This yields a string up to 13 characters long. We use Math.abs() to make sure there isn't a minus sign sneaking in. A: Here it is in Java: import static java.lang.Math.round; import static java.lang.Math.random; import static java.lang.Math.pow; import static java.lang.Math.abs; import static java.lang.Math.min; import static org.apache.commons.lang.StringUtils.leftPad; public class RandomAlphaNum { public static String gen(int length) { StringBuffer sb = new StringBuffer(); for (int i = length; i > 0; i -= 12) { int n = min(12, abs(i)); sb.append(leftPad(Long.toString(round(random() * pow(36, n)), 36), n, '0')); } return sb.toString(); } } Here's a sample run: scala> RandomAlphaNum.gen(42) res3: java.lang.String = uja6snx21bswf9t89s00bxssu8g6qlu16ffzqaxxoy A: A short and easy solution, but it uses only lowercase and numerics: Random r = new java.util.Random (); String s = Long.toString (r.nextLong () & Long.MAX_VALUE, 36); The size is about 12 digits to base 36 and can't be improved further, that way. Of course you can append multiple instances. A: Surprisingly, no one here has suggested it, but: import java.util.UUID; UUID.randomUUID().toString(); Easy. The benefit of this is UUIDs are nice, long, and guaranteed to be almost impossible to collide. 
Wikipedia has a good explanation of it: " ...only after generating 1 billion UUIDs every second for the next 100 years, the probability of creating just one duplicate would be about 50%." The first four bits are the version type and two for the variant, so you get 122 bits of random. So if you want to, you can truncate from the end to reduce the size of the UUID. It's not recommended, but you still have loads of randomness, enough for your 500k records easy. A: Here is a Scala solution: (for (i <- 0 until rnd.nextInt(64)) yield { ('0' + rnd.nextInt(64)).asInstanceOf[Char] }) mkString("") A: Using an Apache Commons library, it can be done in one line: import org.apache.commons.lang.RandomStringUtils; RandomStringUtils.randomAlphanumeric(64); Documentation A: public static String randomSeriesForThreeCharacter() { Random r = new Random(); String value = ""; char random_Char; for(int i=0; i<10; i++) { random_Char = (char) (48 + r.nextInt(74)); value = value + random_Char; } return value; } A: I think this is the smallest solution here, or nearly one of the smallest: public String generateRandomString(int length) { String randomString = ""; final char[] chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789".toCharArray(); final Random random = new Random(); for (int i = 0; i < length; i++) { randomString = randomString + chars[random.nextInt(chars.length)]; } return randomString; } The code works just fine. If you are using this method, I recommend using more than 10 characters. A collision happens at 5 characters / 30362 iterations. This took 9 seconds. 
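The collision figure quoted above is easy to reproduce. The sketch below is my own harness (not part of the answer): it draws 5-character strings from the same 62-symbol alphabet until a HashSet reports a duplicate.

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

public class CollisionCheck {
    static final char[] CHARS =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789".toCharArray();
    static final Random RANDOM = new Random();

    // Draw `length` characters uniformly from CHARS
    static String randomString(int length) {
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(CHARS[RANDOM.nextInt(CHARS.length)]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Set<String> seen = new HashSet<>();
        long count = 0;
        // Set.add returns false on the first duplicate
        while (seen.add(randomString(5))) {
            count++;
        }
        System.out.println("First 5-character collision after " + count + " strings");
    }
}
```

With 62^5 ≈ 9.2 × 10^8 possible 5-character strings, the birthday approximation predicts a duplicate after a few tens of thousands of draws, consistent with the roughly 30k iterations reported above.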
A: public class Utils { private final Random RANDOM = new SecureRandom(); private final String ALPHABET = "0123456789QWERTYUIOPASDFGHJKLZXCVBNMqwertyuiopasdfghjklzxcvbnm"; private String generateRandomString(int length) { StringBuffer buffer = new StringBuffer(length); for (int i = 0; i < length; i++) { buffer.append(ALPHABET.charAt(RANDOM.nextInt(ALPHABET.length()))); } return new String(buffer); } } A: public static String getRandomString(int length) { String randomStr = UUID.randomUUID().toString(); while(randomStr.length() < length) { randomStr += UUID.randomUUID().toString(); } return randomStr.substring(0, length); } A: public static String getRandomString(int length) { char[] chars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRST".toCharArray(); StringBuilder sb = new StringBuilder(); Random random = new Random(); for (int i = 0; i < length; i++) { char c = chars[random.nextInt(chars.length)]; sb.append(c); } String randomStr = sb.toString(); return randomStr; } A: An alternative in Java 8 is: static final Random random = new Random(); // Or SecureRandom static final int startChar = (int) '!'; static final int endChar = (int) '~'; static String randomString(final int maxLength) { final int length = random.nextInt(maxLength + 1); return random.ints(length, startChar, endChar + 1) .collect(StringBuilder::new, StringBuilder::appendCodePoint, StringBuilder::append) .toString(); } A: Algorithm To generate a random string, concatenate characters drawn randomly from the set of acceptable symbols until the string reaches the desired length. Implementation Here's some fairly simple and very flexible code for generating random identifiers. Read the information that follows for important application notes. public class RandomString { /** * Generate a random string. 
*/ public String nextString() { for (int idx = 0; idx < buf.length; ++idx) buf[idx] = symbols[random.nextInt(symbols.length)]; return new String(buf); } public static final String upper = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"; public static final String lower = upper.toLowerCase(Locale.ROOT); public static final String digits = "0123456789"; public static final String alphanum = upper + lower + digits; private final Random random; private final char[] symbols; private final char[] buf; public RandomString(int length, Random random, String symbols) { if (length < 1) throw new IllegalArgumentException(); if (symbols.length() < 2) throw new IllegalArgumentException(); this.random = Objects.requireNonNull(random); this.symbols = symbols.toCharArray(); this.buf = new char[length]; } /** * Create an alphanumeric string generator. */ public RandomString(int length, Random random) { this(length, random, alphanum); } /** * Create an alphanumeric strings from a secure generator. */ public RandomString(int length) { this(length, new SecureRandom()); } /** * Create session identifiers. */ public RandomString() { this(21); } } Usage examples Create an insecure generator for 8-character identifiers: RandomString gen = new RandomString(8, ThreadLocalRandom.current()); Create a secure generator for session identifiers: RandomString session = new RandomString(); Create a generator with easy-to-read codes for printing. The strings are longer than full alphanumeric strings to compensate for using fewer symbols: String easy = RandomString.digits + "ACEFGHJKLMNPQRUVWXYabcdefhijkprstuvwx"; RandomString tickets = new RandomString(23, new SecureRandom(), easy); Use as session identifiers Generating session identifiers that are likely to be unique is not good enough, or you could just use a simple counter. Attackers hijack sessions when predictable identifiers are used. There is tension between length and security. Shorter identifiers are easier to guess, because there are fewer possibilities. 
But longer identifiers consume more storage and bandwidth. A larger set of symbols helps, but might cause encoding problems if identifiers are included in URLs or re-entered by hand. The underlying source of randomness, or entropy, for session identifiers should come from a random number generator designed for cryptography. However, initializing these generators can sometimes be computationally expensive or slow, so effort should be made to re-use them when possible. Use as object identifiers Not every application requires security. Random assignment can be an efficient way for multiple entities to generate identifiers in a shared space without any coordination or partitioning. Coordination can be slow, especially in a clustered or distributed environment, and splitting up a space causes problems when entities end up with shares that are too small or too big. Identifiers generated without taking measures to make them unpredictable should be protected by other means if an attacker might be able to view and manipulate them, as happens in most web applications. There should be a separate authorization system that protects objects whose identifier can be guessed by an attacker without access permission. Care must also be taken to use identifiers that are long enough to make collisions unlikely given the anticipated total number of identifiers. This is referred to as "the birthday paradox." The probability of a collision, p, is approximately n^2/(2q^x), where n is the number of identifiers actually generated, q is the number of distinct symbols in the alphabet, and x is the length of the identifiers. This should be a very small number, like 2^-50 or less. Working this out shows that the chance of collision among 500k 15-character identifiers is about 2^-52, which is probably less likely than undetected errors from cosmic rays, etc. 
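To make that arithmetic concrete, here is a small sketch (mine, not part of the original answer) that evaluates p ≈ n^2/(2q^x) in log space, so the enormous q^x term never overflows a double; the numbers are the 500k / 62-symbol / 15-character case from the text.

```java
public class BirthdayBound {
    // Returns log2 of the approximate collision probability n^2 / (2 * q^x).
    static double log2CollisionProbability(double n, double q, double x) {
        double log2 = Math.log(2);
        return 2 * (Math.log(n) / log2) - 1 - x * (Math.log(q) / log2);
    }

    public static void main(String[] args) {
        // 500,000 identifiers, 62 alphanumeric symbols, length 15
        double log2p = log2CollisionProbability(500_000, 62, 15);
        System.out.printf("p ~= 2^%.1f%n", log2p); // about 2^-52, matching the text
    }
}
```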
Comparison with UUIDs According to their specification, UUIDs are not designed to be unpredictable, and should not be used as session identifiers. UUIDs in their standard format take a lot of space: 36 characters for only 122 bits of entropy. (Not all bits of a "random" UUID are selected randomly.) A randomly chosen alphanumeric string packs more entropy in just 21 characters. UUIDs are not flexible; they have a standardized structure and layout. This is their chief virtue as well as their main weakness. When collaborating with an outside party, the standardization offered by UUIDs may be helpful. For purely internal use, they can be inefficient. A: You can use an Apache Commons library for this, RandomStringUtils: RandomStringUtils.randomAlphanumeric(20).toUpperCase(); A: public static String generateSessionKey(int length){ String alphabet = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"; int n = alphabet.length(); String result = ""; Random r = new Random(); for (int i=0; i<length; i++) result = result + alphabet.charAt(r.nextInt(n)); return result; } A: In one line: Long.toHexString(Double.doubleToLongBits(Math.random())); Source: Java - generating a random string A: import java.util.Random; public class passGen{ // Version 1.0 private static final String dCase = "abcdefghijklmnopqrstuvwxyz"; private static final String uCase = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"; private static final String sChar = "!@#$%^&*"; private static final String intChar = "0123456789"; private static Random r = new Random(); private static StringBuilder pass = new StringBuilder(); public static void main (String[] args) { System.out.println ("Generating pass..."); while (pass.length () != 16){ int rPick = r.nextInt(4); if (rPick == 0){ int spot = r.nextInt(26); pass.append(dCase.charAt(spot)); } else if (rPick == 1) { int spot = r.nextInt(26); pass.append(uCase.charAt(spot)); } else if (rPick == 2) { int spot = r.nextInt(8); 
pass.append(sChar.charAt(spot)); } else { int spot = r.nextInt(10); pass.append(intChar.charAt(spot)); } } System.out.println ("Generated Pass: " + pass.toString()); } } This just adds the password into the string and... yeah, it works well. Check it out... It is very simple; I wrote it. A: Using UUIDs is insecure, because parts of the UUID aren't random at all. erickson's procedure is very neat, but it does not create strings of the same length. The following snippet should be sufficient: /* * The random generator used by this class to create random keys. * In a holder class to defer initialization until needed. */ private static class RandomHolder { static final Random random = new SecureRandom(); public static String randomKey(int length) { return String.format("%"+length+"s", new BigInteger(length*5/*base 32,2^5*/, random) .toString(32)).replace('\u0020', '0'); } } Why choose length*5? Let's assume the simple case of a random string of length 1, so one random character. To get a random character covering all digits 0-9 and characters a-z, we would need a random number between 0 and 35 to get one of each character. BigInteger provides a constructor to generate a random number, uniformly distributed over the range 0 to (2^numBits - 1). Unfortunately, 35 cannot be expressed as 2^numBits - 1. So we have two options: either go with 2^5-1=31 or 2^6-1=63. If we chose 2^6, we would get a lot of "unnecessary" / "longer" numbers. Therefore 2^5 is the better option, even if we lose four characters (w-z). To generate a string of a certain length, we can simply use a 2^(length*numBits)-1 number. The last problem: if we want a string of a certain length, the random number could come out small, so the length requirement would not be met; we therefore have to pad the string to its required length by prepending zeros. 
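A quick way to convince yourself that the padding works as described: every key should come back with exactly the requested length and only base-32 digits (0-9, a-v). This harness is mine, not part of the answer, but it uses the same randomKey logic.

```java
import java.math.BigInteger;
import java.security.SecureRandom;
import java.util.Random;

public class RandomKeyDemo {
    private static final Random RANDOM = new SecureRandom();

    // Same approach as the snippet above: length*5 random bits rendered in
    // base 32, then left-padded with '0' up to the requested length.
    static String randomKey(int length) {
        return String.format("%" + length + "s",
                new BigInteger(length * 5, RANDOM).toString(32))
            .replace('\u0020', '0');
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            String key = randomKey(16);
            if (key.length() != 16 || !key.matches("[0-9a-v]{16}")) {
                throw new AssertionError("unexpected key: " + key);
            }
        }
        System.out.println("Example key: " + randomKey(16));
    }
}
```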
A: Best Random String Generator Method public class RandomStringGenerator{ private static int randomStringLength = 25 ; private static boolean allowSpecialCharacters = true ; private static String specialCharacters = "!@$%*-_+:"; private static boolean allowDuplicates = false ; private static boolean isAlphanum = false; private static boolean isNumeric = false; private static boolean isAlpha = false; private static final String alphabet = "abcdefghijklmnopqrstuvwxyz"; private static boolean mixCase = false; private static final String capAlpha = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"; private static final String num = "0123456789"; public static String getRandomString() { String returnVal = ""; int specialCharactersCount = 0; int maxspecialCharacters = randomStringLength/4; try { StringBuffer values = buildList(); for (int inx = 0; inx < randomStringLength; inx++) { int selChar = (int) (Math.random() * values.length()); if (allowSpecialCharacters) { if (specialCharacters.indexOf("" + values.charAt(selChar)) > -1) { specialCharactersCount ++; if (specialCharactersCount > maxspecialCharacters) { while (specialCharacters.indexOf("" + values.charAt(selChar)) != -1) { selChar = (int) (Math.random() * values.length()); } } } } returnVal += values.charAt(selChar); if (!allowDuplicates) { values.deleteCharAt(selChar); } } } catch (Exception e) { returnVal = "Error While Processing Values"; } return returnVal; } private static StringBuffer buildList() { StringBuffer list = new StringBuffer(0); if (isNumeric || isAlphanum) { list.append(num); } if (isAlpha || isAlphanum) { list.append(alphabet); if (mixCase) { list.append(capAlpha); } } if (allowSpecialCharacters) { list.append(specialCharacters); } int currLen = list.length(); String returnVal = ""; for (int inx = 0; inx < currLen; inx++) { int selChar = (int) (Math.random() * list.length()); returnVal += list.charAt(selChar); list.deleteCharAt(selChar); } list = new StringBuffer(returnVal); return list; } } A: 
Many of the previous answers use StringBuilder. I guess it's easy, but it requires a function call per character, growing an array, etc. If you do use a StringBuilder, a suggestion is to specify the required capacity up front, i.e., new StringBuilder(int capacity); Here's a version that doesn't use a StringBuilder or String appending, and uses no dictionary. public static String randomString(int length) { SecureRandom random = new SecureRandom(); char[] chars = new char[length]; for(int i=0; i<chars.length; i++) { int v = random.nextInt(10 + 26 + 26); char c; if (v < 10) { c = (char)('0' + v); } else if (v < 36) { c = (char)('a' - 10 + v); } else { c = (char)('A' - 36 + v); } chars[i] = c; } return new String(chars); } A: You can create a character array which includes all the letters and numbers, and then you can randomly select from this character array and create your own string password. char[] chars = new char[62]; // Sum of letters and numbers int i = 0; for(char c = 'a'; c <= 'z'; c++) { // For letters chars[i++] = c; } for(char c = '0'; c <= '9';c++) { // For numbers chars[i++] = c; } for(char c = 'A'; c <= 'Z';c++) { // For capital letters chars[i++] = c; } int numberOfCodes = 0; while (numberOfCodes < 1) { // Enter how many codes you want to generate at one time String code = ""; int numChars = 8; // Enter how many digits you want in your password for(i = 0; i < numChars; i++) { char c = chars[(int)(Math.random() * chars.length)]; code = code + c; } System.out.println("Code is:" + code); numberOfCodes++; } A: Maybe this is helpful package password.generater; import java.util.Random; /** * * @author dell */ public class PasswordGenerater { /** * @param args the command line arguments */ public static void main(String[] args) { int length= 11; System.out.println(generatePswd(length)); // TODO code application logic here } static char[] generatePswd(int len){ System.out.println("Your Password "); String charsCaps="ABCDEFGHIJKLMNOPQRSTUVWXYZ"; String 
Chars="abcdefghijklmnopqrstuvwxyz"; String nums="0123456789"; String symbols="!@#$%^&*()_+-=.,/';:?><~*/-+"; String passSymbols=charsCaps + Chars + nums +symbols; Random rnd=new Random(); char[] password=new char[len]; for(int i=0; i<len;i++){ password[i]=passSymbols.charAt(rnd.nextInt(passSymbols.length())); } return password; } } A: I have developed an application to develop an autogenerated alphanumberic string for my project. In this string, the first three characters are alphabetical and the next seven are integers. public class AlphaNumericGenerator { public static void main(String[] args) { java.util.Random r = new java.util.Random(); int i = 1, n = 0; char c; String str = ""; for (int t = 0; t < 3; t++) { while (true) { i = r.nextInt(10); if (i > 5 && i < 10) { if (i == 9) { i = 90; n = 90; break; } if (i != 90) { n = i * 10 + r.nextInt(10); while (n < 65) { n = i * 10 + r.nextInt(10); } } break; } } c = (char)n; str = String.valueOf(c) + str; } while(true){ i = r.nextInt(10000000); if(i > 999999) break; } str = str + i; System.out.println(str); } } A: Here's a simple one-liner using UUIDs as the character base and being able to specify (almost) any length. (Yes, I know that using a UUID has been suggested before.) public static String randString(int length) { return UUID.randomUUID().toString().replace("-", "").substring(0, Math.min(length, 32)) + (length > 32 ? randString(length - 32) : ""); } A: Here is a Java 8 solution based on streams. 
public String generateString(String alphabet, int length) { return generateString(alphabet, length, new SecureRandom()::nextInt); } // nextInt = bound -> n in [0, bound) public String generateString(String source, int length, IntFunction<Integer> nextInt) { StringBuilder sb = new StringBuilder(); IntStream.generate(source::length) .boxed() .limit(length) .map(nextInt::apply) .map(source::charAt) .forEach(sb::append); return sb.toString(); } Use it like String alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"; int length = 12; String generated = generateString(alphabet, length); System.out.println(generated); The function nextInt should accept an int bound and return a random number between 0 and bound - 1. A: public static String RandomAlphanum(int length) { String charstring = "abcdefghijklmnopqrstuvwxyz0123456789"; String randalphanum = ""; double randroll; String randchar; for (double i = 0; i < length; i++) { randroll = Math.random(); randchar = ""; for (int j = 1; j <= 36; j++) { if (randroll <= (1.0 / 36.0 * j)) { randchar = Character.toString(charstring.charAt(j - 1)); break; } } randalphanum += randchar; } return randalphanum; } I used a very primitive algorithm based on Math.random(). To increase randomness, you could switch to java.util.Random seeded with the current time from the java.util.Date class. Nevertheless, it works. A: Given some characters (AllCharacters), you can pick a character from the string at random, then use a for loop to pick random characters repeatedly. 
public class MyProgram { static String getRandomString(int size) { String AllCharacters = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"; StringBuilder sb = new StringBuilder(size); int length = AllCharacters.length(); for (int i = 0; i < size; i++) { sb.append(AllCharacters.charAt((int)(length * Math.random()))); } return sb.toString(); } public static void main(String[] args) { System.out.println(MyProgram.getRandomString(30)); } } * *Try it on the sandbox *Also see other languages implement random string generator A: You can do this in one line with no external libraries. int length = 12; String randomString = new Random().ints(48, 123).filter(i -> (i < 58 || i > 64) && (i < 91 || i > 96)).limit(length).collect(StringBuilder::new, StringBuilder::appendCodePoint, StringBuilder::append).toString(); System.out.print(randomString); I have separated out the length into a parameter, and added a line to print the result. This code creates a stream of random integers bounded to the alphanumeric ASCII range (the upper bound of ints is exclusive, so 123 is needed to include 'z'). It then filters out some symbols, because the alphanumeric range is not continuous. It then limits the length and collects the result into a string. Because this approach discards something like 20% of the numbers/characters it generates (as they are symbols), there is a small performance impact. I don't find it particularly readable, but I don't think anyone else has suggested a native Java solution in one line. A: Yet another solution... 
public static String generatePassword(int passwordLength) { int asciiFirst = 33; int asciiLast = 126; Integer[] exceptions = { 34, 39, 96 }; List<Integer> exceptionsList = Arrays.asList(exceptions); SecureRandom random = new SecureRandom(); StringBuilder builder = new StringBuilder(); for (int i=0; i<passwordLength; i++) { int charIndex; do { charIndex = random.nextInt(asciiLast - asciiFirst + 1) + asciiFirst; } while (exceptionsList.contains(charIndex)); builder.append((char) charIndex); } return builder.toString(); } A: Also, you can generate any lowercase or uppercase letters or even special characters through data from the ASCII table. For example, generate upper case letters from A (DEC 65) to Z (DEC 90): String generateRandomStr(int min, int max, int size) { String result = ""; for (int i = 0; i < size; i++) { result += String.valueOf((char)(new Random().nextInt((max - min) + 1) + min)); } return result; } Generated output for generateRandomStr(65, 90, 100));: TVLPFQJCYFXQDCQSLKUKKILKKHAUFYEXLUQFHDWNMRBIRRRWNXNNZQTINZPCTKLHGHVYWRKEOYNSOFPZBGEECFMCOKWHLHCEWLDZ A: Efficient and short. /** * Utility class for generating random Strings. */ public interface RandomUtil { int DEF_COUNT = 20; Random RANDOM = new SecureRandom(); /** * Generate a password. * * @return the generated password */ static String generatePassword() { return generate(true, true); } /** * Generate an activation key. * * @return the generated activation key */ static String generateActivationKey() { return generate(false, true); } /** * Generate a reset key. 
* * @return the generated reset key */ static String generateResetKey() { return generate(false, true); } static String generate(boolean letters, boolean numbers) { int start = ' ', end = 'z' + 1, count = DEF_COUNT, gap = end - start; StringBuilder builder = new StringBuilder(count); while (count-- != 0) { int codePoint = RANDOM.nextInt(gap) + start; switch (getType(codePoint)) { case UNASSIGNED: case PRIVATE_USE: case SURROGATE: count++; continue; } int numberOfChars = charCount(codePoint); if (count == 0 && numberOfChars > 1) { count++; continue; } if (letters && isLetter(codePoint) || numbers && isDigit(codePoint) || !letters && !numbers) { builder.appendCodePoint(codePoint); if (numberOfChars == 2) count--; } else count++; } return builder.toString(); } } A: I am using a very simple solution using Java 8. Just customize it according to your needs. ... import java.security.SecureRandom; ... //Generate a random String of length between 10 to 20. //Length is also randomly generated here. 
SecureRandom random = new SecureRandom(); String sampleSet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_"; int stringLength = random.ints(1, 10, 21).mapToObj(x -> x).reduce((a, b) -> a).get(); String randomString = random.ints(stringLength, 0, sampleSet.length()) .mapToObj(x -> sampleSet.charAt(x)) .collect(Collector .of(StringBuilder::new, StringBuilder::append, StringBuilder::append, StringBuilder::toString)); We can use this to generate an alphanumeric random String like this (the returned String will mandatorily have some non-numeric characters as well as some numeric characters): public String generateRandomString() { String sampleSet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz_"; String sampleSetNumeric = "0123456789"; String randomString = getRandomString(sampleSet, 10, 21); String randomStringNumeric = getRandomString(sampleSetNumeric, 10, 21); randomString = randomString + randomStringNumeric; //Convert String to List<Character> List<Character> list = randomString.chars() .mapToObj(x -> (char)x) .collect(Collectors.toList()); Collections.shuffle(list); //This is needed to force a non-numeric character as the first String //Skip this for() if you don't need this logic for(;;) { if(Character.isDigit(list.get(0))) Collections.shuffle(list); else break; } //Convert List<Character> to String randomString = list.stream() .map(String::valueOf) .collect(Collectors.joining()); return randomString; } //Generate a random number between the lower bound (inclusive) and upper bound (exclusive) private int getRandomLength(int min, int max) { SecureRandom random = new SecureRandom(); return random.ints(1, min, max).mapToObj(x -> x).reduce((a, b) -> a).get(); } //Generate a random String from the given sample string, having a random length between the lower bound (inclusive) and upper bound (exclusive) private String getRandomString(String sampleSet, int min, int max) { SecureRandom random = new SecureRandom(); return 
random.ints(getRandomLength(min, max), 0, sampleSet.length()) .mapToObj(x -> sampleSet.charAt(x)) .collect(Collector .of(StringBuilder::new, StringBuilder::append, StringBuilder::append, StringBuilder::toString)); }
{ "language": "en", "url": "https://stackoverflow.com/questions/41107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1943" }
Q: How do I install the mysql ruby gem under OS X 10.5.4 Here is the deal.

$ gem --version
1.1.0
$ sudo gem install mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config
Bulk updating Gem source index for: http://gems.rubyforge.org/
ERROR: could not find mysql locally or in a repository
$ sudo gem update
Updating installed gems
Bulk updating Gem source index for: http://gems.rubyforge.org/
Updating RedCloth
ERROR: While executing gem ... (Gem::GemNotFoundException)
could not find RedCloth locally or in a repository

I've tried this, this, this, this, and a ton of others. None of them have worked for me. Is anyone else having this problem? If so, what did you do to fix it that is not mentioned above?

A: First of all, as Orion Edwards said, make sure you have rubygems 1.2. Unfortunately, gem update --system did not work for me. Instead I had to:

* *Manually download rubygems-update-1.2.0 from rubyforge.
*$ sudo gem install /path/to/rubygems-update-1.2.0.gem
*$ update_rubygems

Now that I had rubygems 1.2, I ran

$ sudo gem install mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config

Everything is working. Thanks Orion Edwards for steering me in the right direction.

A: Step 1

gem update --system

It probably won't provide the fix itself, but you really want rubygems 1.2. It will save you about 8 days of waiting, as it doesn't need to do the 'Bulk updating 102304 gems' rubbish any more. It actually looks like it can't find the mysql gem at all, let alone download or install it. You're not behind a proxy server or something weird like that? If it's something to do with your rubygems or the net, rather than mysql specifically, then the gem update --system should reveal it too.

A: Do you have different ruby versions on your system? If you're running the Darwin-supplied ruby binary, but installed ruby gems under /usr/local, then you'll get errors like this.
Even if you've aliased ruby to point to /usr/local, the gem command may fail if the proper ruby binary is not resolved correctly by your shell's $PATH. Also, if /usr/local/bin is located physically after /usr/bin in your path, gem will use /usr/bin/ruby to load the gems from /Library/Ruby/Gems/1.8/gems/. You may want to symlink /usr/lib/ruby/gems/1.8/gems to /Library/Ruby/Gems/1.8/gems/ to prevent this sort of thing.

A: Ryan Grove has a blog post with the answer:

sudo env ARCHFLAGS="-arch i386" gem install mysql -- \
    --with-mysql-dir=/usr/local/mysql --with-mysql-lib=/usr/local/mysql/lib \
    --with-mysql-include=/usr/local/mysql/include

A: Installing XCode solved this problem for me. This is because it includes the make and gcc tools, which are required by the gem.
{ "language": "en", "url": "https://stackoverflow.com/questions/41134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: WCF Service Returning "Method Not Allowed" I'm in the process of developing my first WCF service, and when I try to use it I get "Method not Allowed" with no other explanation. I've got my interface set up with the ServiceContract and OperationContract:

[OperationContract]
void FileUpload(UploadedFile file);

Along with the actual method:

public void FileUpload(UploadedFile file) {};

To access the Service I enter http://localhost/project/myService.svc/FileUpload but I get the "Method not Allowed" error. Am I missing something?

A: If you are using the [WebInvoke(Method="GET")] attribute on the service method, make sure that you spell the method name as "GET" and not "Get" or "get", since it is case sensitive! I had the same error, and it took me an hour to figure that one out.

A: Your browser is sending an HTTP GET request: Make sure you have the WebGet attribute on the operation in the contract:

[ServiceContract]
public interface IUploadService
{
    [WebGet()]
    [OperationContract]
    string TestGetMethod(); // This method takes no arguments and returns a string. Perfect for testing quickly with a browser.

    [OperationContract]
    void UploadFile(UploadedFile file); // This probably involves an HTTP POST request. Not so easy for a quick browser test.
}

A: The basic intrinsic types (e.g. byte, int, string, and arrays) will be serialized automatically by WCF. Custom classes, like your UploadedFile, won't be. So, a silly question (but I have to ask it...): is UploadedFile marked as a [DataContract]? If not, you'll need to make sure that it is, and that each of the members in the class that you want to send is marked with [DataMember]. Unlike remoting, where marking a class with [XmlSerializable] allowed you to serialize the whole class without bothering to mark the members that you wanted serialized, WCF needs you to mark up each member. (I believe this is changing in .NET 3.5 SP1...)
A tremendous resource for WCF development is what we know in our shop as "the fish book": Programming WCF Services by Juval Lowy. Unlike some of the other WCF books around, which are a bit dry and academic, this one takes a practical approach to building WCF services and is actually useful. Thoroughly recommended.

A: I ran into this exact same issue today. I had installed IIS, but did not have WCF Services HTTP Activation enabled under .NET Framework 4.6.

A: It sounds like you're using an incorrect address:

To access the Service I enter http://localhost/project/myService.svc/FileUpload

Assuming you mean this is the address you give your client code, then I suspect it should actually be:

http://localhost/project/myService.svc

A: I've been having this same problem for over a day now - finally figured it out. Thanks to @Sameh for the hint. Your service is probably working just fine. Testing POST messages using the address bar of a browser won't work. You need to use Fiddler to test a POST message. Fiddler instructions... http://www.ehow.com/how_8788176_do-post-using-fiddler.html

A: Only methods marked with WebGet can be accessed from a browser such as IE; you can't reach the other HTTP verbs just by typing an address. You can either try the WCF REST Starter Kit on CodePlex or use Fiddler to test your other HTTP verbs.

A: You need to add this to web.config:

<bindings>
  <customBinding>
    <binding name="basicConfig">
      <binaryMessageEncoding/>
      <httpTransport transferMode="Streamed" maxReceivedMessageSize="67108864"/>
    </binding>
  </customBinding>
</bindings>

<endpoint address="customBinding" binding="customBinding" bindingConfiguration="basicConfig" contract="WcfRest.IService1"/>

A: My case: configuring the service on a new server. ASP.NET 4.0 was not installed/registered properly, so the svc extension was not recognized.
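For completeness, reaching a method from a browser (the WebGet/WebInvoke style discussed above) also requires the endpoint to use webHttpBinding with the webHttp endpoint behavior; a web.config sketch, where the service and contract names are placeholders:

```xml
<system.serviceModel>
  <services>
    <service name="Project.MyService">
      <!-- webHttpBinding + webHttp maps HTTP GET/POST onto [WebGet]/[WebInvoke] methods -->
      <endpoint address=""
                binding="webHttpBinding"
                contract="Project.IUploadService"
                behaviorConfiguration="restBehavior" />
    </service>
  </services>
  <behaviors>
    <endpointBehaviors>
      <behavior name="restBehavior">
        <webHttp />
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>
```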
{ "language": "en", "url": "https://stackoverflow.com/questions/41155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45" }
Q: Fastest way to find common items across multiple lists in C# Given the following:

List<List<Option>> optionLists;

what would be a quick way to determine the subset of Option objects that appear in all N lists? Equality is determined through some string property such as option1.Value == option2.Value. So we should end up with List<Option> where each item appears only once.

A: Ok, this will find the list of Option objects that have a Value appearing in every list.

var x = from list in optionLists
        from option in list
        where optionLists.All(l => l.Any(o => o.Value == option.Value))
        orderby option.Value
        select option;

It doesn't do a "distinct" select so it'll return multiple Option objects, some of them with the same Value.

A: Here's a much more efficient implementation:

static SortedDictionary<T, bool>.KeyCollection FindCommon<T>(List<List<T>> items)
{
    SortedDictionary<T, bool>
        current_common = new SortedDictionary<T, bool>(),
        common = new SortedDictionary<T, bool>();

    foreach (List<T> list in items)
    {
        if (current_common.Count == 0)
        {
            foreach (T item in list)
            {
                common[item] = true;
            }
        }
        else
        {
            foreach (T item in list)
            {
                if (current_common.ContainsKey(item))
                    common[item] = true;
                else
                    common[item] = false;
            }
        }

        if (common.Count == 0)
        {
            current_common.Clear();
            break;
        }

        SortedDictionary<T, bool> swap = current_common;
        current_common = common;
        common = swap;
        common.Clear();
    }

    return current_common.Keys;
}

It works by creating a set of all items common to all lists processed so far and comparing each list with this set, creating a temporary set of the items common to the current list and the list of common items so far. Effectively O(n·m), where n is the number of lists and m is the number of items in the lists.
An example of using it:

static void Main(string[] args)
{
    Random random = new Random();
    List<List<int>> items = new List<List<int>>();

    for (int i = 0; i < 10; ++i)
    {
        List<int> list = new List<int>();
        items.Add(list);
        for (int j = 0; j < 100; ++j)
        {
            list.Add(random.Next(70));
        }
    }

    SortedDictionary<int, bool>.KeyCollection common = FindCommon(items);

    foreach (List<int> list in items)
    {
        list.Sort();
    }

    for (int i = 0; i < 100; ++i)
    {
        for (int j = 0; j < 10; ++j)
        {
            System.Diagnostics.Trace.Write(String.Format("{0,-4:D} ", items[j][i]));
        }
        System.Diagnostics.Trace.WriteLine("");
    }

    foreach (int item in common)
    {
        System.Diagnostics.Trace.WriteLine(String.Format("{0,-4:D} ", item));
    }
}

A: Building on Matt's answer, since we are only interested in options that all lists have in common, we can simply check for any options in the first list that the others share:

var sharedOptions = from option in optionLists.First().Distinct()
                    where optionLists.Skip(1).All(l => l.Contains(option))
                    select option;

If an option list cannot contain duplicate entries, the Distinct call is unnecessary. If the lists vary greatly in size, it would be better to iterate over the options in the shortest list, rather than whatever list happens to be First. Sorted or hashed collections could be used to improve the lookup time of the Contains call, though it should not make much difference for a moderate number of items.

A: Fastest to write :)

var subset = optionLists.Aggregate((x, y) => x.Intersect(y))

A: What about using a HashSet? That way you can do what you want in O(n), where n is the number of items in all the lists combined, and I think that's the fastest way to do it. You just have to iterate over every list and insert the values you find into the hashset. When you insert a key that already exists, you will receive false as the return value of the .Add method; otherwise, true is returned.

A: Sort, then do something akin to a merge-sort.
Basically you would do this:

* *Retrieve the first item from each list
*Compare the items; if equal, output
*If any of the items are before the others, sort-wise, retrieve a new item from the corresponding list to replace it; otherwise, retrieve new items to replace them all, from all the lists
*As long as you still have items, go back to 2.

A: I don't have the performance stats, but if you don't want to roll your own method, various collections libraries have a 'Set' or 'Set(T)' object that offers the usual set procedures (listed in the order I would use them):

* *IESI Collections (literally just Set classes)
*PowerCollections (not updated in a while)
*C5 (never personally used)

A: You can do this by counting occurrences of all items in all lists - those items whose occurrence count is equal to the number of lists are common to all lists:

static List<T> FindCommon<T>(IEnumerable<List<T>> lists)
{
    Dictionary<T, int> map = new Dictionary<T, int>();
    int listCount = 0; // number of lists

    foreach (IEnumerable<T> list in lists)
    {
        listCount++;
        foreach (T item in list)
        {
            // Item encountered, increment count
            int currCount;
            if (!map.TryGetValue(item, out currCount))
                currCount = 0;
            currCount++;
            map[item] = currCount;
        }
    }

    List<T> result = new List<T>();
    foreach (KeyValuePair<T, int> kvp in map)
    {
        // Items whose occurrence count is equal to the number of lists are common to all the lists
        if (kvp.Value == listCount)
            result.Add(kvp.Key);
    }
    return result;
}

A: /// <summary>
/// The method FindCommonItems returns a list of all the COMMON ITEMS in the lists contained in the listOfLists.
/// The method expects lists containing NO DUPLICATE ITEMS.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="allSets"></param>
/// <returns></returns>
public static List<T> FindCommonItems<T>(IEnumerable<List<T>> allSets)
{
    Dictionary<T, int> map = new Dictionary<T, int>();
    int listCount = 0; // Number of lists.
    foreach (IEnumerable<T> currentSet in allSets)
    {
        int itemsCount = currentSet.ToList().Count;
        HashSet<T> uniqueItems = new HashSet<T>();
        bool duplicateItemEncountered = false;
        listCount++;
        foreach (T item in currentSet)
        {
            if (!uniqueItems.Add(item))
            {
                duplicateItemEncountered = true;
            }
            if (map.ContainsKey(item))
            {
                map[item]++;
            }
            else
            {
                map.Add(item, 1);
            }
        }
        if (duplicateItemEncountered)
        {
            uniqueItems.Clear();
            List<T> duplicateItems = new List<T>();
            StringBuilder currentSetItems = new StringBuilder();
            List<T> currentSetAsList = new List<T>(currentSet);
            for (int i = 0; i < itemsCount; i++)
            {
                T currentItem = currentSetAsList[i];
                if (!uniqueItems.Add(currentItem))
                {
                    duplicateItems.Add(currentItem);
                }
                currentSetItems.Append(currentItem);
                if (i < itemsCount - 1)
                {
                    currentSetItems.Append(", ");
                }
            }
            StringBuilder duplicateItemsNamesEnumeration = new StringBuilder();
            int j = 0;
            foreach (T item in duplicateItems)
            {
                duplicateItemsNamesEnumeration.Append(item.ToString());
                if (j < uniqueItems.Count - 1)
                {
                    duplicateItemsNamesEnumeration.Append(", ");
                }
                j++;
            }
            throw new Exception("The list " + currentSetItems.ToString() + " contains the following duplicate items: " + duplicateItemsNamesEnumeration.ToString());
        }
    }

    List<T> result = new List<T>();
    foreach (KeyValuePair<T, int> itemAndItsCount in map)
    {
        if (itemAndItsCount.Value == listCount) // Items whose occurrence count is equal to the number of lists are common to all the lists.
        {
            result.Add(itemAndItsCount.Key);
        }
    }
    return result;
}

A: @Skizz The method is not correct. It also returns items that are not common to all the lists in items. Here is the corrected method:

/// <summary>
/// The method FindAllCommonItemsInAllTheLists returns a HashSet that contains all the common items in the lists contained in the listOfLists,
/// regardless of the order of the items in the various lists.
/// </summary>
/// <typeparam name="T"></typeparam>
/// <param name="listOfLists"></param>
/// <returns></returns>
public static HashSet<T> FindAllCommonItemsInAllTheLists<T>(List<List<T>> listOfLists)
{
    if (listOfLists == null || listOfLists.Count == 0)
    {
        return null;
    }
    HashSet<T> currentCommon = new HashSet<T>();
    HashSet<T> common = new HashSet<T>();
    foreach (List<T> currentList in listOfLists)
    {
        if (currentCommon.Count == 0)
        {
            foreach (T item in currentList)
            {
                common.Add(item);
            }
        }
        else
        {
            foreach (T item in currentList)
            {
                if (currentCommon.Contains(item))
                {
                    common.Add(item);
                }
            }
        }
        if (common.Count == 0)
        {
            currentCommon.Clear();
            break;
        }
        currentCommon.Clear(); // Empty currentCommon for a new iteration.
        foreach (T item in common) /* Copy all the items contained in common to currentCommon.
                                    * currentCommon = common;
                                    * does not work, because then currentCommon and common would point at the same object and
                                    * the next statement:
                                    * common.Clear();
                                    * would also clear currentCommon. */
        {
            if (!currentCommon.Contains(item))
            {
                currentCommon.Add(item);
            }
        }
        common.Clear();
    }
    return currentCommon;
}

A: After searching the 'net and not really coming up with something I liked (or that worked), I slept on it and came up with this. My SearchResult is similar to your Option. It has an EmployeeId in it, and that's the thing I need to be common across lists. I return all records that have an EmployeeId in every list. It's not fancy, but it's simple and easy to understand, just what I like. For small lists (my case) it should perform just fine - and anyone can understand it!
private List<SearchResult> GetFinalSearchResults(IEnumerable<IEnumerable<SearchResult>> lists)
{
    Dictionary<int, SearchResult> oldList = new Dictionary<int, SearchResult>();
    Dictionary<int, SearchResult> newList = new Dictionary<int, SearchResult>();

    oldList = lists.First().ToDictionary(x => x.EmployeeId, x => x);

    foreach (List<SearchResult> list in lists.Skip(1))
    {
        foreach (SearchResult emp in list)
        {
            if (oldList.Keys.Contains(emp.EmployeeId))
            {
                newList.Add(emp.EmployeeId, emp);
            }
        }
        oldList = new Dictionary<int, SearchResult>(newList);
        newList.Clear();
    }
    return oldList.Values.ToList();
}
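The occurrence-counting idea above isn't C#-specific; as a cross-check, here is a sketch of the same algorithm in Java (like the original, it assumes each list holds no duplicates):

```java
import java.util.*;

class CommonItems {
    // Items whose occurrence count equals the number of lists are common to all lists.
    static <T> List<T> findCommon(List<List<T>> lists) {
        Map<T, Integer> counts = new HashMap<>();
        for (List<T> list : lists) {
            for (T item : list) {
                counts.merge(item, 1, Integer::sum);
            }
        }
        List<T> result = new ArrayList<>();
        for (Map.Entry<T, Integer> e : counts.entrySet()) {
            if (e.getValue() == lists.size()) {
                result.add(e.getKey());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<List<Integer>> lists = Arrays.asList(
                Arrays.asList(1, 2, 3),
                Arrays.asList(2, 3, 4),
                Arrays.asList(0, 2, 3));
        List<Integer> common = findCommon(lists);
        Collections.sort(common);
        System.out.println(common); // [2, 3]
    }
}
```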
{ "language": "en", "url": "https://stackoverflow.com/questions/41159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Javascript spinning wait hourglass-type thing I'd like to indicate to the user of a web app that a long-running task is being performed. Once upon a time, this concept would have been communicated to the user by displaying an hourglass. Nowadays, it seems to be an animated spinning circle. (e.g., when you are loading a new tab in Firefox, or booting in Mac OS X. Coincidentally, the overflowing stack in the stackoverflow logo looks like one quarter of the circle). Is there a simple way to create this effect using Javascript (in particular, jQuery)? Ideally, I'd like to have one of these little spinners as elements in a table, to indicate to the user that the system is still active in processing a pending task (i.e., it hasn't forgotten or crashed). (Of course, I realize it's possible that the back-end has crashed and the front-end still shows an animating spinning thing; it's more for the psychological purpose of the user seeing activity). And what do you call that spinning thing, anyways?

A: This site will do it for you: ajaxload. And on OS X it's called the "Beachball", and I like to add "Of Death".

A: http://preloaders.net/en/letters_numbers_words is nice and has lots of categories down the left-hand menu that offer more than http://ajaxload.info does, plus options for size and background... ajaxload is looking pretty outdated these days.

A: Google "Ajax activity indicator" to find lots of images and image generators (the "spinning" image itself is an animated GIF). Here is one link to get you started. With the image in hand, use jQuery to toggle the visibility of the image (or perhaps its parent DIV tag). See this link for some more info. rp

A: I assume you meant something to indicate background activity during an Ajax call.
I tend to have a CSS class which sets the background image to a little animated GIF, with appropriate padding and positioning (remember to turn background-repeat off), and then add and remove that class using a couple of JavaScript helpers called when the Ajax call is started, and when the response callback runs.
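The two helpers are little more than class bookkeeping; with real jQuery they reduce to $(el).addClass('busy') and $(el).removeClass('busy'). A DOM-free sketch of that bookkeeping, so the logic is easy to follow (the 'busy' class name is made up):

```javascript
// Add a class name to a space-separated class attribute string, once.
function addClass(classAttr, cls) {
  var classes = classAttr.split(/\s+/).filter(Boolean);
  if (classes.indexOf(cls) === -1) classes.push(cls);
  return classes.join(" ");
}

// Remove a class name from a space-separated class attribute string.
function removeClass(classAttr, cls) {
  return classAttr.split(/\s+/).filter(function (c) { return c && c !== cls; }).join(" ");
}

console.log(addClass("cell", "busy"));         // "cell busy"
console.log(removeClass("cell busy", "busy")); // "cell"
```

Call addClass when the Ajax request is started and removeClass in the response callback, exactly as described above.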
{ "language": "en", "url": "https://stackoverflow.com/questions/41162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How do I write a Firefox add-on that automatically enters proxy passwords? Suppose someone worked for a company that put up an HTTP proxy preventing internet access without password authentication (NTLM, I think). Also suppose that this password rotated on a daily basis, which added very little security, but mostly served to annoy the employees. How would one get started writing a Firefox add-on that automatically entered these rotating passwords? To clarify: This add-on would not just submit the password; the add-on would programmatically generate it with some knowledge of the password rotation scheme.

A: This is built into Firefox. Open up about:config and search for 'ntlm'. The setting you're looking for is called network.automatic-ntlm-auth.trusted-uris and accepts a comma-space delimited list of your proxy server URIs. This will make Firefox automatically send hashed copies of your Windows password to the proxy, which is disabled by default for obvious reasons. IE can do this automatically because it can use security zones to figure out whether a proxy server is trusted or not. Blog post discussing this

A: It's your lucky day - no need for an add-on! How to configure Firefox for automatic NTLM authentication:

* *In Firefox, type about:config into the address bar and hit enter. You should see a huge list of configuration properties.
*Find the setting named network.negotiate-auth.delegation-uris (the easiest way to do this is to type that into the filter box at top).
*Double-click this line, and enter the names of all servers for which network authentication is desired, separated by commas. Then press 'OK' to confirm.
*Find the setting network.negotiate-auth.trusted-uris, and set it to the same value used in #3.
*Find the setting network.ntlm.send-lm-response, and set it to true.
*Skip steps 7 and 8 if you aren't using a proxy.
*Open the options dialog (Tools->Options menu), and on the Advanced page, Network tab, press the Connection Settings button to get the proxy configuration dialog:
*Make sure the correct proxy server is configured, and that the same list of servers is listed in the "No Proxy for:" entry field as was set in step #3.
*Done.
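The same preferences can also be pre-seeded in a user.js file in the Firefox profile directory, which is re-applied at every startup; a sketch, where proxy.example.com is a placeholder for your proxy host:

```
user_pref("network.automatic-ntlm-auth.trusted-uris", "proxy.example.com");
user_pref("network.negotiate-auth.delegation-uris", "proxy.example.com");
user_pref("network.negotiate-auth.trusted-uris", "proxy.example.com");
user_pref("network.ntlm.send-lm-response", true);
```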
{ "language": "en", "url": "https://stackoverflow.com/questions/41169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Can you use the JavaScript engine in web browsers to process local files? I have a number of users with multi-megabyte files that need to be processed before they can be uploaded. I am trying to find a way to do this without having to install any executable software on their machines. If every machine shipped with, say, Python, it would be easy. I could have a Python script do everything. The only scripting language I can think of that's on every machine is JavaScript. However, I know there are security restrictions that prevent reading and writing local files from web browsers. Is there any way to use this extremely pervasive scripting language for general-purpose computing tasks?

A: Would Google Gears work here? Yes, users have to install something, but I think the experience is fairly frictionless. And once it's installed, no more worries.

A: The application that I maintain and develop for work is an HTML Application, or HTA, linked with a SQL Server 2005 backend. This allows various security restrictions to be "avoided". All the client-side components in the application are done with javascript, including writing files to locally mapped network drives and loading data into screens/pages in an AJAXy way. Perhaps HTA could be helpful for your situation.

A: For an example of javascript accessing a local file, you might try taking a look at the source of TiddlyWiki, specifically the saveFile, mozillaSaveFile, and ieSaveFile functions. Just click the download link, open the html file it sends you, and search for those functions. Of course, tiddlywiki is supposed to be used as a local file, not served over the web, so the methods it uses may only work locally. But it might be a start.

A: Why not use a flash uploader?
http://swfupload.org/

A: Adobe Flex 4 lets you open and process a file on a local machine: http://livedocs.adobe.com/flex/3/langref/flash/net/FileReference.html#load() It's not exactly JavaScript, but I hope that helps.

A: I believe you can accomplish this using the HTML5 File API. It is supported in Opera, IE, Safari, Firefox, and Chrome.

A: You can use the fs module from Node.js to manipulate the filesystem nowadays!
{ "language": "en", "url": "https://stackoverflow.com/questions/41179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Vista Console App? I'm doing a fair bit of work in Ruby recently, and using

ruby script/console

is absolutely critical. However, I'm really disappointed with the default Windows console in Vista, especially in that there's a really annoying bug where moving the cursor back when at the bottom of the screen irregularly causes it to jump back. Anyone have a decent console app they use in Windows?

A: I use Console2. I like the tabbed interface and that copy works properly if text breaks at the end of a line.

A: Are you resizing the console window? I've found that the ruby scripts (irb, etc.) that use the readline library don't work correctly with resized console windows (in XP or Vista). Effectively, I believe that the readline library expects the console window to be 80 characters wide; anything else and it goes berserk. So far I haven't found a way to fix it on Windows without giving up other nice features.

A: I have had some pleasant experiences with rxvt (comes with cygwin, does not need an X server running). Putty is also often mentioned as a good alternative. You could also try to get xterm working :)

A: Powershell. Windows PowerShell is Microsoft's task automation framework, consisting of a command-line shell and associated scripting language built on top of, and integrated with, the .NET Framework. PowerShell provides full access to COM and WMI, enabling administrators to perform administrative tasks on both local and remote Windows systems.
{ "language": "en", "url": "https://stackoverflow.com/questions/41185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Waveform Visualization in Ruby I'm about to start a project that will record and edit audio files, and I'm looking for a good library (preferably Ruby, but will consider anything other than Java or .NET) for on-the-fly visualization of waveforms. Does anybody know where I should start my search?

A: That's a lot of data to be streaming into a browser. Flash or Flex charts are probably the only solution that will be memory efficient. Javascript charting tends to break down for large data sets.

A: When displaying an audio waveform, you will want to do some sort of data reduction on the original data, because there is usually more data available in an audio file than pixels on the screen. Most audio editors build a separate file (called a peak file or overview file) which stores a subset of the audio data (usually the peaks and valleys of a waveform) for use at different zoom levels. Then, as you zoom in past a certain point, you start referencing the raw audio data itself. Here are some good articles on this: "Waveform Display" and "Build an Audio Waveform Display". As far as source code goes, I would recommend looking through the Audacity source code. Audacity's waveform display is pretty good and most likely does a similar sort of data reduction when rendering the waveforms.

A: I wrote one: http://github.com/pangdudu/rude/tree/master/lib/waveform_narray_testing.rb ,nick

A: The other option is generating the waveforms on the server-side with GD or RMagick. But good luck getting RubyGD to compile.

A: Processing is often used for visualization, and it has a Ruby port: https://github.com/jashkenas/ruby-processing/wiki
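The data-reduction step described above (one min/max pair per pixel column, which is what a peak/overview file stores) is only a few lines of Ruby; a sketch over a plain sample array, since the real samples would come from the decoded audio file:

```ruby
# Reduce raw samples to one [min, max] pair per pixel column.
def peaks(samples, columns)
  slice_size = (samples.length / columns.to_f).ceil
  samples.each_slice(slice_size).map { |slice| [slice.min, slice.max] }
end

# A fake 1000-sample sine wave standing in for decoded audio.
samples = (0...1000).map { |i| Math.sin(i / 20.0) }
puts peaks(samples, 100).length # 100
```

Each pair can then be drawn as one vertical line of the waveform; at deeper zoom levels you fall back to the raw samples, exactly as the answer describes.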
{ "language": "en", "url": "https://stackoverflow.com/questions/41188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Getting image to stretch a div How can I get an image to stretch the height of a DIV class? Currently it looks like this: However, I would like the DIV to be stretched so the image fits properly, but I do not want to resize the image. Here is the CSS for the DIV (the grey box):

.product1 {
    width: 100%;
    padding: 5px;
    margin: 0px 0px 15px -5px;
    background: #ADA19A;
    color: #000000;
    min-height: 100px;
}

The CSS being applied on the image:

.product {
    display: inline;
    float: left;
}

So, how can I fix this?

A: In the markup after the image, insert something like <div style="clear:left"/>. A bit messy, but it's the easiest way I've found. And while you're at it, put a bit of margin on that image so the text doesn't butt up against it.

A: Assuming @John Millikin is correct, the code

.product + * { clear: left; }

would suffice to do the same thing without forcing you to manually adjust the code after the div.

A: One trick you can use is to set the <div>'s overflow property to hidden. This forces browsers to calculate the physical size of the box, and fixes the weird overlap problem with the floated image. It will save you from adding in any extra HTML markup. Here's how the class should look:

.product1 {
    width: 100%;
    padding: 5px;
    margin: 0px 0px 15px -5px;
    background: #ADA19A;
    color: #000000;
    min-height: 100px;
    overflow: hidden;
}

A: Add overflow:auto; to .product1

A: This looks like a job for clearfix to me ...

A: Try the following:

.Stretch {
    background: url(image.jpg);
    background-size: 100% 100%;
    background-repeat: no-repeat;
    width: 500px;
    height: 500px;
}

A: display:inline and float:left are your problem. Floating makes the parent's width not be stretched by the child; try placing the image without the float. If you take the float off, it should give you the desired effect. Another approach would be to make sure you are clearing your floats at the end of the parent element so that they don't scope creep.
Update: After viewing your link, your height issue as displayed is because the floats are not being cleared.
{ "language": "en", "url": "https://stackoverflow.com/questions/41198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Can Ruby convert an acts_as_nested_set to a JSON hash cleanly without recursion? Is there a fast and clean way of returning a JSON hash back from any node in a Ruby on Rails' acts_as_nested_set without using recursion? Here's the recursive solution for reference:

class Node < ActiveRecord::Base
  has_many :products

  def json_hash
    if children.size > 0
      children.collect { |node| { node.name => node.json_hash } }.to_json
    else
      { node.name => node.products.find(:all).collect(&:name) }.to_json
    end
  end
end

A: There is a wikipedia article on tree traversal which shows different alternatives to the recursive solution you are using. It may be tricky to use them in your specific case, but it should be possible. However, my question to you is: is there a specific reason you want to use iteration instead of recursion? I don't think any of the iterative solutions will be nearly as clean. Are your trees so big that you are running out of stack space (they would have to be pretty big)? Otherwise, I'm not so sure an iterative solution will really be faster. I see one potential for improvement though, if you are seeing performance issues... but I don't know Rails, so I'm not sure if it is accurate: Does the find method return a new array? If so, you probably want to invoke .collect! instead of .collect, because if find creates an array, you are just creating an array and then throwing it away with the call to collect (which also creates an array), which surely is not going to be very efficient and may slow you down a lot if you have a big tree there. So

{ node.name => node.products.find(:all).collect(&:name) }.to_json

might become

{ node.name => node.products.find(:all).collect!(&:name) }.to_json

EDIT: Also, it may be more efficient to create your hash of hashes, and then convert the whole thing to JSON in one fell swoop, rather than converting it piecemeal like you are doing.
So

class Node < ActiveRecord::Base
  has_many :products

  def json_hash
    if children.size > 0
      children.collect { |node| { node.name => node.json_hash } }.to_json
    else
      { node.name => node.products.find(:all).collect!(&:name) }.to_json
    end
  end
end

might become

class Node < ActiveRecord::Base
  has_many :products

  def json_hash
    to_hash.to_json
  end

  def to_hash
    if children.size > 0
      children.collect { |node| { node.name => node.to_hash } }
    else
      { node.name => node.products.find(:all).collect!(&:name) }
    end
  end
end

Whether this works and is more efficient I leave as an exercise for you ;-)

A: JSONifier!

node.to_json(:include=>{:products=>{:include=>:product_parts}})
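For the iterative route the question asks about, the recursion can be replaced with an explicit stack that hands each child a placeholder hash to fill in; a sketch using a plain Struct stand-in for the ActiveRecord model (with product names as plain arrays), since a runnable example can't hit the database:

```ruby
require 'json'

# Stand-in for the ActiveRecord model; the real Node would load
# children and products from the database.
Node = Struct.new(:name, :children, :products)

# Build the nested hash with an explicit stack instead of recursion.
def tree_to_hash(root)
  result = {}
  stack = [[root, result]]
  until stack.empty?
    node, out = stack.pop
    if node.children.empty?
      out[node.name] = node.products
    else
      out[node.name] = node.children.map do |child|
        holder = {}              # filled in when child is popped
        stack.push([child, holder])
        holder
      end
    end
  end
  result
end

leaves = [Node.new("b", [], ["p1"]), Node.new("c", [], ["p2"])]
root = Node.new("a", leaves, [])
puts tree_to_hash(root).to_json # {"a":[{"b":["p1"]},{"c":["p2"]}]}
```

Calling to_json once on the finished hash also avoids the double-encoding that per-node to_json calls produce in the recursive version.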
{ "language": "en", "url": "https://stackoverflow.com/questions/41204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: JavaScript interactive shell with completion For debugging and testing I'm searching for a JavaScript shell with auto completion and if possible object introspection (like ipython). The online JavaScript Shell is really nice, but I'm looking for something local, without the need for a browser. So far I have tested the standalone JavaScript interpreters rhino, spidermonkey and google V8. But none of them has completion. At least Rhino with jline and spidermonkey have some kind of command history via key up/down, but nothing more. Any suggestions? This question was asked again here. It might contain an answer that you are looking for. A: In Windows, you can run this file from the command prompt in cscript.exe, and it provides a simple interactive shell. No completion. // shell.js // ------------------------------------------------------------------ // // implements an interactive javascript shell. // // from // http://kobyk.wordpress.com/2007/09/14/a-jscript-interactive-interpreter-shell-for-the-windows-script-host/ // // Sat Nov 28 00:09:55 2009 // var GSHELL = (function () { var numberToHexString = function (n) { if (n >= 0) { return n.toString(16); } else { n += 0x100000000; return n.toString(16); } }; var line, scriptText, previousLine, result; return function() { while(true) { WScript.StdOut.Write("js> "); if (WScript.StdIn.AtEndOfStream) { WScript.Echo("Bye."); break; } line = WScript.StdIn.ReadLine(); scriptText = line + "\n"; if (line === "") { WScript.Echo( "Enter two consecutive blank lines to terminate multi-line input."); do { if (WScript.StdIn.AtEndOfStream) { break; } previousLine = line; line = WScript.StdIn.ReadLine(); line += "\n"; scriptText += line; } while(previousLine != "\n" || line != "\n"); } try { result = eval(scriptText); } catch (error) { WScript.Echo("0x" + numberToHexString(error.number) + " " + error.name + ": " + error.message); } if (result) { try { WScript.Echo(result); } catch (error) { WScript.Echo("<<>>"); } } result = 
null; } }; })(); GSHELL(); If you want, you can augment that with other utility libraries, with a .wsf file. Save the above to "shell.js", and save the following to "shell.wsf": <job> <reference object="Scripting.FileSystemObject" /> <script language="JavaScript" src="util.js" /> <script language="JavaScript" src="shell.js" /> </job> ...where util.js is: var quit = function(x) { WScript.Quit(x);} var say = function(s) { WScript.Echo(s); }; var echo = say; var exit = quit; var sleep = function(n) { WScript.Sleep(n*1000); }; ...and then run shell.wsf from the command line. A: edit: after using the node REPL a bit more, I've discovered this evaluation to be overly positive. There are some serious problems with its implementation, including an inability to yank killed text, issues with editing lines that are longer than the terminal width, and some other problems. It might be better to just use rhino. The node.js REPL (node-repl on a system with node installed) is the best terminal-based, system-context shell I've seen so far. I'm comparing it to rhino and the built-in v8 shell. It provides tab-completion and line editing history, as well as syntax-colouring of evaluations. You can also import CommonJS modules, or at least those modules implemented by node. Downside is that you have to build node. This is not a huge deal, as building apps goes, but might be a challenge if you don't normally do such things. A: Isn't Rhino Shell what you are looking for? A: Rhino Shell since 1.7R2 has support for completion as well. You can find more information here. A: This post by John Resig says that there are shells for Tamarin (Firefox 4?) and JavaScriptCore (Safari 3). I'm not sure if they have auto completion though. A: jslibs (a standalone javascript runtime) could also be suitable for this purpose.
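The WSH script above is at heart the classic read-eval-print loop: prompt, read a line, eval it, print the result or a formatted error. The evaluation step can be sketched language-agnostically (in Python; the function name is mine, not part of any shell discussed here):

```python
def repl_eval(source, env):
    """Evaluate one line the way the WSH shell above does:
    return the result on success, or a formatted error string
    (the JScript version prints the error number and message)."""
    try:
        return eval(source, env)  # expressions only, like JScript's eval
    except Exception as error:
        return "%s: %s" % (type(error).__name__, error)

env = {}
print(repl_eval("1 + 2", env))           # 3
print(repl_eval("undefined_name", env))  # NameError: ...
```

A real shell wraps this in a while-loop with a prompt, exactly as GSHELL does; completion is the separate (and harder) part the question is really about.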
{ "language": "en", "url": "https://stackoverflow.com/questions/41207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: Testing HTTPS files with MAMP I am running MAMP locally on my laptop, and I like to test as much as I can locally. Unfortunately, since I work on e-commerce stuff (PHP), I normally force ssl in most of the checkout forms and it just fails on my laptop. Is there any easy configuration that I might be missing to allow "https" to run under MAMP? Please note, I know that I could configure Apache by hand, re-compile PHP, etc. but I'm just wondering if there's an easier way for a lazy programmer. Thanks A: First, make a duplicate of /Applications/MAMP. Open /Applications/MAMP/conf/apache/httpd.conf Below the line # LoadModule foo_module modules/mod_foo.so you add LoadModule ssl_module modules/mod_ssl.so Remove all lines <IfDefine SSL> as well as </IfDefine SSL>. Open /Applications/MAMP/conf/apache/ssl.conf Remove all lines <IfDefine SSL> as well as </IfDefine SSL>. Find the lines defining SSLCertificateFile and SSLCertificateKeyFile, set them to SSLCertificateFile /Applications/MAMP/conf/apache/ssl/server.crt SSLCertificateKeyFile /Applications/MAMP/conf/apache/ssl/server.key Create a new folder /Applications/MAMP/conf/apache/ssl Drop into the terminal and navigate to the new folder cd /Applications/MAMP/conf/apache/ssl Create a private key, giving a password openssl genrsa -des3 -out server.key 1024 Remove the password cp server.key server-pw.key openssl rsa -in server-pw.key -out server.key Create a certificate signing request, pressing return for default values openssl req -new -key server.key -out server.csr Create a certificate openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt Restart your server. If you encounter any problems check the system log file. The first time you visit https://localhost/ you will be asked to accept the certificate. A: There doesn't seem to be an easier way, unless you're willing to buy MAMP Pro. As far as I know, the only way to use SSL with MAMP is to configure mod_ssl for Apache. 
mod_ssl is bundled with MAMP, and I found configuration to be pretty straightforward. Note that you'll probably have to start Apache from the command line to use it: /Applications/MAMP/bin/apache2/bin$ ./apachectl stop /Applications/MAMP/bin/apache2/bin$ sudo ./apachectl startssl A: NOTE: startssl is no longer supported after version 2+ of MAMP. You have to update the config files (httpd.conf) to enable ssl. You can modify the free version of MAMP to enable ssl by default very easily. Once you have setup all the SSL parts of apache and have it working so that calling apachectl startssl works, just edit the file /Applications/MAMP/startApache.sh in your favorite text editor and change the start argument to startssl and you will have the MAMP launcher starting apache in ssl mode for you.
{ "language": "en", "url": "https://stackoverflow.com/questions/41218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Is there a best .NET algorithm for credit card encryption? The .NET System.Security.Cryptography namespace has a rather bewildering collection of algorithms that I could use for encryption of credit card details. Which is the best? It clearly needs to be secure for a relatively short string. EDIT: I'm in the UK, where I understand we're OK storing encrypted credit card details so long as the three-digit CVV number is never stored. And thanks all for the great responses. A: I'd add to the view that you just plain shouldn't store them unless you have a really really good reason to, and storing them in a cookie is a really bad idea - they're just too easy to get hold of (what happens if someone steals a cookie - then it won't matter how encrypted it is). If you need to do repeat payments, most CC providers will offer a way to do this by storing some kind of token from the initial payment, without keeping the card number at all (you could just keep the last 4 digits to display to the customer so that they know which card is stored). Really, just don't do it! Also you should never ever ever ever keep the CCV code. A: As per PCI DSS compliance rules, any industry leading encryption standard is enough. So a 3DES with a 256 bit key is good enough (although other standards can be used). Check this out http://pcianswers.com/2006/08/09/methods-of-encrypting-data/ A: No offense, but the question is a little "misguided". There is no "silver bullet" solution. I would recommend to read up on cryptography in general and then do some threat modeling. Some questions (by no means a comprehensive list) you should ask yourself: * *Is the module doing the encryption the one which needs to decrypt it (in this case use symmetric crypto) or will it send data to an other module (on an other machine) which will use it (in which case you should consider public-key crypto) *What do you want to protect against? 
Someone accessing the database but not having the sourcecode (in which case you can hardcode the encryption key directly into the source)? Someone sniffing your local network (you should consider transparent solutions like IPSec)? Someone stealing your server (it can happen even in data centers - in which case full disk encryption should be considered)? *Do you really need to keep the data? Can't you directly pass it to the credit card processor and erase it after you get the confirmation? Can't you store it locally at the client in a cookie or Flash LSO? If you store it at the client, make sure that you encrypt it at the server side before putting it in a cookie. Also, if you are using cookies, make sure that you make them HttpOnly. *Is it enough to compare the equality of the data (i.e. the data that the client has given me is the same data that I have)? If so, consider storing a hash of it. Because credit card numbers are relatively short and use a reduced set of symbols, a unique salt should be generated for each before hashing. Later edit: note that standard encryption algorithms from the same category (for example 3DES and AES - both being symmetric block cyphers) are of comparable strength. Most (commercial) systems are not broken because somebody bruteforced their encryption, but because their threat modelling was not detailed enough (or flat out they didn't have any). For example you can encrypt all the data, but if you happen to have a public facing web interface which is vulnerable to SQL injection, it won't help you much. A: It doesn't matter. Full card numbers should never touch disk. All that matters is the auth code. For traces etc you will only use the last 4 digits xxxx xxxx xxxx 1234 and the expiry date. If you are to store card numbers the cryptography choice will be mandated by the acquiring bank. Unless you are the acquirer, in which case there should be an old unix programmer/db2 guy that you should ask. 
"Can't you store it locally at the client in a cookie" <-- NEVER A: Don't forget about integrity here. There are forgery attacks against out-of-the-box crypto when the attacker doesn't know the key, but can manipulate the ciphertext. These can be particularly nasty when: * *encrypting short strings, *with known substrings That's exactly the case for credit cards. So using System.Security.Cryptography AES or 3DES in CBC mode without rolling your own checksum can be dangerous. Read: there's some chance an attacker without the secret key could replace one credit card number with another. A: If you are using a 3rd party payment gateway, you don't need to store the numbers. There is no point. A: 3des is pretty good, store the salt along side, and keep a standard key somewhere not in the database or a config file. That way if you get pwned, they can't decrypt it. A: There's also the legal aspect to consider. I don't know the situation elsewhere but in Germany you're simply not allowed to store credit card numbers1). Period. It doesn't matter whether you encrypt them or not and in what format you store them. All you may do (and here I'm referring from memory, without any judicial knowledge) is store a strong hash (SHA-256?) of the credit card number, along with the last four digits and the account number. And yes, it's trivial to rebuild the complete number from these information alone. Laws aren't always logical. 1) Except if you're a federally certified credit card institute. A: Hint: You should investigate if it's legal to store credit card numbers. In Sweden for example you will have to be certified by PCI (Payment Card Industry), where your internal and external security will be tested (a long with a lot of other things). You should think both once or twice before storing credit card information, since the legal costs of doing it wrong might be expensive. A: Encrypt the credit card with a public key. Give the private key to the payment processor machine only. 
The payment processor machine can then query the database and do the work, and nobody else, not even the machine which added the entry, will be able to decrypt it. Something like PHP's openssl_seal, though perhaps with a different algorithm if you want.
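Several answers above suggest storing only a salted hash when all you need is an equality check. A minimal sketch of that idea (in Python, purely illustrative; a real system must follow PCI DSS, and a slow KDF such as PBKDF2 is used here precisely because the card-number space is small enough to brute-force a fast hash):

```python
import hashlib
import hmac
import os

def hash_card(card_number, salt=None):
    """Return (salt, digest) for a card number. A unique random salt per
    card defeats precomputed lookups over the small PAN space."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", card_number.encode(), salt, 100_000)
    return salt, digest

def matches(card_number, salt, stored_digest):
    """Re-derive the digest and compare in constant time."""
    _, digest = hash_card(card_number, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_card("4111111111111111")  # a well-known test PAN
assert matches("4111111111111111", salt, stored)
assert not matches("4000000000000002", salt, stored)
```

This only supports "is it the same card?" checks; if you ever need the number back (e.g. to charge it), hashing is the wrong tool and the public-key scheme in the last answer applies instead.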
{ "language": "en", "url": "https://stackoverflow.com/questions/41220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Java and SQLite I'm attracted to the neatness that a single file database provides. What driver/connector library is out there to connect and use SQLite with Java? I've discovered a wrapper library, http://www.ch-werner.de/javasqlite, but are there other more prominent projects available? A: I understand you asked specifically about SQLite, but maybe HSQL database would be a better fit with Java. It is written in Java itself, runs in the JVM, supports in-memory tables etc. and all those features make it quite usable for prototyping and unit-testing. A: When you compile and run the code, you should set the classpath option's value. Just like the following: javac -classpath .;sqlitejdbc-v056.jar Text.java java -classpath .;sqlitejdbc-v056.jar Text Please pay attention to "." and the separator ";" (on Windows; on Linux it is ":") A: sqlitejdbc code can be downloaded using git from https://github.com/crawshaw/sqlitejdbc. # git clone https://github.com/crawshaw/sqlitejdbc.git sqlitejdbc ... # cd sqlitejdbc # make Note: Makefile requires the curl binary to download sqlite libraries/deps.
package com.rungeek.sqlite; import java.sql.Connection; import java.sql.DriverManager; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.Statement; public class Test { public static void main(String[] args) throws Exception { Class.forName("org.sqlite.JDBC"); Connection conn = DriverManager.getConnection("jdbc:sqlite:test.db"); Statement stat = conn.createStatement(); stat.executeUpdate("drop table if exists people;"); stat.executeUpdate("create table people (name, occupation);"); PreparedStatement prep = conn.prepareStatement( "insert into people values (?, ?);"); prep.setString(1, "Gandhi"); prep.setString(2, "politics"); prep.addBatch(); prep.setString(1, "Turing"); prep.setString(2, "computers"); prep.addBatch(); prep.setString(1, "Wittgenstein"); prep.setString(2, "smartypants"); prep.addBatch(); conn.setAutoCommit(false); prep.executeBatch(); conn.setAutoCommit(true); ResultSet rs = stat.executeQuery("select * from people;"); while (rs.next()) { System.out.println("name = " + rs.getString("name")); System.out.println("job = " + rs.getString("occupation")); } rs.close(); conn.close(); } } A: The example code leads to a memory leak in Tomcat (after undeploying the webapp, the classloader still remains in memory) which will cause an outofmemory eventually. The way to solve it is to use the sqlite-jdbc-3.7.8.jar; it's a snapshot, so it doesn't appear for maven yet. A: The wiki lists some more wrappers: * *Java wrapper (around a SWIG interface): http://tk-software.home.comcast.net/ *A good tutorial to use JDBC driver for SQLite. (it works at least !) http://www.ci.uchicago.edu/wiki/bin/view/VDS/VDSDevelopment/UsingSQLite *Cross-platform JDBC driver which uses embedded native SQLite libraries on Windows, Linux, OS X, and falls back to pure Java implementation on other OSes: https://github.com/xerial/sqlite-jdbc (formerly zentus) *Another Java - SWIG wrapper. It only works on Win32. 
http://rodolfo_3.tripod.com/index.html *sqlite-java-shell: 100% pure Java port of the sqlite3 commandline shell built with NestedVM. (This is not a JDBC driver). *SQLite JDBC Driver for Mysaifu JVM: SQLite JDBC Driver for Mysaifu JVM and SQLite JNI Library for Windows (x86) and Linux (i386/PowerPC). A: David Crawshaw's project (sqlitejdbc-v056.jar) seems out of date; its last update was Jun 20, 2009. I would recommend Xerial's fork of Crawshaw's sqlite wrapper. I replaced sqlitejdbc-v056.jar with Xerial's sqlite-jdbc-3.7.2.jar file without any problem. It uses the same syntax as in Bernie's answer, is much faster, and ships the latest sqlite library. What is different from Zentus's SQLite JDBC? The original Zentus's SQLite JDBC driver http://www.zentus.com/sqlitejdbc/ itself is an excellent utility for using SQLite databases from the Java language, and our SQLiteJDBC library also relies on its implementation. However, its pure-java version, which totally translates the C/C++ code of SQLite into Java, is significantly slower compared to its native version, which uses SQLite binaries compiled for each OS (win, mac, linux). To use the native version of sqlite-jdbc, the user had to set a path to the native code (dll, jnilib, so files, which are JNI C programs) by using command-line arguments, e.g., -Djava.library.path=(path to the dll, jnilib, etc.), or -Dorg.sqlite.lib.path, etc. This process was error-prone, and it was bothersome to tell every user to set these variables. Our SQLiteJDBC library completely does away with these inconveniences. Another difference is that we are keeping this SQLiteJDBC library up to date with the newest version of the SQLite engine, because we are one of the heaviest users of this library. For example, SQLite JDBC is a core component of the UTGB (University of Tokyo Genome Browser) Toolkit, which is our utility to create personalized genome browsers. EDIT: As usual when you update something, there will be problems in some obscure place in your code (happened to me). 
Test test test =) A: There is a new project SQLJet that is a pure Java implementation of SQLite. It doesn't support all of the SQLite features yet, but may be a very good option for some of the Java projects that work with SQLite databases. A: Typo: java -cp .:sqlitejdbc-v056.jar Test should be: java -cp .:sqlitejdbc-v056.jar; Test notice the semicolon after ".jar". I hope that helps people; it could cause a lot of hassle.
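For comparison, the same create/insert/select exercise against an in-memory database (the kind of setup the HSQL answer above recommends for prototyping and unit tests) is equally short with SQLite itself; a sketch using Python's stdlib sqlite3 module, shown only to illustrate how little the embedded-database workflow differs from the JDBC example above:

```python
import sqlite3

# ":memory:" gives a throwaway in-memory database, handy for unit tests
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE people (name TEXT, occupation TEXT)")
# parameterized inserts, like the PreparedStatement batch in the Java example
cur.executemany("INSERT INTO people VALUES (?, ?)",
                [("Gandhi", "politics"), ("Turing", "computers")])
conn.commit()
rows = cur.execute("SELECT name, occupation FROM people ORDER BY name").fetchall()
print(rows)  # [('Gandhi', 'politics'), ('Turing', 'computers')]
conn.close()
```

The JDBC flow in the answer above (connect, prepared statement, batch, result set) maps one-to-one onto these calls.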
{ "language": "en", "url": "https://stackoverflow.com/questions/41233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "335" }
Q: Apache serving files that should not be served Today I discovered that my fresh installation of Apache HTTP Server is able to serve files from my C:\uploads\ directory. I have two folders in C:\uploads: * *C:\uploads\templates *C:\uploads\sites Both folders contain testimage.jpg. I found that Apache will serve the files from the templates folder if I request: http://localhost/templates/testimage.jpg However, http://localhost/sites/testimage.jpg 404's! OMG - firstly, why does Apache serve the templates folder in the first place? Is it special? Secondly, by what arbitrary set of rules does Apache disallow access to other folders such as the sites folder? I'm so confused. Perhaps I've taken a wrong turn somewhere during the installation.
{ "language": "en", "url": "https://stackoverflow.com/questions/41234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: I am having trouble getting phpBB to authenticate with our Active Directory I am pretty sure that the settings I am using are correct, so what are all the possible things that could be wrong which I should check so that I can make authentication with our Active Directory work? A: Try testing whether PHP can connect to Active Directory <?php $ds = ldap_connect('host.ad.lan', 389); ldap_set_option($ds, LDAP_OPT_PROTOCOL_VERSION, 3); ldap_set_option($ds, LDAP_OPT_REFERRALS, 0); ldap_bind($ds, '[email protected]', 'xxx'); $sr = ldap_search($ds, 'CN=Cameron Zemek,OU=Users,OU=BRC,DC=ad,DC=lan', '(objectclass=*)', array('cn')); $entryID = ldap_first_entry($ds, $sr); $data = ldap_get_attributes($ds, $entryID); print_r($data); ldap_close($ds); What do you have as your $config['ldap_user'] and $config['ldap_uid'] ? You want to set $config['ldap_uid'] to sAMAccountName A: There is a trick to doing Active Directory auth with phpBB3. You should: * *make an account in phpBB with a name identical to some AD name *give this account admin/founder rights in phpBB *login with this account *set up auth parameters from within this account By the way, what error messages do you get from phpBB? A: @grom... thanks but, yes PHP is working just fine. I have a WordPress and a MediaWiki installation on the same server, and they are both authenticating against the same Active Directory just fine. A: phpBB3 does not offer much info about how to enable LDAPS, so I hope this helps someone... Note that you may need to actually clear all phpBB3 cookies immediately after the base installation. This will allow the admin user to see the ACP. 
Once you are able to consistently log into phpBB3 as an admin, and want to enable LDAPS authentication, do the following (tested with AD and Debian stretch): * *Obtain the root TLS certificate from your AD/LDAP Administrator, or get it yourself with something like: # openssl s_client -showcerts -connect google.com:443 See the MediaWiki documentation, as phpBB3 docs are quite sparse: https://www.mediawiki.org/wiki/Extension:LDAP_Authentication/Requirements * *Install the PEM formatted certificate with a .crt name into your OS certificate store. For Debian based systems, that would be /usr/local/share/ca-certificates then run # dpkg-reconfigure ca-certificates *Configure /etc/ldap/ldap.conf to your local settings. Note that port 3268 may not have in-built limits like 686 with AD. YMMV. *Create a special AD user for binding. Give it permissions to lookup, but not to change, attributes. Confirm that the credentials work with ldapsearch. eg: ldapsearch -x -LLL -h ad.mydomain.com -D binduser -W -z 30 -b "dc=mydomain,dc=com" searchString *Create a phpBB3 user with the same username as the above AD bind user. As the phpBB3 admin, grant the AD bind user Founder permissions. *Using a different browser, log into phpBB3 as the binduser, then set up the LDAP Authentication as that user. (As noted in the above post). *Test it! Logout of phpBB3 and then login again using the LDAP/AD credentials. If that does not work, the PHP dev documentation is quite good, and offers many comments with examples and example code to try.
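One detail worth adding to the ldap_search examples above: anything user-supplied that ends up inside a filter string should be escaped per RFC 4515, or a username like "a*(evil)" changes the meaning of the query. A hypothetical helper (shown in Python for brevity; the escaping rules themselves are language-independent, and PHP has ldap_escape for the same purpose):

```python
def escape_ldap_filter(value):
    """Escape user input for an LDAP search filter (RFC 4515).
    Backslash must be escaped first, then the other special characters."""
    for char, escaped in (("\\", "\\5c"), ("*", "\\2a"),
                          ("(", "\\28"), (")", "\\29"), ("\0", "\\00")):
        value = value.replace(char, escaped)
    return value

def samaccount_filter(username):
    # mirrors the ldap_uid = sAMAccountName setup described above
    return "(sAMAccountName=%s)" % escape_ldap_filter(username)

print(samaccount_filter("grom"))      # (sAMAccountName=grom)
print(samaccount_filter("a*(evil)"))  # (sAMAccountName=a\2a\28evil\29)
```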
{ "language": "en", "url": "https://stackoverflow.com/questions/41239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Dynamic LINQ OrderBy on IEnumerable / IQueryable I found an example in the VS2008 Examples for Dynamic LINQ that allows you to use a SQL-like string (e.g. OrderBy("Name, Age DESC")) for ordering. Unfortunately, the method included only works on IQueryable<T>. Is there any way to get this functionality on IEnumerable<T>? A: Just stumbled into this oldie... To do this without the dynamic LINQ library, you just need the code as below. This covers most common scenarios including nested properties. To get it working with IEnumerable<T> you could add some wrapper methods that go via AsQueryable - but the code below is the core Expression logic needed. public static IOrderedQueryable<T> OrderBy<T>( this IQueryable<T> source, string property) { return ApplyOrder<T>(source, property, "OrderBy"); } public static IOrderedQueryable<T> OrderByDescending<T>( this IQueryable<T> source, string property) { return ApplyOrder<T>(source, property, "OrderByDescending"); } public static IOrderedQueryable<T> ThenBy<T>( this IOrderedQueryable<T> source, string property) { return ApplyOrder<T>(source, property, "ThenBy"); } public static IOrderedQueryable<T> ThenByDescending<T>( this IOrderedQueryable<T> source, string property) { return ApplyOrder<T>(source, property, "ThenByDescending"); } static IOrderedQueryable<T> ApplyOrder<T>( IQueryable<T> source, string property, string methodName) { string[] props = property.Split('.'); Type type = typeof(T); ParameterExpression arg = Expression.Parameter(type, "x"); Expression expr = arg; foreach(string prop in props) { // use reflection (not ComponentModel) to mirror LINQ PropertyInfo pi = type.GetProperty(prop); expr = Expression.Property(expr, pi); type = pi.PropertyType; } Type delegateType = typeof(Func<,>).MakeGenericType(typeof(T), type); LambdaExpression lambda = Expression.Lambda(delegateType, expr, arg); object result = typeof(Queryable).GetMethods().Single( method => method.Name == methodName && method.IsGenericMethodDefinition && 
method.GetGenericArguments().Length == 2 && method.GetParameters().Length == 2) .MakeGenericMethod(typeof(T), type) .Invoke(null, new object[] {source, lambda}); return (IOrderedQueryable<T>)result; } Edit: it gets more fun if you want to mix that with dynamic - although note that dynamic only applies to LINQ-to-Objects (expression-trees for ORMs etc can't really represent dynamic queries - MemberExpression doesn't support it). But here's a way to do it with LINQ-to-Objects. Note that the choice of Hashtable is due to favorable locking semantics: using Microsoft.CSharp.RuntimeBinder; using System; using System.Collections; using System.Collections.Generic; using System.Dynamic; using System.Linq; using System.Runtime.CompilerServices; static class Program { private static class AccessorCache { private static readonly Hashtable accessors = new Hashtable(); private static readonly Hashtable callSites = new Hashtable(); private static CallSite<Func<CallSite, object, object>> GetCallSiteLocked( string name) { var callSite = (CallSite<Func<CallSite, object, object>>)callSites[name]; if(callSite == null) { callSites[name] = callSite = CallSite<Func<CallSite, object, object>> .Create(Binder.GetMember( CSharpBinderFlags.None, name, typeof(AccessorCache), new CSharpArgumentInfo[] { CSharpArgumentInfo.Create( CSharpArgumentInfoFlags.None, null) })); } return callSite; } internal static Func<dynamic,object> GetAccessor(string name) { Func<dynamic, object> accessor = (Func<dynamic, object>)accessors[name]; if (accessor == null) { lock (accessors ) { accessor = (Func<dynamic, object>)accessors[name]; if (accessor == null) { if(name.IndexOf('.') >= 0) { string[] props = name.Split('.'); CallSite<Func<CallSite, object, object>>[] arr = Array.ConvertAll(props, GetCallSiteLocked); accessor = target => { object val = (object)target; for (int i = 0; i < arr.Length; i++) { var cs = arr[i]; val = cs.Target(cs, val); } return val; }; } else { var callSite = GetCallSiteLocked(name); 
accessor = target => { return callSite.Target(callSite, (object)target); }; } accessors[name] = accessor; } } } return accessor; } } public static IOrderedEnumerable<dynamic> OrderBy( this IEnumerable<dynamic> source, string property) { return Enumerable.OrderBy<dynamic, object>( source, AccessorCache.GetAccessor(property), Comparer<object>.Default); } public static IOrderedEnumerable<dynamic> OrderByDescending( this IEnumerable<dynamic> source, string property) { return Enumerable.OrderByDescending<dynamic, object>( source, AccessorCache.GetAccessor(property), Comparer<object>.Default); } public static IOrderedEnumerable<dynamic> ThenBy( this IOrderedEnumerable<dynamic> source, string property) { return Enumerable.ThenBy<dynamic, object>( source, AccessorCache.GetAccessor(property), Comparer<object>.Default); } public static IOrderedEnumerable<dynamic> ThenByDescending( this IOrderedEnumerable<dynamic> source, string property) { return Enumerable.ThenByDescending<dynamic, object>( source, AccessorCache.GetAccessor(property), Comparer<object>.Default); } static void Main() { dynamic a = new ExpandoObject(), b = new ExpandoObject(), c = new ExpandoObject(); a.X = "abc"; b.X = "ghi"; c.X = "def"; dynamic[] data = new[] { new { Y = a }, new { Y = b }, new { Y = c } }; var ordered = data.OrderByDescending("Y.X").ToArray(); foreach (var obj in ordered) { Console.WriteLine(obj.Y.X); } } } A: I've stumble this question looking for Linq multiple orderby clauses and maybe this was what the author was looking for Here's how to do that: var query = pets.OrderBy(pet => pet.Name).ThenByDescending(pet => pet.Age); A: Use dynamic linq just add using System.Linq.Dynamic; And use it like this to order all your columns: string sortTypeStr = "ASC"; // or DESC string SortColumnName = "Age"; // Your column name query = query.OrderBy($"{SortColumnName} {sortTypeStr}"); A: After a lot of searching this worked for me: public static IEnumerable<TEntity> OrderBy<TEntity>(this 
IEnumerable<TEntity> source, string orderByProperty, bool desc) { string command = desc ? "OrderByDescending" : "OrderBy"; var type = typeof(TEntity); var property = type.GetProperty(orderByProperty); var parameter = Expression.Parameter(type, "p"); var propertyAccess = Expression.MakeMemberAccess(parameter, property); var orderByExpression = Expression.Lambda(propertyAccess, parameter); var resultExpression = Expression.Call(typeof(Queryable), command, new[] { type, property.PropertyType }, source.AsQueryable().Expression, Expression.Quote(orderByExpression)); return source.AsQueryable().Provider.CreateQuery<TEntity>(resultExpression); } A: Just stumbled across this question. Using Marc's ApplyOrder implementation from above, I slapped together an Extension method that handles SQL-like strings like: list.OrderBy("MyProperty DESC, MyOtherProperty ASC"); Details can be found here: http://aonnull.blogspot.com/2010/08/dynamic-sql-like-linq-orderby-extension.html A: First Install Dynamic Tools --> NuGet Package Manager --> Package Manager Console install-package System.Linq.Dynamic Add Namespace using System.Linq.Dynamic; Now you can use OrderBy("Name, Age DESC") A: You could add it: public static IEnumerable<T> OrderBy( this IEnumerable<T> input, string queryString) { //parse the string into property names //Use reflection to get and sort by properties //something like foreach( string propname in queryString.Split(',')) input.OrderBy( x => GetPropertyValue( x, propname ) ); // I used Kjetil Watnedal's reflection example } The GetPropertyValue function is from Kjetil Watnedal's answer The issue would be why? Any such sort would throw exceptions at run-time, rather than compile time (like D2VIANT's answer). If you're dealing with Linq to Sql and the orderby is an expression tree it will be converted into SQL for execution anyway. 
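For in-memory collections there is also a plain-string approach that needs no expression trees at all: parse the "Name, Age DESC" clause and apply the keys right to left, relying on a stable sort to preserve the earlier keys. A language-agnostic sketch of that idea (in Python; the names are illustrative, and like the reflection-based answers this trades compile-time safety for runtime flexibility):

```python
from operator import attrgetter

def order_by(items, clause):
    """Sort items by a 'Prop1, Prop2 DESC' style clause. Keys are applied
    right-to-left; each sort is stable, so earlier keys win on ties.
    attrgetter also accepts dotted paths for nested properties."""
    result = list(items)
    for part in reversed([p.strip() for p in clause.split(",")]):
        tokens = part.split()
        prop = tokens[0]
        descending = len(tokens) > 1 and tokens[1].upper() == "DESC"
        result.sort(key=attrgetter(prop), reverse=descending)
    return result

class Person:
    def __init__(self, name, age):
        self.name, self.age = name, age

people = [Person("Ann", 30), Person("Bob", 25), Person("Ann", 25)]
ordered = order_by(people, "name, age DESC")
print([(p.name, p.age) for p in ordered])
# [('Ann', 30), ('Ann', 25), ('Bob', 25)]
```

This is the same multi-key trick the OrderBy/ThenBy chain expresses in LINQ, only driven by the string at runtime; it does not translate to SQL, so it only fits the IEnumerable (objects-in-memory) case the question asks about.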
A: I guess it would work to use reflection to get whatever property you want to sort on: IEnumerable<T> myEnumerables var query=from enumerable in myenumerables where some criteria orderby GetPropertyValue(enumerable,"SomeProperty") select enumerable private static object GetPropertyValue(object obj, string property) { System.Reflection.PropertyInfo propertyInfo=obj.GetType().GetProperty(property); return propertyInfo.GetValue(obj, null); } Note that using reflection is considerably slower than accessing the property directly, so the performance would have to be investigated. A: Here's something else I found interesting. If your source is a DataTable, you can use dynamic sorting without using Dynamic Linq DataTable orders = dataSet.Tables["SalesOrderHeader"]; EnumerableRowCollection<DataRow> query = from order in orders.AsEnumerable() orderby order.Field<DateTime>("OrderDate") select order; DataView view = query.AsDataView(); bindingSource1.DataSource = view; reference: http://msdn.microsoft.com/en-us/library/bb669083.aspx (Using DataSetExtensions) Here is one more way to do it by converting it to a DataView: DataTable contacts = dataSet.Tables["Contact"]; DataView view = contacts.AsDataView(); view.Sort = "LastName desc, FirstName asc"; bindingSource1.DataSource = view; dataGridView1.AutoResizeColumns(); A: You can convert the IEnumerable to IQueryable. items = items.AsQueryable().OrderBy("Name ASC"); A: An alternate solution uses the following class/interface. It's not truly dynamic, but it works. 
public interface IID { int ID { get; set; } } public static class Utils { public static int GetID<T>(ObjectQuery<T> items) where T:EntityObject, IID { if (items.Count() == 0) return 1; return items.OrderByDescending(u => u.ID).FirstOrDefault().ID + 1; } } A: Thanks to Maarten (Query a collection using PropertyInfo object in LINQ) I got this solution: myList.OrderByDescending(x => myPropertyInfo.GetValue(x, null)).ToList(); In my case I was working on a "ColumnHeaderMouseClick" (WindowsForm) so just found the specific Column pressed and its correspondent PropertyInfo: foreach (PropertyInfo column in (new Process()).GetType().GetProperties()) { if (column.Name == dgvProcessList.Columns[e.ColumnIndex].Name) {} } OR PropertyInfo column = (new Process()).GetType().GetProperties().Where(x => x.Name == dgvProcessList.Columns[e.ColumnIndex].Name).First(); (be sure to have your column Names matching the object Properties) Cheers A: This answer is a response to the comments that need an example for the solution provided by @John Sheehan - Runscope Please provide an example for the rest of us. in DAL (Data Access Layer), The IEnumerable version: public IEnumerable<Order> GetOrders() { // i use Dapper to return IEnumerable<T> using Query<T> //.. do stuff return orders // IEnumerable<Order> } The IQueryable version public IQueryable<Order> GetOrdersAsQuerable() { IEnumerable<Order> qry= GetOrders(); // use the built-in extension method AsQueryable in System.Linq namespace return qry.AsQueryable(); } Now you can use the IQueryable version to bind, for example GridView in Asp.net and benefit for sorting (you can't sort using IEnumerable version) I used Dapper as ORM and build IQueryable version and utilized sorting in GridView in asp.net so easy. A: You can use this: public List<Book> Books(string orderField, bool desc, int skip, int take) { var propertyInfo = typeof(Book).GetProperty(orderField); return _context.Books .Where(...) .OrderBy(p => !desc ? 
propertyInfo.GetValue(p, null) : 0) .ThenByDescending(p => desc ? propertyInfo.GetValue(p, null) : 0) .Skip(skip) .Take(take) .ToList(); } A: You can define a dictionary from string to Func<> like this : Dictionary<string, Func<Item, object>> SortParameters = new Dictionary<string, Func<Item, object>>() { {"Rank", x => x.Rank} }; And use it like this : yourList.OrderBy(SortParameters["Rank"]); In this case you can dynamically sort by string. A: Too easy without any complication: * *Add using System.Linq.Dynamic; at the top. *Use vehicles = vehicles.AsQueryable().OrderBy("Make ASC, Year DESC").ToList(); Edit: to save some time, the System.Linq.Dynamic.Core (System.Linq.Dynamic is deprecated) assembly is not part of the framework, but can be installed from nuget: System.Linq.Dynamic.Core A: Just building on what others have said. I found that the following works quite well. public static IEnumerable<T> OrderBy<T>(this IEnumerable<T> input, string queryString) { if (string.IsNullOrEmpty(queryString)) return input; int i = 0; foreach (string propname in queryString.Split(',')) { var subContent = propname.Split('|'); if (Convert.ToInt32(subContent[1].Trim()) == 0) { if (i == 0) input = input.OrderBy(x => GetPropertyValue(x, subContent[0].Trim())); else input = ((IOrderedEnumerable<T>)input).ThenBy(x => GetPropertyValue(x, subContent[0].Trim())); } else { if (i == 0) input = input.OrderByDescending(x => GetPropertyValue(x, subContent[0].Trim())); else input = ((IOrderedEnumerable<T>)input).ThenByDescending(x => GetPropertyValue(x, subContent[0].Trim())); } i++; } return input; } A: I was trying to do this but having problems with Kjetil Watnedal's solution because I don't use the inline linq syntax - I prefer method-style syntax. My specific problem was in trying to do dynamic sorting using a custom IComparer. 
My solution ended up like this: Given an IQueryable query like so: List<DATA__Security__Team> teams = TeamManager.GetTeams(); var query = teams.Where(team => team.ID < 10).AsQueryable(); And given a run-time sort field argument: string SortField; // Set at run-time to "Name" The dynamic OrderBy looks like so: query = query.OrderBy(item => item.GetReflectedPropertyValue(SortField)); And that's using a little helper method called GetReflectedPropertyValue(): public static string GetReflectedPropertyValue(this object subject, string field) { object reflectedValue = subject.GetType().GetProperty(field).GetValue(subject, null); return reflectedValue != null ? reflectedValue.ToString() : ""; } One last thing - I mentioned that I wanted the OrderBy to use custom IComparer - because I wanted to do Natural sorting. To do that, I just alter the OrderBy to: query = query.OrderBy(item => item.GetReflectedPropertyValue(SortField), new NaturalSortComparer<string>()); See this post for the code for NaturalSortComparer(). A: you can do it like this for multiple order by IOrderedEnumerable<JToken> sort; if (query.OrderBys[0].IsDESC) { sort = jarry.OrderByDescending(r => (string)r[query.OrderBys[0].Key]); } else { sort = jarry.OrderBy(r => (string) r[query.OrderBys[0].Key]); } foreach (var item in query.OrderBys.Skip(1)) { if (item.IsDESC) { sort = sort.ThenByDescending(r => (string)r[item.Key]); } else { sort = sort.ThenBy(r => (string)r[item.Key]); } } A: Convert List to IEnumerable or Iquerable, add using System.LINQ.Dynamic namespace, then u can mention the property names in comma seperated string to OrderBy Method which comes by default from System.LINQ.Dynamic. A: var result1 = lst.OrderBy(a=>a.Name);// for ascending order. var result1 = lst.OrderByDescending(a=>a.Name);// for desc order. A: I am able to do this with the code below. No need write long and complex code. 
protected void sort_array(string field_name, string asc_desc) { objArrayList= Sort(objArrayList, field_name, asc_desc); } protected List<ArrayType> Sort(List<ArrayType> input, string property, string asc_desc) { if (asc_desc == "ASC") { return input.OrderBy(p => p.GetType() .GetProperty(property) .GetValue(p, null)).ToList(); } else { return input.OrderByDescending(p => p.GetType() .GetProperty(property) .GetValue(p, null)).ToList(); } } A: If you are using Specification (such as Ardalis Specification) using Microsoft.EntityFrameworkCore; namespace TestExtensions; public static class IQueryableExtensions { public static void ApplyOrder<T>(ISpecificationBuilder<T> query, string propertyName, bool ascendingOrder) { if (ascendingOrder) query.OrderBy(T => EF.Property<object>(T!, propertyName)); else query.OrderByDescending(T => EF.Property<object>(T!, propertyName)); } } A: With Net6 and EF .AsQueryable().OrderBy((ColumnOrder.Column, ColumnOrder.Dir));
Q: New Added Types in .NET Framework 2.0 Service Pack 1 I assumed there were only bug fixes/(no new types) in .NET 2.0 SP1 until I came across few posts which were mentioning DateTimeOffset structure, that was added in .NET 2.0 SP1. Is there a full listing of the newly added types in .NET 2.0 SP1? A: Here's what you're looking for: Full Article: http://www.hanselman.com/blog/CatchingRedBitsDifferencesInNET20AndNET20SP1.aspx This may also be helpful: Full Article: http://www.hanselman.com/blog/ChangesInTheNETBCLBetween20And35.aspx A: There were new interfaces added, like INotifyPropertyChanging, so there were new types added. The question is valid. A: DateTimeOffset was added to 2.0 SP1 - I'm not aware of any other new types. Given the coincidental timing, it's perhaps worth reminding people that 2.0 SP1 shipped with 3.5 RTM (i.e November 2007) and 2.0 SP2 shipped with 3.5 SP1. A: Based on what D2VIANT referenced Full Article: http://www.hanselman.com/blog/CatchingRedBitsDifferencesInNET20AndNET20SP1.aspx I was able to find additional resources which list the changes in .NET SP1 some of the types added/affected are listed below * *System.DateTimeOffset *System.GCCollectionMode *System.Runtime.GCLatencyMode *System.Configuration.OverrideMode *System.Data.SqlClient.SortOrder *System.Data.Design.TypedDataSetSchemaImporterExtensionFx35 *System.Data.TypedDataSetGenerator.GenerateOption *System.UriIdnScope *System.ComponentModel.INotifyPropertyChanging *System.ComponentModel.PropertyChangingEventArgs *System.ComponentModel.PropertyChangingEventHandler *System.ComponentModel.Design.Serialization.IDesignerLoaderHost2 *System.Configuration.IdnElement *System.Configuration.IriParsingElement *System.Configuration.UriSection *System.Net.Sockets.SendPacketsElement *and Many More... API Changes from org2.0 to 2.0 and New Methods and Types
Q: SQL Server Merge Replication Schedule We're replicating a database between London and Hong Kong using SQL Server 2005 Merge replication. The replication is set to synchronise every one minute and it works just fine. There is however the option to set the synchronisation to be "Continuous". Is there any real difference between replication every one minute and continuously? The only reason for us doing every one minute rather than continuous in the first place was that it recovered better if the line went down for a few minutes, but this experience was all from SQL Server 2000 so it might not be applicable any more...
A: We have been trying the continuous replication solution on SQL SERVER 2005 and it appeared to be less efficient than a scheduled solution: as your process is continuous, you will not get all the info related to your past replications (how many replications failed, how long did the process take, why was the process stopped, how many records were updated, how many database structure modifications were replicated to subscribers, and so on), making the replication follow-up a lot more difficult. We have also been experiencing troubles while modifying database structure (ALTER TABLE instructions) and/or making bulk updates on one of the databases with continuous replication going on. Keep your "every minute" synchro as it is and just forget about this "continuous" option.
Q: Can Mac OS X's Spotlight be configured to ignore certain file types? I've got bunches of auxiliary files that are generated by code and LaTeX documents that I dearly wish would not be suggested by SpotLight as potential search candidates. I'm not looking for example.log, I'm looking for example.tex! So can Spotlight be configured to ignore, say, all .log files? (I know, I know; I should just use QuickSilver instead…) @diciu That's an interesting answer. The problem in my case is this: Figure out which importer handles your type of file I'm not sure if my type of file is handled by any single importer? Since they've all got weird extensions (.aux, .glo, .out, whatever) I think it's improbable that there's an importer that's trying to index them. But because they're plain text they're being picked up as generic files. (Admittedly, I don't know much about Spotlight's indexing, so I might be completely wrong on this.) @diciu again: TextImporterDontImportList sounds very promising; I'll head off and see if anything comes of it. Like you say, it does seem like the whole UTI system doesn't really allow not searching for something. @Raynet Making the files invisible is a good idea actually, albeit relatively tedious for me to set up in the general sense. If worst comes to worst, I might give that a shot (but probably after exhausting other options such as QuickSilver). (Oh, and SetFile requires the Developer Tools, but I'm guessing everyone here has them installed anyway :) ) A: @Will - these things that define types are called uniform type identifiers. The problem is they are a combination of extensions (like .txt) and generic types (i.e. public.plain-text matches a txt file without the txt extension based purely on content) so it's not as simple as looking for an extension. RichText.mdimporter is probably the importer that imports your text file. 
This should be easily verified by running mdimport in debug mode on one of the files you don't want indexed: cristi:~ diciu$ echo "All work and no play makes Jack a dull boy" > ~/input.txt cristi:~ diciu$ mdimport -d 4 -n ~/input.txt 2>&1 | grep Imported kMD2008-09-03 12:05:06.342 mdimport[1230:10b] Imported '/Users/diciu/input.txt' of type 'public.plain-text' with plugIn /System/Library/Spotlight/RichText.mdimporter. The type that matches in my example is public.plain-text. I've no idea how you actually write an extension-based exception for an UTI (like public.plain-text except anything ending in .log). Later edit: I've also looked though the RichText mdimporter binary and found a promising string but I can't figure out if it's actually being used (as a preference name or whatever): cristi:FoodBrowser diciu$ strings /System/Library/Spotlight/RichText.mdimporter/Contents/MacOS/RichText |grep Text TextImporterDontImportList A: Not sure how to do it on a file type level, but you can do it on a folder level: Source: http://lists.apple.com/archives/spotlight-dev/2008/Jul/msg00007.html Make spotlight ignore a folder If you absolutely can't rename the folder because other software depends on it another technique is to go ahead and rename the directory to end in ".noindex", but then create a symlink in the same location pointing to the real location using the original name. Most software is happy to use the symlink with the original name, but Spotlight ignores symlinks and will note the "real" name ends in *.noindex and will ignore that location. Perhaps something like: mv OriginalName OriginalName.noindex ln -s OriginalName.noindex OriginalName ls -l lrwxr-xr-x 1 andy admin 24 Jan 9 2008 OriginalName -> OriginalName.noindex drwxr-xr-x 11 andy admin 374 Jul 11 07:03 Original.noindex A: Here's how it might work. Note: this is not a very good solution as a system update will overwrite changes you will perform. 
Get a list of all importers cristi:~ diciu$ mdimport -L 2008-09-03 10:42:27.144 mdimport[727:10b] Paths: id(501) ( "/System/Library/Spotlight/Audio.mdimporter", "/System/Library/Spotlight/Chat.mdimporter", "/Developer/Applications/Xcode.app/Contents/Library/Spotlight/SourceCode.mdimporter", Figure out which importer handles your type of file (example for the Audio importer): cristi:~ diciu$ cat /System/Library/Spotlight/Audio.mdimporter/Contents/Info.plist [..] CFBundleTypeRole MDImporter LSItemContentTypes public.mp3 public.aifc-audio public.aiff-audio Alter the importer's plist to delete the type you want to ignore. Reimport the importer's types so the system picks up the change: mdimport -r /System/Library/Spotlight/Chat.mdimporter A: The only option probably is to have them not indexed by spotlight as from some reason you cannot do negative searches. You can search for files with specifix file extension, but you cannot not search for ones that don't match. You could try making those files invisible for Finder, Spotlight won't index invisible files. Command for setting the kIsInvisible flag on files is: SetFile -a v [filename(s)]
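A quick way to convince yourself that the rename-and-symlink trick from the earlier answer leaves other software working is to try the mechanics on a scratch directory. A sketch ("Projects" is just a stand-in folder name; the Spotlight-ignoring effect itself only shows up once mds re-indexes on an actual Mac):

```shell
# Hide a folder from Spotlight by renaming it to *.noindex and leaving
# a symlink behind under the original name. Done here in a scratch dir.
cd "$(mktemp -d)"
mkdir Projects                    # stand-in for the real folder

mv Projects Projects.noindex      # Spotlight skips *.noindex locations
ln -s Projects.noindex Projects   # other software keeps the old name

ls -l Projects                    # a symlink pointing at Projects.noindex
```

Spotlight ignores symlinks and notes the real name ends in .noindex, so nothing there gets indexed, while anything that opens Projects by its original path still works.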
Q: File Access Strategy in a Multi-Threaded Environment (Web App) I have a file which is an XML representation of some data that is taken from a Web service and cached locally within a Web Application. The idea being is that this data is very static, but just might change. So I have set it up to cache to a file, and stuck a monitor against it to check if it has been deleted. Once deleted, the file will be refreshed from its source and rebuilt. I am now running in to problems though, because obviously in a multi-threaded environment it falls over as it is trying to access the data when it is still reading/writing the file. This is confusing me, because I added a object to lock against, and this is always locked during read/write. It was my understanding that attempted access from other threads would be told to "wait" until the lock was released? Just to let you know, I am real new to multi-threaded development, so I am totally willing to accept this is a screw up on my part :) * *Am I missing something? *What is the best file access strategy in a multi-threaded environment? Edit Sorry - I should have said this is using ASP.NET 2.0 :) A: Here is the code that I use to make sure a file is not locked by another process. It's not 100% foolproof, but it gets the job done most of the time: /// <summary> /// Blocks until the file is not locked any more. /// </summary> /// <param name="fullPath"></param> bool WaitForFile(string fullPath) { int numTries = 0; while (true) { ++numTries; try { // Attempt to open the file exclusively. 
using (FileStream fs = new FileStream(fullPath, FileMode.Open, FileAccess.ReadWrite, FileShare.None, 100)) { fs.ReadByte(); // If we got this far the file is ready break; } } catch (Exception ex) { Log.LogWarning( "WaitForFile {0} failed to get an exclusive lock: {1}", fullPath, ex.ToString()); if (numTries > 10) { Log.LogWarning( "WaitForFile {0} giving up after 10 tries", fullPath); return false; } // Wait for the lock to be released System.Threading.Thread.Sleep(500); } } Log.LogTrace("WaitForFile {0} returning true after {1} tries", fullPath, numTries); return true; } Obviously you can tweak the timeouts and retries to suit your application. I use this to process huge FTP files that take a while to be written. A: If you're locking on a object stored as a static then the lock should work for all threads in the same Application Domain, but perhaps you need to upload a code sample so we can have a look at the offending lines. That said, one thought would be to check if IIS is configured to run in Web Garden mode (i.e. more than 1 process executing your application) which would break your locking logic. While you could fix such a situation with a mutex it'd be easier to reconfigure your application to execute in a single process, although you'd be wise to check the performance before and after messing with the web garden settings as it can potentially affect performance. A: You could maybe create the file with a temporary name ("data.xml_TMP"), and when it's ready change the name to what it is supposed to be. That way, no other process will be accessing it before it is ready. A: OK, I have been working on this and ended up creating a stress-test module to basically hammer the crap out of my code from several threads (See Related Question). It was much easier from this point on to find holes in my code. 
It turns out that my code wasn't actually far off, but there was a certain logic path that it could enter in to which basically caused read/write operations to stack up, meaning if they didn't get cleared in time, it would go boom! Once I took that out, ran my stress test again, all worked fine! So, I didn't really do anything special in my file access code, just ensured I used lock statements where appropriate (i.e. when reading or writing). A: How about using AutoResetEvent to communicate between threads? I created a console app which creates roughly 8 GB file in createfile method and then copy that file in main method static AutoResetEvent waitHandle = new AutoResetEvent(false); static string filePath=@"C:\Temp\test.txt"; static string fileCopyPath=@"C:\Temp\test-copy.txt"; static void Main(string[] args) { Console.WriteLine("in main method"); Console.WriteLine(); Thread thread = new Thread(createFile); thread.Start(); Console.WriteLine("waiting for file to be processed "); Console.WriteLine(); waitHandle.WaitOne(); Console.WriteLine(); File.Copy(filePath, fileCopyPath); Console.WriteLine("file copied "); } static void createFile() { FileStream fs= File.Create(filePath); Console.WriteLine("start processing a file "+DateTime.Now); Console.WriteLine(); using (StreamWriter sw = new StreamWriter(fs)) { for (long i = 0; i < 300000000; i++) { sw.WriteLine("The value of i is " + i); } } Console.WriteLine("file processed " + DateTime.Now); Console.WriteLine(); waitHandle.Set(); }
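The temporary-name suggestion a few answers up works because renaming within one volume is effectively atomic: readers see either the old cache file or the completed new one, never a half-written file. A minimal sketch of the pattern (the file names are just examples):

```shell
# Rebuild-then-rename: readers of data.xml never see a partial write.
cd "$(mktemp -d)"
printf 'old cached data\n' > data.xml            # the live cache

# Rebuild into a temporary name first...
printf 'fresh data from the web service\n' > data.xml_TMP

# ...then swap it in. The rename is atomic on the same volume, so no
# reader ever observes a partially written data.xml.
mv data.xml_TMP data.xml

cat data.xml
```

The same idea carries over to the C# code above: write to "data.xml_TMP" with a FileStream, then call File.Move (or File.Replace) once the write completes.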
Q: Emacs in Windows How do you run Emacs in Windows? What is the best flavor of Emacs to use in Windows, and where can I download it? And where is the .emacs file located? A: Well, I personally really like what I have been using since I started with Emacs, which is GNU Emacs. It looks like it is built for windows too. That link also answers your .emacs file question. Here is a place you can download it. You should probably get version 22.2 (the latest). If this is your first time, I hope you enjoy it! I know I absolutely love emacs! A: I use EmacsW32, it works great. EDIT: I now use regular GNU Emacs 24, see below. See its EmacsWiki page for details. To me, the biggest advantage is that: * *it has a version of emacsclient that starts the Emacs server if no server is running (open all your files in the same Emacs window) *it includes several useful packages such as Nxml *it has a Windows installer or you can build it from sources And concerning XEmacs, according to this post by Steve Yegge: To summarize, I've argued that XEmacs has a much lower market share, poorer performance, more bugs, much lower stability, and at this point probably fewer features than GNU Emacs. When you add it all up, it's the weaker candidate by a large margin. EDIT: I now use regular GNU Emacs 24. It also contains Nxml, can be installed or built from sources, and with this wrapper, the Emacs server starts if no server is running. Cheers! A: I run it under cygwin. That also gives me a Unix-ish environment for shelling out commands with meta-! A: I use a vanilla version of emacs. In my experience, this is very stable, simple, does everything I need, and doesn't add a bunch of bloat that I don't need. The .emacs file can be placed in C:\Users\YourName if the HOME environment variable is set. This is a great way to handle it because it works on a user basis and mimics emacs behavior on Linux. You can download the zip from any gnu software repository mirror in the emacs/windows folder. 
You want the file that is named emacs-xx.x-bin-i686-pc-mingw32.zip. There are some great instructions for configuring emacs for windows here. Basically, "installation" boils down to:
* Download emacs from a gnu mirror at emacs/windows/emacs-version-bin-i686-pc-mingw32.zip, and extract the zip to an appropriate folder. Preferably C:\emacs to avoid spaces in the filename.
* Set the HOME environment variable to C:\Users\username (or whatever you want). Make it a user-only variable (if it is username-specific). This is where your .emacs file goes.
* If you want a start menu or desktop shortcut, create a shortcut to bin/runemacs.exe.
* Add c:\emacs\emacs-xx.x\bin\ to your path (user or system), so that you can run it from the command line.
A: Note that GNU Emacs for Windows comes with two executables to start Emacs: "emacs.exe" and "runemacs.exe". The former keeps a DOS-Prompt window in the background, while the latter does not, so if you choose that distribution and want to create a shortcut, be sure to launch "runemacs.exe". Carl
A: Also, you can consider emacs-w64 for 64bit windows systems: emacs-w64: http://sourceforge.net/projects/emacsbinw64/
A: Easiest way to find where the user init file is: C-h v user-init-file Easiest way to open it is (in the scratch buffer): (find-file user-init-file) and hit C-j to eval
A: See http://www.gnu.org/software/emacs/windows/ntemacs.html. Section 2.1 describes where to get it, and section 3.5 describes where the .emacs file goes (by default, in your home directory, as specified by the HOME environment variable).
A: I've run both GNU emacs and Xemacs on windows. I used to use it as my primary editor, email client etc, but now it's "just" an editor. When I recently reinstalled to Vista I installed the latest GNU version. It works fine. So does Xemacs, but it does look like GNU have got their sh*t together so Xemacs isn't as compelling anymore.
A: I suggest you use the development version of GNU Emacs 23, which is pretty stable and due to be released relatively soon. You can get weekly binary builds from the link below. Latest GNU Emacs as a zip archive
A: I have a portable version with a .emacs configuration ready, which sets up org mode, Ido, etc. It also includes a sample org file. I think that is a better starting point for newcomers. Basically run with runemacs.bat and everything is ready. http://nd.edu/~gsong/portable_emacs.html
A: When forced to use Windows, I ... Download "Emacs for windows", and save it in some directory (henceforth referred to as EMACS_SOMEWHERE) Drop a .cmd file in "Startup" to map "My Documents" to the H: drive with subst, or if "My Documents" resides on a remote server, I use the "Map Network Drive" thing in Explorer to have "My Documents" named H:. Then I create an environment variable named HOME in Windows and give it the name of "H:\". Now I can drop my .emacs file in "My Documents" and it will be read by emacs when it launches. Then I create the H:\bin directory. Then I add "H:\bin" to my Windows "Path" environment variable. Then I create a H:\bin\emacs.cmd file. It contains one line:
@call drive:\EMACS_SOMEWHERE\emacs-23.2\bin\emacsclientw.exe --alternate-editor=c:\programs\emacs-23.2\bin\runemacs.exe -n -c %*
This is a fair bit of work, but it will enable me to run one and the same Emacs from either a windows command prompt or from a cygwin command prompt, provided that /cygdrive/h/bin is added to my cygwin PATH variable. Haven't used this setup for a while but as I recall, when I call the emacs.cmd with a new file over and over, they all end up being buffers in one and the same Emacs session.
A: I've encountered this problem, and discovered the fault (at least in my case) to be the existence of c:\site-lisp\site-start.el, a file that was created when EmacsW32 was installed, and which was not removed when I uninstalled it.
(Vanilla GNU Emacs for Windows has c:\site-lisp in its load-path, and will try to load this file, which somehow winds up triggering that error.) Solution: removing that whole directory (c:\site-lisp) worked for me, but you should just be able to remove the site-start.el file.
A: The best place to start, to get an MS Windows binary for GNU Emacs is ... GNU Emacs: http://www.gnu.org/software/emacs/ (Oh, and how did I find that URL? From the Emacs manual, node Distribution. If you have access to Emacs anywhere, that's the place to go for such information.) On that page you will see everything you need to know about obtaining Emacs. In particular, you will find a section called Obtaining/Downloading GNU Emacs, which links to a nearby GNU mirror. Clicking that link takes you to a page of links that download all Emacs releases since release 21. More importantly here, on that page of links you will also see a directory link named windows. Click that to get a page of links to Emacs binaries (executables) for MS Windows. That is the page you want. Knowing the above information can help when you need to find the page again, if you haven't bookmarked it. But here is the final URL, directly: http://mirror.anl.gov/pub/gnu/emacs/windows/
A: There was https://bitbucket.org/Haroogan/emacs-for-windows with the latest Emacs 25, but the whole page has been removed. The benefit of this build and the emacs-w64 above is that they come with jpg, png, tiff DLLs as well as an lxml DLL, which is needed for the new eww web browser.
A: I prefer to run Windows 10 + VcXsrv + Emacs 25 client in WSL. Emacs is my shell.
A: To access the .emacs file for your profile the easiest way is to open up emacs. Then do C-x C-f, type in ~USERNAME/.emacs (or you can use init.el or one of the other flavours). Type your stuff into the file and C-x C-s (I think) to save it.
The actual file is located (in Windows XP) in c:\Documents and Settings\USERNAME\.emacs (or .emacs.d\, whatever you named the file), or the equivalent spelling/location on your system.
A: You can download GNU Emacs NT from here direct. It works fine in windows, make sure you create a shortcut to the runemacs.exe file rather than the emacs.exe file so it doesn't show a command prompt before opening! XEmacs is less stable than GNU Emacs, and a lot of extensions are specifically written for GNU. I would recommend GNU > X. You can place the .emacs file in the root of the drive it's installed on. Not sure whether you can add it elsewhere too...
A: I'm using EmacsW32, I only have one problem with it really: https://stackoverflow.com/questions/3625738/comint-previous-matching-input-in-emacsw32-is-not-interactive
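If you have just set HOME as described in the answers above, a minimal ~/.emacs like the one below (the settings are only illustrative, not required) makes it easy to confirm the file is actually being loaded:

```lisp
;; Minimal ~/.emacs -- place it in the directory HOME points to.
(setq inhibit-startup-screen t)   ; skip the splash screen
(tool-bar-mode -1)                ; hide the toolbar
;; Prints the resolved init-file path in the *Messages* buffer,
;; confirming Emacs found this file via HOME.
(message "Loaded init file: %s" user-init-file)
```

After restarting Emacs, C-h v user-init-file (mentioned above) should show the same path the message printed.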
Q: Find item in WPF ComboBox I know in ASP.NET I can get an item from a DropDownList by using DropDownList1.Items.FindByText Is there a similar method I can use in WPF for a ComboBox? Here's the scenario. I have a table called RestrictionFormat that contains a column called RestrictionType, the type is a foreign key to a table that stores these values. In my editor application I'm writing, when the user selects the RestrictionFormat from a ComboBox (this works fine), I'm pulling up the details for editing. I'm using a second ComboBox to make sure the user only selects one RestrictionType when editing. I already have the second combobox bound property from the RestrictionType table, but I need to change the selected index on it to match the value specified in the record. Does this make sense? A: In WPF you can use FindName method. XAML: <ComboBox Name="combo"> <ComboBoxItem Name="item1" >1</ComboBoxItem> <ComboBoxItem Name="item2">2</ComboBoxItem> <ComboBoxItem Name="item3">3</ComboBoxItem> </ComboBox> Code-behind file item1.Content = "New content"; // Reference combo box item by name ComboBoxItem item = (ComboBoxItem)this.combo.FindName("item1"); // Using FindName method To find item by its content you can use UI automation. A: Can you use ItemContainerGenerator? ItemContainerGenerator contains a ContainerFromItem method that takes an object parameter.
If you have a reference to the full object that your comboBox contains (or a way to reconstruct it), you can use the following: ComboBoxItem item = (ComboBoxItem)myComboBox.ItemContainerGenerator.ContainerFromItem(myObject); A: Instead of trying to bind the SelectedIndex, why don't you just bind the SelectedItem in the ComboBox to the value in the record? In other words, set the DataContext of the ComboBox (or its parent) to the selected 'record' and bind the SelectedItem on the ComboBox to an exposed property on the 'record'. It may help if you could provide some code snippets, or extra details, so that responses can be more specific and refer to the variables and types you are using in both the source record and the ComboBox which you have populated. A: You can retrieve combobox items in two ways: By item: ComboBoxItem item = (ComboBoxItem) control.ItemContainerGenerator.ContainerFromItem(control.SelectedItem); By index: ComboBoxItem item = (ComboBoxItem) control.ItemContainerGenerator.ContainerFromIndex(1); A: Can you give some context as to what exactly you are trying to do? What objects do you put in your ComboBox, and using which method? (Are you setting or binding the ItemsSource property?) Why do you need to look up an item by its "text"? The most usual usage in WPF is to bind the SelectedItem property to something else so you can retrieve/set the selected entry using your representation. Is there a specific requirement for which you need to find a specific item in the list? Worst case, you can perform the search on the collection to which you bind your ComboBox using LINQ to Objects. Do not mistake the ComboBoxItem (that is, the element generated for you behind the scenes by WPF when you bind ItemsSource) with the SelectedItem, which is the actual object in the collection you bind to. That usually is the source of most problems with WPF when you are not used to it. There are precious few cases when you need to find the actual ComboBoxItem.
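The SelectedItem suggestion above can be sketched roughly as follows. This is only an illustration: the RestrictionTypes collection and RestrictionType property are hypothetical names, assuming both the lookup list and the record's current type are exposed on the ComboBox's DataContext.

```xml
<!-- Hypothetical sketch: ItemsSource lists all RestrictionType rows;
     SelectedItem is bound two-way to the type stored on the record. -->
<ComboBox ItemsSource="{Binding Path=RestrictionTypes}"
          SelectedItem="{Binding Path=RestrictionType, Mode=TwoWay}"
          DisplayMemberPath="Name" />
```

With something like this in place there is no need to set SelectedIndex by hand; WPF keeps the selection in sync with the bound property, provided the item in the list and the record's value are the same instance (or compare equal).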
{ "language": "en", "url": "https://stackoverflow.com/questions/41304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Checking if a list is empty with LINQ What's the "best" (taking both speed and readability into account) way to determine if a list is empty? Even if the list is of type IEnumerable<T> and doesn't have a Count property. Right now I'm tossing up between this: if (myList.Count() == 0) { ... } and this: if (!myList.Any()) { ... } My guess is that the second option is faster, since it'll come back with a result as soon as it sees the first item, whereas the first option (for an IEnumerable) will need to visit every item to return the count. That being said, does the second option look as readable to you? Which would you prefer? Or can you think of a better way to test for an empty list? Edit @lassevk's response seems to be the most logical, coupled with a bit of runtime checking to use a cached count if possible, like this: public static bool IsEmpty<T>(this IEnumerable<T> list) { if (list is ICollection<T>) return ((ICollection<T>)list).Count == 0; return !list.Any(); }
A: I just wrote up a quick test, try this: IEnumerable<Object> myList = new List<Object>(); Stopwatch watch = new Stopwatch(); int x; watch.Start(); for (var i = 0; i <= 1000000; i++) { if (myList.Count() == 0) x = i; } watch.Stop(); Stopwatch watch2 = new Stopwatch(); watch2.Start(); for (var i = 0; i <= 1000000; i++) { if (!myList.Any()) x = i; } watch2.Stop(); Console.WriteLine("myList.Count() = " + watch.ElapsedMilliseconds.ToString()); Console.WriteLine("myList.Any() = " + watch2.ElapsedMilliseconds.ToString()); Console.ReadLine(); The second is almost three times slower :) Trying the stopwatch test again with a Stack or array or other scenarios it really depends on the type of list it seems - because they prove Count to be slower. So I guess it depends on the type of list you're using! (Just to point out, I put 2000+ objects in the List and count was still faster, opposite with other types) A: List.Count is O(1) according to Microsoft's documentation: http://msdn.microsoft.com/en-us/library/27b47ht3.aspx so just use List.Count == 0 it's much faster than a query This is because it has a data member called Count which is updated any time something is added or removed from the list, so when you call List.Count it doesn't have to iterate through every element to get it, it just returns the data member. A: The second option is much quicker if you have multiple items. * *Any() returns as soon as 1 item is found. *Count() has to keep going through the entire list. For instance suppose the enumeration had 1000 items. * *Any() would check the first one, then return true. *Count() would return 1000 after traversing the entire enumeration. This is potentially worse if you use one of the predicate overrides - Count() still has to check every single item, even it there is only one match. You get used to using the Any one - it does make sense and is readable. One caveat - if you have a List, rather than just an IEnumerable then use that list's Count property. 
A: @Konrad what surprises me is that in my tests, I'm passing the list into a method that accepts IEnumerable<T>, so the runtime can't optimize it by calling the Count() extension method for IList<T>. I can only assume that the Count() extension method for IEnumerable is doing something like this: public static int Count<T>(this IEnumerable<T> list) { if (list is IList<T>) return ((IList<T>)list).Count; int i = 0; foreach (var t in list) i++; return i; } ... in other words, a bit of runtime optimization for the special case of IList<T>. /EDIT @Konrad +1 mate - you're right about it more likely being on ICollection<T>. A: I would make one small addition to the code you seem to have settled on: check also for ICollection, as this is implemented even by some non-obsolete generic classes as well (i.e., Queue<T> and Stack<T>). I would also use as instead of is as it's more idiomatic and has been shown to be faster. public static bool IsEmpty<T>(this IEnumerable<T> list) { if (list == null) { throw new ArgumentNullException("list"); } var genericCollection = list as ICollection<T>; if (genericCollection != null) { return genericCollection.Count == 0; } var nonGenericCollection = list as ICollection; if (nonGenericCollection != null) { return nonGenericCollection.Count == 0; } return !list.Any(); } A: You could do this: public static Boolean IsEmpty<T>(this IEnumerable<T> source) { if (source == null) return true; // or throw an exception return !source.Any(); } Edit: Note that simply using the .Count method will be fast if the underlying source actually has a fast Count property. A valid optimization above would be to detect a few base types and simply use the .Count property of those, instead of the .Any() approach, but then fall back to .Any() if no guarantee can be made. A: Ok, so what about this one? 
public static bool IsEmpty<T>(this IEnumerable<T> enumerable) { return !enumerable.GetEnumerator().MoveNext(); } EDIT: I've just realized that someone has sketched this solution already. It was mentioned that the Any() method will do this, but why not do it yourself? Regards A: Another idea: if(enumerable.FirstOrDefault() != null) However I like the Any() approach more. A: This was critical to get this to work with Entity Framework: var genericCollection = list as ICollection<T>; if (genericCollection != null) { //your code } A: If I check with Count(), LINQ executes a "SELECT COUNT(*).." against the database, but I needed to check whether the results contained any data, so I resolved this by introducing FirstOrDefault() instead of Count(): Before var cfop = from tabelaCFOPs in ERPDAOManager.GetTable<TabelaCFOPs>() if (cfop.Count() > 0) { var itemCfop = cfop.First(); //.... } After var cfop = from tabelaCFOPs in ERPDAOManager.GetTable<TabelaCFOPs>() var itemCfop = cfop.FirstOrDefault(); if (itemCfop != null) { //.... } A: private bool NullTest<T>(T[] list, string attribute) { bool status = false; if (list != null) { int flag = 0; var property = GetProperty(list.FirstOrDefault(), attribute); foreach (T obj in list) { if (property.GetValue(obj, null) == null) flag++; } status = flag == 0 ?
true : false; } return status; } public PropertyInfo GetProperty<T>(T obj, string str) { Expression<Func<T, string, PropertyInfo>> GetProperty = (TypeObj, Column) => TypeObj.GetType().GetProperty(TypeObj .GetType().GetProperties().ToList() .Find(property => property.Name .ToLower() == Column .ToLower()).Name.ToString()); return GetProperty.Compile()(obj, str); } A: Here's my implementation of Dan Tao's answer, allowing for a predicate: public static bool IsEmpty<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate) { if (source == null) throw new ArgumentNullException(); if (IsCollectionAndEmpty(source)) return true; return !source.Any(predicate); } public static bool IsEmpty<TSource>(this IEnumerable<TSource> source) { if (source == null) throw new ArgumentNullException(); if (IsCollectionAndEmpty(source)) return true; return !source.Any(); } private static bool IsCollectionAndEmpty<TSource>(IEnumerable<TSource> source) { var genericCollection = source as ICollection<TSource>; if (genericCollection != null) return genericCollection.Count == 0; var nonGenericCollection = source as ICollection; if (nonGenericCollection != null) return nonGenericCollection.Count == 0; return false; } A: This extension method works for me: public static bool IsEmpty<T>(this IEnumerable<T> enumerable) { try { enumerable.First(); return false; } catch (InvalidOperationException) { return true; } } A: myList.ToList().Count == 0. That's all A: List<T> li = new List<T>(); (li.First().DefaultValue.HasValue) ? string.Format("{0:yyyy/MM/dd}", sender.First().DefaultValue.Value) : string.Empty;
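One caveat on the GetEnumerator().MoveNext() variant further up: IEnumerator<T> is IDisposable, and that sketch never disposes the enumerator. A hedged version of the same idea that disposes properly and still uses a cached count when one is available:

```csharp
// Sketch only: combines the ICollection<T> fast path with a
// properly disposed enumerator for the general IEnumerable<T> case.
public static bool IsEmpty<T>(this IEnumerable<T> source)
{
    if (source == null)
        throw new ArgumentNullException("source");

    var collection = source as ICollection<T>;
    if (collection != null)
        return collection.Count == 0;   // O(1) for List<T>, T[], etc.

    using (var enumerator = source.GetEnumerator())
    {
        return !enumerator.MoveNext();  // stops after the first element
    }
}
```

The using block matters for sources that hold resources open while enumerating (database readers, file streams, and so on).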
{ "language": "en", "url": "https://stackoverflow.com/questions/41319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "129" }
Q: Working on a Visual Studio Project with multiple users? I just wonder what the best approach is to have multiple users work on a Project in Visual Studio 2005 Professional. We got a Solution with multiple Class Libraries, but when everyone opens the solution, we keep getting the "X was modified, Reload/Discard?" prompt all the time. Just opening one project is an obvious alternative, but I find it harder to use as you can't just see some of the other classes in other projects that way. Are there any Guidelines for Team Development with VS2005 Pro? Edit: Thanks. The current environment is a bit limited in the sense there is only 1 PC with RDP Connection, but that will change in the future. Marking the first answer as Accepted, but they are all good :) A: What you need is source control. You should definitely not open the same files over the network on multiple machines. For one thing, Visual Studio has safeguards in place to prevent you from modifying certain files during a build, but it has none of that that will prevent others from modifying the same files over the network. By setting up source control, each developer will have a separate copy of the files locally on his or her developer machine, and periodically communicate with the source control system to check in/commit changes. After that, other developers can ask for the latest updates when they're ready to retrieve them. A: Use source control to keep a central repository of all your code. Then each user checks out their own copy of the source code and works locally. Then submits only the code that changed. https://en.wikipedia.org/wiki/Version_control A: A number of people have recommended using source control and I totally agree. However you also need do the following. * *Exclude your personal options files from the repository (eg your .suo files) *Exclude your App.config files from the repository. - Not entirely but you need to have a Template.App.config. 
You commit that instead, and only copy your App.config into the Template.App.config when you make structural changes. That way each user has their own individual config for testing. There are probably some other files worth excluding (obj directories and so forth) but that's all I can think of right now. Peter A: This might sound snide, but if you're opening up the solution from a shared location then you're doing something wrong. If that's the case then you should start using source control (something like Subversion) and have everyone check out a copy of the project to work on. However if you're already using source control, then it might be a symptom of having the wrong things checked in. I find that you only need the sln, and the vcproj under source control. Otherwise I don't know... A: You should definitely, definitely be working with source control! This will help stop the collisions that are occurring. Also, if you are making changes to the shared projects so often that it is a problem, then also ensure that all code is tested before getting checked in (otherwise they may bust someone else's build), but make sure they check in often (or time gained from not dealing with prompts will be lost in merging conflicts) :)
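As a concrete sketch of the exclusion advice above, an ignore list for a VS2005 solution might look something like this (these are typical names, not taken from the question — adjust to your own project layout):

```
*.suo
*.user
*.ncb
bin
obj
App.config
```

With Subversion, for example, a list like this could be applied with something along the lines of `svn propset svn:ignore -F ignorelist.txt .` (the file name here is hypothetical), while Template.App.config stays committed as described.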
{ "language": "en", "url": "https://stackoverflow.com/questions/41320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to detect the presence of a default recording device in the system? How do I detect if the system has a default recording device installed? I bet this can be done through some calls to the Win32 API, does anyone have any experience with this? I'm talking about doing this through code, not by opening the control panel and taking a look under sound options. A: Using the DirectX SDK, you can call DirectSoundCaptureEnumerate, which will call your DSEnumCallback function for each DirectSoundCapture device on the system. The first parameter passed to your DSEnumCallback is an LPGUID, which is the "Address of the GUID that identifies the device being enumerated, or NULL for the primary device". If all you need to do is find out if a recording device is present (I don't think this is good enough if you really need to know the default device), you can use waveInGetNumDevs: #include <tchar.h> #include <windows.h> #include "mmsystem.h" int _tmain( int argc, wchar_t *argv[] ) { UINT deviceCount = waveInGetNumDevs(); if ( deviceCount > 0 ) { for ( UINT i = 0; i < deviceCount; i++ ) { WAVEINCAPSW waveInCaps; waveInGetDevCapsW( i, &waveInCaps, sizeof( WAVEINCAPSW ) ); // do some stuff with waveInCaps... } } return 0; } A: There is an Open Source Audio API called PortAudio that has a method you could use. I think the method is called Pa_GetDeviceInfo() or something. A: The win32 api has a function called waveInGetNumDevs for it.
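The PortAudio suggestion above might look something like this in C, using the PortAudio v19 API. This is only a sketch, assuming the library is installed (build with -lportaudio); Pa_GetDefaultInputDevice returns paNoDevice when no default recording device exists:

```c
/* Sketch only: check for a default recording (input) device via PortAudio. */
#include <stdio.h>
#include <portaudio.h>

int main(void)
{
    if (Pa_Initialize() != paNoError) {
        fprintf(stderr, "PortAudio failed to initialize\n");
        return 1;
    }

    PaDeviceIndex input = Pa_GetDefaultInputDevice();
    if (input == paNoDevice) {
        printf("No default recording device present.\n");
    } else {
        const PaDeviceInfo *info = Pa_GetDeviceInfo(input);
        printf("Default recording device: %s\n", info->name);
    }

    Pa_Terminate();
    return 0;
}
```

Unlike the waveInGetNumDevs approach, this asks specifically for the default input device rather than just counting devices.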
{ "language": "en", "url": "https://stackoverflow.com/questions/41330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Unit testing in Xcode 3.1 I read the question on 'The best way to unit test Objective-C' and followed the instructions, but no matter what I do, the Unit tests do not run. Actually the entire program does not run, I get the following message. dyld: Library not loaded: @rpath/SenTestingKit.framework/Versions/A/SenTestingKit Referenced from /Users/garethlewis/work/objc/UnitTesting/build/Debug/UnitTesting Reason: image not found I have set the DYLD_FALLBACK_FRAMEWORK_PATH variable, and also the XCInjectBundle as well as the DYLD_INSERT_LIBRARIES and added the variable -SenTest All. I can't have the only installation of Xcode 3.1 that Unit testing fails on. Can someone who has managed to get Unit Testing on Xcode 3.1 working give some details on what needs to be done? It would help so much, with what I am trying to do. A: You don't need to do this stuff to just run your tests. If you're writing tests for an application, you should just need to set the Test Host and Bundle Loader build settings for your unit test bundle target and they will be run as part of your build. If you're writing tests for a framework you don't even need to do that, just make sure your test bundle links against your framework. I assume you're actually talking about debugging your tests, not just running them. If so, it's important to give us the following information: * *what kind of tests — application or framework — you're trying to debug *what environment variables you set, and what values you set them to *what arguments you set (-SenTest All should be an argument, not an environment variable) *what the full error shown in your debug console is, not just the specific failure That will help diagnose what's going on. At first glance, it looks like you might have a typo in your DYLD_FALLBACK_FRAMEWORK_PATH because that determines where dyld will look for the SenTestingKit.framework binary if @rpath cannot be resolved. Knowing what it's set to will probably help. (PS - It's Xcode.)
{ "language": "en", "url": "https://stackoverflow.com/questions/41337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Dynamic programming with WCF Has anybody got any kind of experience with dynamic programming using WCF? By dynamic programming I mean runtime consumption of WSDL's. I have found one blog entry/tool: http://blogs.msdn.com/vipulmodi/archive/2006/11/16/dynamic-programming-with-wcf.aspx Has anybody here found good tools for this? A: This is one of the weirder aspects of WCF. You can dynamically create a channelfactory, but only with a known type. I came up with a solution that is not perfect, but does work: Create an interface, "IFoo" which contains a single method, say Execute(). In your ESB, dynamically create a ChannelFactory<IFoo> for the endpoint that you want to connect to. Set the connection properties (URI, etc.). Now, you can attach services dynamically to your ESB, provided that they always implement the "IFoo" interface. A: I have done this a long time ago with SOAP web services. There was a tool on GotDotNet which I think has become Web Services Studio Express, that had code which inspected/parsed a WSDL file and allowed you to call it. I think the assumption is that the WSDL is known at the time of client creation, and you don't need to be hooked up at runtime. If you inspect the WSDL at runtime you still need to have some sort of logic to decide how to generate the proxy. Why can you not consume the WSDL before runtime? Web Services are supposed to be fairly static with an interface that doesn't change once published. You can use .NET CodeDom to generate code to execute and use the web service described by the WSDL. The WSDL can be parsed using the standard .NET XML classes. A: I am actually considering making a small ESB, where a user can add a webservice to route to at run time. So I can not add WSDLs statically.
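The IFoo/ChannelFactory approach described above might be sketched like this. The contract, binding, and address here are all hypothetical; in the ESB scenario the URI would come from configuration at runtime:

```csharp
[ServiceContract]
public interface IFoo
{
    [OperationContract]
    string Execute(string request);
}

// Build a channel to an endpoint that is only known at runtime.
// The URI below is a placeholder read from wherever the ESB stores it.
var factory = new ChannelFactory<IFoo>(
    new BasicHttpBinding(),
    new EndpointAddress("http://example.com/SomeService"));

IFoo proxy = factory.CreateChannel();
string result = proxy.Execute("payload");

((IClientChannel)proxy).Close();
factory.Close();
```

The trade-off noted in the answer applies: every attached service must implement the shared IFoo contract, which is what makes the factory's type parameter known at compile time.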
{ "language": "en", "url": "https://stackoverflow.com/questions/41355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How can I make my applications scale well? In general, what kinds of design decisions help an application scale well? (Note: Having just learned about Big O Notation, I'm looking to gather more principles of programming here. I've attempted to explain Big O Notation by answering my own question below, but I want the community to improve both this question and the answers.) Responses so far 1) Define scaling. Do you need to scale for lots of users, traffic, objects in a virtual environment? 2) Look at your algorithms. Will the amount of work they do scale linearly with the actual amount of work - i.e. number of items to loop through, number of users, etc? 3) Look at your hardware. Is your application designed such that you can run it on multiple machines if one can't keep up? Secondary thoughts 1) Don't optimize too much too soon - test first. Maybe bottlenecks will happen in unforseen places. 2) Maybe the need to scale will not outpace Moore's Law, and maybe upgrading hardware will be cheaper than refactoring. A: Ok, so you've hit on a key point in using the "big O notation". That's one dimension that can certainly bite you in the rear if you're not paying attention. There are also other dimensions at play that some folks don't see through the "big O" glasses (but if you look closer they really are). A simple example of that dimension is a database join. There are "best practices" in constructing, say, a left inner join which will help to make the sql execute more efficiently. If you break down the relational calculus or even look at an explain plan (Oracle) you can easily see which indexes are being used in which order and if any table scans or nested operations are occurring. The concept of profiling is also key. You have to be instrumented thoroughly and at the right granularity across all the moving parts of the architecture in order to identify and fix any inefficiencies. 
Say for example you're building a 3-tier, multi-threaded, MVC2 web-based application with liberal use of AJAX and client side processing along with an OR Mapper between your app and the DB. A simplistic linear single request/response flow looks like: browser -> web server -> app server -> DB -> app server -> XSLT -> web server -> browser JS engine execution & rendering You should have some method for measuring performance (response times, throughput measured in "stuff per unit time", etc.) in each of those distinct areas, not only at the box and OS level (CPU, memory, disk i/o, etc.), but specific to each tier's service. So on the web server you'll need to know all the counters for the web server your're using. In the app tier, you'll need that plus visibility into whatever virtual machine you're using (jvm, clr, whatever). Most OR mappers manifest inside the virtual machine, so make sure you're paying attention to all the specifics if they're visible to you at that layer. Inside the DB, you'll need to know everything that's being executed and all the specific tuning parameters for your flavor of DB. If you have big bucks, BMC Patrol is a pretty good bet for most of it (with appropriate knowledge modules (KMs)). At the cheap end, you can certainly roll your own but your mileage will vary based on your depth of expertise. Presuming everything is synchronous (no queue-based things going on that you need to wait for), there are tons of opportunities for performance and/or scalability issues. But since your post is about scalability, let's ignore the browser except for any remote XHR calls that will invoke another request/response from the web server. So given this problem domain, what decisions could you make to help with scalability? * *Connection handling. This is also bound to session management and authentication. That has to be as clean and lightweight as possible without compromising security. The metric is maximum connections per unit time. 
*Session failover at each tier. Necessary or not? We assume that each tier will be a cluster of boxes horizontally under some load balancing mechanism. Load balancing is typically very lightweight, but some implementations of session failover can be heavier than desired. Also whether you're running with sticky sessions can impact your options deeper in the architecture. You also have to decide whether to tie a web server to a specific app server or not. In the .NET remoting world, it's probably easier to tether them together. If you use the Microsoft stack, it may be more scalable to do 2-tier (skip the remoting), but you have to make a substantial security tradeoff. On the java side, I've always seen it at least 3-tier. No reason to do it otherwise. *Object hierarchy. Inside the app, you need the cleanest possible, lightest weight object structure possible. Only bring the data you need when you need it. Viciously excise any unnecessary or superfluous getting of data. *OR mapper inefficiencies. There is an impedance mismatch between object design and relational design. The many-to-many construct in an RDBMS is in direct conflict with object hierarchies (person.address vs. location.resident). The more complex your data structures, the less efficient your OR mapper will be. At some point you may have to cut bait in a one-off situation and do a more...uh...primitive data access approach (Stored Procedure + Data Access Layer) in order to squeeze more performance or scalability out of a particularly ugly module. Understand the cost involved and make it a conscious decision. *XSL transforms. XML is a wonderful, normalized mechanism for data transport, but man can it be a huge performance dog! Depending on how much data you're carrying around with you and which parser you choose and how complex your structure is, you could easily paint yourself into a very dark corner with XSLT. 
Yes, academically it's a brilliantly clean way of doing a presentation layer, but in the real world there can be catastrophic performance issues if you don't pay particular attention to this. I've seen a system consume over 30% of transaction time just in XSLT. Not pretty if you're trying to ramp up 4x the user base without buying additional boxes. *Can you buy your way out of a scalability jam? Absolutely. I've watched it happen more times than I'd like to admit. Moore's Law (as you already mentioned) is still valid today. Have some extra cash handy just in case. *Caching is a great tool to reduce the strain on the engine (increasing speed and throughput is a handy side-effect). It comes at a cost though in terms of memory footprint and complexity in invalidating the cache when it's stale. My decision would be to start completely clean and slowly add caching only where you decide it's useful to you. Too many times the complexities are underestimated and what started out as a way to fix performance problems turns out to cause functional problems. Also, back to the data usage comment. If you're creating gigabytes worth of objects every minute, it doesn't matter if you cache or not. You'll quickly max out your memory footprint and garbage collection will ruin your day. So I guess the takeaway is to make sure you understand exactly what's going on inside your virtual machine (object creation, destruction, GCs, etc.) so that you can make the best possible decisions. Sorry for the verbosity. Just got rolling and forgot to look up. Hope some of this touches on the spirit of your inquiry and isn't too rudimentary a conversation. A: Well there's this blog called High Scalability that contains a lot of information on this topic. Some useful stuff. A: Often the most effective way to do this is by a well thought through design where scaling is a part of it. Decide what scaling actually means for your project.
Is it an infinite number of users, is it being able to handle a slashdotting on a website, or is it development cycles? Use this to focus your development efforts. A: Jeff and Joel discuss scaling in the Stack Overflow Podcast #19. A: The only thing I would say is write your application so that it can be deployed on a cluster from the very start. Anything above that is a premature optimisation. Your first job should be getting enough users to have a scaling problem. Build the code as simple as you can first, then profile the system second and optimise only when there is an obvious performance problem. Often the figures from profiling your code are counter-intuitive; the bottle-necks tend to reside in modules you didn't think would be slow. Data is king when it comes to optimisation. If you optimise the parts you think will be slow, you will often optimise the wrong things. A: One good idea is to determine how much work each additional task creates. This can depend on how the algorithm is structured. For example, imagine you have some virtual cars in a city. At any moment, you want each car to have a map showing where all the cars are. One way to approach this would be: for each car { determine my position; for each car { add my position to this car's map; } } This seems straightforward: look at the first car's position, add it to the map of every other car. Then look at the second car's position, add it to the map of every other car. Etc. But there is a scalability problem. When there are 2 cars, this strategy takes 4 "add my position" steps; when there are 3 cars, it takes 9 steps. For each "position update," you have to cycle through the whole list of cars - and every car needs its position updated. Ignoring how many other things must be done to each car (for example, it may take a fixed number of steps to calculate the position of an individual car), for N cars, it takes N² "visits to cars" to run this algorithm. This is no problem when you've got 5 cars and 25 steps.
But as you add cars, you will see the system bog down. 100 cars will take 10,000 steps, and 101 cars will take 10,201 steps! A better approach would be to undo the nesting of the for loops. for each car { add my position to a list; } for each car { give me an updated copy of the master list; } With this strategy, the number of steps is a multiple of N, not of N². So 100 cars will take 100 times the work of 1 car - NOT 10,000 times the work. This concept is sometimes expressed in "big O notation" - the number of steps needed is "big O of N" or "big O of N²." Note that this concept is only concerned with scalability - not optimizing the number of steps for each car. Here we don't care if it takes 5 steps or 50 steps per car - the main thing is that N cars take (X * N) steps, not (X * N²). A: FWIW, most systems will scale most effectively by ignoring this until it's a problem - Moore's law is still holding, and unless your traffic is growing faster than Moore's law does, it's usually cheaper to just buy a bigger box (at $2 or $3K a pop) than to pay developers. That said, the most important place to focus is your data tier; that is the hardest part of your application to scale out, as it usually needs to be authoritative, and clustered commercial databases are very expensive - the open source variations are usually very tricky to get right. If you think there is a high likelihood that your application will need to scale, it may be intelligent to look into systems like memcached or map reduce relatively early in your development.
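The car-map example above can be made concrete with a small sketch (Python here purely for illustration; only the shape of the work matters, not the language):

```python
def update_maps_quadratic(positions):
    """O(N^2): every car pushes its position into every car's map."""
    steps = 0
    maps = [dict() for _ in positions]
    for car_id, pos in enumerate(positions):
        for car_map in maps:
            car_map[car_id] = pos   # one "visit to a car"
            steps += 1
    return steps


def update_maps_linear(positions):
    """O(N): build one master list, then hand every car a reference to it."""
    steps = 0
    master = {}
    for car_id, pos in enumerate(positions):
        master[car_id] = pos        # first pass: N steps
        steps += 1
    maps = []
    for _ in positions:
        maps.append(master)         # second pass: N steps
        steps += 1
    return steps


# 100 cars: 10,000 visits for the nested version, 200 for the flattened one.
print(update_maps_quadratic(list(range(100))))  # 10000
print(update_maps_linear(list(range(100))))     # 200
```

Doubling the number of cars quadruples the work in the first version but only doubles it in the second, which is exactly the difference between O(N²) and O(N).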
{ "language": "en", "url": "https://stackoverflow.com/questions/41367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Asking a Generic Method to Throw Specific Exception Type on FAIL Right, I know I am totally going to look an idiot with this one, but my brain is just not kicking in to gear this morning. I want to have a method where I can say "if it goes bad, come back with this type of Exception", right? For example, something like (and this doesn't work): static ExType TestException<ExType>(string message) where ExType:Exception { Exception ex1 = new Exception(); ExType ex = new Exception(message); return ex; } Now whats confusing me is that we KNOW that the generic type is going to be of an Exception type due to the where clause. However, the code fails because we cannot implicitly cast Exception to ExType. We cannot explicitly convert it either, such as: static ExType TestException<ExType>(string message) where ExType:Exception { Exception ex1 = new Exception(); ExType ex = (ExType)(new Exception(message)); return ex; } As that fails too.. So is this kind of thing possible? I have a strong feeling its going to be real simple, but I am having a tough day with the old noggin, so cut me some slack :P Update Thanks for the responses guys, looks like it wasn't me being a complete idiot! ;) OK, so Vegard and Sam got me on to the point where I could instantiate the correct type, but then obviously got stuck because the message param is read-only following instantiation. Matt hit the nail right on the head with his response, I have tested this and all works fine. Here is the example code: static ExType TestException<ExType>(string message) where ExType:Exception, new () { ExType ex = (ExType)Activator.CreateInstance(typeof(ExType), message); return ex; } Sweet! :) Thanks guys! A: You can almost do it like this: static void TestException<E>(string message) where E : Exception, new() { var e = new E(); e.Message = message; throw e; } However, that doesn't compile because Exception.Message is read only. 
It can only be assigned by passing it to the constructor, and there's no way to constrain a generic type with something other than a default constructor. I think you'd have to use reflection (Activator.CreateInstance) to "new up" the custom exception type with the message parameter, like this: static void TestException<E>(string message) where E : Exception { throw Activator.CreateInstance(typeof(E), message) as E; } Edit: Oops, just realised you're wanting to return the exception, not throw it. The same principle applies, so I'll leave my answer as-is with the throw statements. A: The only issue with the solution is that it is possible to create a subclass of Exception which does not implement a constructor with a single string parameter, so a MissingMethodException might be thrown. static E TestException<E>(string message) where E : Exception, new() { try { return Activator.CreateInstance(typeof(E), message) as E; } catch (MissingMethodException) { return new E(); } } A: I have been instantiating inline the type of exception I want to throw, like this: if (ItemNameIsValid(ItemName, out errorMessage)) throw new KeyNotFoundException("Invalid name '" + ItemName + "': " + errorMessage); if (null == MyArgument) throw new ArgumentNullException("MyArgument is null"); A: Have you tried, instead: static T TestException<Exception>(string message) {} because I have a feeling that putting in the generic constraint is not necessary as all throwable exceptions must inherit from System.Exception anyway. Remember that generics do accept inherited types. A: I think, seeing as all exceptions should have a parameterless constructor and a Message property, the following should work: static ExType TestException<ExType>(string message) where ExType:Exception { ExType ex = new ExType(); ex.Message = message; return ex; } Edit: OK, Message is read only, so you'll have to hope the class implements the Exception(string) constructor instead. 
static ExType TestException<ExType>(string message) where ExType : Exception { return (ExType)Activator.CreateInstance(typeof(ExType), message); }
{ "language": "en", "url": "https://stackoverflow.com/questions/41397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How can I wrap a function with variable length arguments? I am looking to do this in C/C++. I came across Variable Length Arguments, but this suggests a solution with Python and C using libffi. Now, if I want to wrap the printf function with myprintf, I do it like below: void myprintf(char* fmt, ...) { va_list args; va_start(args, fmt); printf(fmt, args); va_end(args); } int _tmain(int argc, _TCHAR* argv[]) { int a = 9; int b = 10; char v = 'C'; myprintf("This is a number: %d and \nthis is a character: %c and \n another number: %d\n", a, v, b); return 0; } But the results are not as expected! This is a number: 1244780 and this is a character: h and another number: 29953463 What did I miss? A: I am also unsure what you mean by pure. In C++ we use: #include <cstdarg> #include <cstdio> class Foo { void Write(const char* pMsg, ...); }; void Foo::Write(const char* pMsg, ...) { char buffer[4096]; std::va_list arg; va_start(arg, pMsg); std::vsnprintf(buffer, 4096, pMsg, arg); va_end(arg); ... } A: The problem is that you cannot use 'printf' with a va_list. You must use vprintf if you are using variable argument lists: vprintf, vsprintf, vfprintf, etc. (there are also 'safe' versions in Microsoft's C runtime that will prevent buffer overruns, etc.) Your sample works as follows: void myprintf(char* fmt, ...) { va_list args; va_start(args, fmt); vprintf(fmt, args); va_end(args); } int _tmain(int argc, _TCHAR* argv[]) { int a = 9; int b = 10; char v = 'C'; myprintf("This is a number: %d and \nthis is a character: %c and \n another number: %d\n", a, v, b); return 0; } A: Actually, there's a way to call a function that doesn’t have a va_list version from a wrapper. The idea is to use assembler, do not touch arguments on the stack, and temporarily replace the function return address. An example for Visual C x86. 
call addr_printf calls printf(): __declspec( thread ) static void* _tls_ret; static void __stdcall saveret(void *retaddr) { _tls_ret = retaddr; } static void* __stdcall _getret() { return _tls_ret; } __declspec(naked) static void __stdcall restret_and_return_int(int retval) { __asm { call _getret mov [esp], eax ; /* replace current retaddr with saved */ mov eax, [esp+4] ; /* retval */ ret 4 } } static void __stdcall _dbg_printf_beg(const char *fmt, va_list args) { printf("calling printf(\"%s\")\n", fmt); } static void __stdcall _dbg_printf_end(int ret) { printf("printf() returned %d\n", ret); } __declspec(naked) int dbg_printf(const char *fmt, ...) { static const void *addr_printf = printf; /* prolog */ __asm { push ebp mov ebp, esp sub esp, __LOCAL_SIZE nop } { va_list args; va_start(args, fmt); _dbg_printf_beg(fmt, args); va_end(args); } /* epilog */ __asm { mov esp, ebp pop ebp } __asm { call saveret call addr_printf push eax push eax call _dbg_printf_end call restret_and_return_int } } A: In C++11, this is one possible solution using variadic templates: template<typename... Args> void myprintf(const char* fmt, Args... args) { std::printf(fmt, args...); } As rubenvb points out, there are trade-offs to consider. For example, you will be generating code for each instance which will lead to code bloat. A: Are you using C or C++? The next C++ version, C++0x, will support variadic templates which provide a solution to that problem. Another workaround can be achieved by clever operator overloading to achieve a syntax like this: void f(varargs va) { BOOST_FOREACH(varargs::iterator i, va) cout << *i << " "; } f(args = 1, 2, 3, "Hello"); In order to get this to work, the class varargs has to be implemented to override operator = that returns a proxy object which, in turn, overrides operator ,. However, making this variant type safe in current C++ isn't possible as far as I know since it would have to work by type erasure. A: How do you mean a pure C/C++ solution? 
The rest parameter (...) is supported cross-platform in the C runtime: va_arg, va_copy, va_end, va_start. A: void myprintf(char* fmt, ...) { va_list args; va_start(args, fmt); printf(fmt, args); // This is the fault. "vprintf(fmt, args);" // should have been used. va_end(args); } If you're just trying to call printf, there's a printf variant called vprintf that takes the va_list directly: vprintf(fmt, args);
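To make the pattern concrete, here is a minimal, self-contained sketch of such a wrapper (the function name myformat and the buffer size are illustrative, not from the original posts). It formats into a caller-supplied buffer via vsnprintf, which also makes the behaviour easy to check; the key point is that a function declared with ... must hand its va_list to one of the v-variants (vprintf, vfprintf, vsnprintf), never to printf itself.

```c
#include <stdarg.h>
#include <stdio.h>

/* Wrapper around vsnprintf: formats into a caller-supplied buffer.
   The "..." arguments are captured into a va_list, which must be
   passed to a v-variant (vsnprintf here) -- handing a va_list to
   plain printf() is exactly what produced the garbage output in
   the question. */
static int myformat(char *buf, size_t size, const char *fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    int n = vsnprintf(buf, size, fmt, args);
    va_end(args);
    return n; /* characters that would have been written, excluding the NUL */
}
```

For example, myformat(buf, sizeof buf, "number: %d, char: %c", 9, 'C') leaves "number: 9, char: C" in buf, where the original printf(fmt, args) version printed stack garbage.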
{ "language": "en", "url": "https://stackoverflow.com/questions/41400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: Working with Common/Utility Libraries At the company I work for we have a "Utility" project that is referenced by pretty much every application we build. It's got lots of things like NullHelpers, ConfigSettingHelpers, Common ExtensionMethods, etc. The way we work is that when we want to make a new project, we get the latest version of the project from source control, add it to the solution, and then reference the project from any new projects that get added to the solution. This has worked OK; however, there have been a couple of instances where people have made "breaking changes" to the common project, which work for them, but don't work for others. I've been thinking that rather than adding the common library as a project reference, perhaps we should start developing the common library as a standalone DLL and publish different versions, targeting a particular version for a particular project so that changes can be made without any risk to other projects using the common library. Having said all that, I'm interested to see how others reference or use their common libraries. A: That's exactly what we're doing. We have a Utility project which has some non-project-specific useful functions. We increase the version manually (minor), build the project in Release mode, sign it, and put it in a shared location. People then use the specific version of the library. If some useful methods are implemented in specific projects which could find their way into the main Utility project, we put them into a special helper class in the project and mark them as possible Utility candidates (a simple //TODO). At the end of the project, we review the candidates and if they stick, we move them to the main library. Breaking changes are a no-no and we mark methods and classes as [Obsolete] if needed. But, it doesn't really matter because we increase the version on every publish. Hope this helps. A: We use branching in source control; everyone uses the head branch until they make a release. 
When they branch the release, they'll branch the common utilities project as well. Additionally, our utilities project has its own unit tests. That way, other teams can know if they would break the build for other teams. Of course, we still occasionally have problems like the ones you mention. But when one team checks in a change that breaks another team's build, it usually means the contract for that method/object has been broken somewhere. We look at these as opportunities to improve the design of the common utilities project... or at least to write more unit tests :/ A: I've had the EXACT same issue! I used to use project references, but it all seems to go bad when, as you say, you have many projects referencing it. I now compile to a DLL, and set the CopyLocal property for the DLL reference to false after the first build (otherwise I find it can override sub-projects and just become a mess). I guess in theory it should probably be GAC'ed, but if it's a library that is changing a lot (as mine is) this can become problematic...
{ "language": "en", "url": "https://stackoverflow.com/questions/41405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I execute PHP that is stored in a MySQL database? I'm trying to write a page that calls PHP that's stored in a MySQL database. The page that is stored in the MySQL database contains PHP (and HTML) code which I want to run on page load. How could I go about doing this? A: The eval() function was covered in other responses here. I agree you should limit use of eval unless it is absolutely needed. Instead of having PHP code in the db, you could store just the name of a class that has a method called, say, execute(). Whenever you need to run your custom PHP code, just instantiate the class whose name you fetched from the db and call ->execute() on it. It is a much cleaner solution that gives you great flexibility and improves site security significantly. A: You can use the eval command for this. I would recommend against this though, because there are a lot of pitfalls using this approach. Debugging is hard(er), and it implies some security risks (bad content in the DB gets executed, uh oh). See When is eval evil in php? for instance. Google for Eval is Evil, and you'll find a lot of examples why you should find another solution. Addition: Another good article with some references to exploits is this blogpost. It refers to past vBulletin and phpMyAdmin exploits which were caused by improper eval usage. A: You can look at the eval function in PHP. It allows you to run arbitrary PHP code. It can be a huge security risk, though, and is best avoided. A: Easy: $x // your variable with the data from the DB <?php echo eval("?>".$x."<?") ?> Let me know, works great for me in MANY applications, can't help but notice that everyone is quick to say how bad it is, but slow to actually help out with a straight answer... A: Have you considered using your Source Control system to store different forks for the various installations (and the modules that differ among them)? That would be one of several best practices for application configuration I can think of. 
Yours is not an unusual requirement, so it's a problem that's been solved by others in the past; but storing code in a database is one approach I think you'd have a hard time finding references to, or finding advised as a best practice. Good thing you posted the clarification. You've probably unintentionally posed an answer in search of a suitable question. A: How I did this is to have a field in the database that identifies something unique about the block of code needing to be executed. That one word is in the file name of that code. I put the strings together to point to the PHP file to be included. Example: $lookFor = $row['page']; include("resources/" . $lookFor . "Codebase.php"); In this way, even if a hacker could access your DB, he couldn't put malicious code straight in there to be executed. He could perhaps change the reference word, but unless he could actually put a file directly onto the server it would do him no good. If he could put files directly onto the server, you're sunk then anyway if he really wants to be nasty. Just my two cents worth. And yes, there are reasons you would want to execute stored code, but there are cons. A: Read the PHP code from the database, save it to a file with a unique name, and then include the file. This is an easy way to run the PHP code and debug it. $uniqid="tmp/".date("d-m-Y h-i-s").'_'.$Title."_".uniqid().".php"; $file = fopen($uniqid,"w"); fwrite($file,"<?php \r\n ".$R['Body']); fclose($file); // eval($R['Body']); include $uniqid;
{ "language": "en", "url": "https://stackoverflow.com/questions/41406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }