Q: Locate and add project reference for .Net assembly containing a given type? I'm working with a large (270+ project) VS.Net solution. Yes, I know this is pushing the friendship with VS but it's inherited and blah blah. Anyway, to speed up the solution load and compile time I've removed all projects that I'm not currently working on... which in turn has removed those project references from the projects I want to retain. So now I'm going through a mind numbing process of adding binary references to the retained projects so that the referenced Types can be found. Here's how I'm working at present; * *Attempt to compile, get thousands of errors, 'type or namespace missing' *Copy the first line of the error list to the clipboard *Using a perl script hooked up to a hotkey (AHK) I extract the type name from the error message and store it in the windows clipboard *I paste the type name into source insight symbol browser and note the assembly containing the Type *I go back to VS and add that assembly as a binary reference to the relevant project So now, after about 30 mins I'm thinking there's just got to be a quicker way... A: These solutions come to my mind: * *You can try to use Dependency Walker or similar program to analyze dependecies. *Parse MSBuild files (*.csproject) to get list of dependencies EDIT: Just found 2 cool tools Dependency Visualizer & Dependency Finder on codeplex I think they can help you greatly. EDIT: @edg, I totally misread your question, since you lose references from csproj files you have to use static analysis tool like NDepend or try to analyze dependencies in run time. A: One thing you can try is opening up the old .csproj file in notepad and replacing the ProjectReference tags with Reference tags. If you can write a parser, feel free to share. :) Entry in .csproj file if it is a project reference <ItemGroup> <ProjectReference Include="..\WindowsApplication2\WindowsApplication2.csproj"> <Project>{7CE93073-D1E3-49B0-949E-89C73F3EC282}</Project> <Name>WindowsApplication2</Name> </ProjectReference> </ItemGroup> Entry in .csproj file if it is an assembly reference <ItemGroup> <Reference Include="WindowsApplication2, Version=1.0.0.0, Culture=neutral, processorArchitecture=MSIL"> <SpecificVersion>False</SpecificVersion> <ExecutableExtension>.dll</ExecutableExtension> <HintPath>..\WindowsApplication2\bin\Release\WindowsApplication2.dll</HintPath> </Reference> </ItemGroup> A: No, there currently isn't a built-in quicker way. I would suggest not modifying the existing solution and create a new solution with new projects that duplicate (e.g. rename and edit) the projects you want to work on. If you find that the solution with the hundreds of projects is an issue for you then you'll likely just need to work on a subset. Start with a couple of new projects, add the binary (not project) reference and go from there. A: Instead of removing the project files from the solution, you could unload the projects you aren't working on (right-click the project and select Unload Project). As long as the unloaded project has been built once, any other project with a reference to it will be able to find the assembly in the project's output directory and build with it.
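As a rough illustration of how the "which assembly defines this type" lookup could be scripted instead of done by hand, the sketch below uses reflection to scan a folder of compiled assemblies and report where a given type name is defined. The folder path and type name are placeholder inputs, and Assembly.LoadFrom locks the files it reads, so point it at a copy of your output folder if that matters.

using System;
using System.IO;
using System.Linq;
using System.Reflection;

class FindTypeAssembly
{
    static void Main(string[] args)
    {
        // Placeholder inputs: substitute your own bin folder and the missing type name.
        string binFolder = args.Length > 0 ? args[0] : @"C:\build\bin";
        string typeName  = args.Length > 1 ? args[1] : "MyMissingType";

        foreach (string file in Directory.GetFiles(binFolder, "*.dll"))
        {
            Type[] types;
            try
            {
                types = Assembly.LoadFrom(file).GetTypes();
            }
            catch (ReflectionTypeLoadException ex)
            {
                // Some types fail to load when their own references are missing; keep the rest.
                types = ex.Types.Where(t => t != null).ToArray();
            }
            catch (BadImageFormatException)
            {
                continue; // not a managed assembly
            }

            foreach (Type t in types)
            {
                if (t.Name == typeName || t.FullName == typeName)
                    Console.WriteLine("{0} is defined in {1}", t.FullName, file);
            }
        }
    }
}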
{ "language": "en", "url": "https://stackoverflow.com/questions/53961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is wrong with this PHP regular expression? $output = preg_replace("|(/D)(/s+)(/d+)(;)|", "//1,//3;", $output); I'm trying to replace all alphabetical character followed by one or more whitespace characters (tabs and/or spaces) followed by one or more numerical characters followed by a semicolon with the alphabetical character followed by a comma followed by the numerical digits and then the semicolon. I'll provide an example: Start: hello world 1007; End: hello world,1007; A: The two | at the start and end probably are incorrect - and should both be forward-slashes. All other forward slashes should be backward slashes (and need escaping). And since PHP 4.04 $n is the preferred way of referring to a capture group. $output = preg_replace("/(\\D)\\s+(\\d+;)/", "$1,$2", $output); If you use single quotes you don't need to escape your backslashes: $output = preg_replace('/(\D)\s+(\d+;)/', '$1,$2', $output); A: Should those forward-slashes be backslashes? You'll need to escape them for PHP too unless you change your double-quotes to single-quotes. A: You want backslashes in the regular expression, not forward slashes. The starting and ending pipes are needed (or another delimiter for the regex) $x = "hello world 1007;"; echo preg_replace('|(\D)(\s+)(\d+)(;)|','$1,$3',$x); echo preg_replace('/(\D)(\s+)(\d+)(;)/','$1,$3',$x); echo preg_replace('{(\D)(\s+)(\d+)(;)}','$1,$3',$x);
{ "language": "en", "url": "https://stackoverflow.com/questions/53965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How would you implement the IEnumerator interface? I have a class that map objects to objects, but unlike dictionary it maps them both ways. I am now trying to implement a custom IEnumerator interface that iterates through the values. public class Mapper<K,T> : IEnumerable<T>, IEnumerator<T> { C5.TreeDictionary<K,T> KToTMap = new TreeDictionary<K,T>(); C5.HashDictionary<T,K> TToKMap = new HashDictionary<T,K>(); public void Add(K key, T value) { KToTMap.Add(key, value); TToKMap.Add(value, key); } public int Count { get { return KToTMap.Count; } } public K this[T obj] { get { return TToKMap[obj]; } } public T this[K obj] { get { return KToTMap[obj]; } } public IEnumerator<T> GetEnumerator() { return KToTMap.Values.GetEnumerator(); } public T Current { get { throw new NotImplementedException(); } } public void Dispose() { throw new NotImplementedException(); } object System.Collections.IEnumerator.Current { get { throw new NotImplementedException(); } } public bool MoveNext() { ; } public void Reset() { throw new NotImplementedException(); } } A: Just implement the IEnumerable<T> interface. No need to implement the IEnumerator<T> unless you want to do some special things in the enumerator, which for your case doesn't seem to be needed. public class Mapper<K,T> : IEnumerable<T> { public IEnumerator<T> GetEnumerator() { return KToTMap.Values.GetEnumerator(); } } and that's it. A: CreateEnumerable() returns an IEnumerable which implements GetEnumerator() public class EasyEnumerable : IEnumerable<int> { IEnumerable<int> CreateEnumerable() { yield return 123; yield return 456; for (int i = 0; i < 6; i++) { yield return i; }//for }//method public IEnumerator<int> GetEnumerator() { return CreateEnumerable().GetEnumerator(); }//method IEnumerator IEnumerable.GetEnumerator() { return CreateEnumerable().GetEnumerator(); }//method }//class A: Use yield return. What is the yield keyword used for in C#? A: Here's an example from the book "Algorithms (4th Edition) by Robert Sedgewick". It was written in java and i basically rewrote it in C#. public class Stack<T> : IEnumerable<T> { private T[] array; public Stack(int n) { array = new T[n]; } public Stack() { array = new T[16]; } public void Push(T item) { if (Count == array.Length) { Grow(array.Length * 2); } array[Count++] = item; } public T Pop() { if (Count == array.Length/4) { Shrink(array.Length/2); } return array[--Count]; } private void Grow(int size) { var temp = array; array = new T[size]; Array.Copy(temp, array, temp.Length); } private void Shrink(int size) { Array temp = array; array = new T[size]; Array.Copy(temp,0,array,0,size); } public int Count { get; private set; } public IEnumerator<T> GetEnumerator() { return new ReverseArrayIterator(Count,array); } IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); } // IEnumerator implementation private class ReverseArrayIterator : IEnumerator<T> { private int i; private readonly T[] array; public ReverseArrayIterator(int count,T[] array) { i = count; this.array = array; } public void Dispose() { } public bool MoveNext() { return i > 0; } public void Reset() { } public T Current { get { return array[--i]; } } object IEnumerator.Current { get { return Current; } } } } A: First, don't make your collection object implement IEnumerator<>. This leads to bugs. (Consider the situation where two threads are iterating over the same collection). 
Implementing an enumerator correctly turns out to be non-trivial, so C# 2.0 added special language support for doing it, based on the 'yield return' statement. Raymond Chen's recent series of blog posts ("The implementation of iterators in C# and its consequences") is a good place to get up to speed. * *Part 1: https://web.archive.org/web/20081216071723/http://blogs.msdn.com/oldnewthing/archive/2008/08/12/8849519.aspx *Part 2: https://web.archive.org/web/20080907004812/http://blogs.msdn.com/oldnewthing/archive/2008/08/13/8854601.aspx *Part 3: https://web.archive.org/web/20080824210655/http://blogs.msdn.com/oldnewthing/archive/2008/08/14/8862242.aspx *Part 4: https://web.archive.org/web/20090207130506/http://blogs.msdn.com/oldnewthing/archive/2008/08/15/8868267.aspx
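To tie the answers above together, here is a minimal sketch of what the Mapper class can look like when it only implements IEnumerable&lt;T&gt; and simply delegates to the underlying value collection. The C5 dictionaries are swapped for plain Dictionary&lt;,&gt; so the sketch stands alone, and the two indexers from the question are omitted. Note that the explicit, non-generic IEnumerable.GetEnumerator is required or the class will not compile.

using System.Collections;
using System.Collections.Generic;

public class Mapper<K, T> : IEnumerable<T>
{
    private readonly Dictionary<K, T> keyToValue = new Dictionary<K, T>();
    private readonly Dictionary<T, K> valueToKey = new Dictionary<T, K>();

    public void Add(K key, T value)
    {
        keyToValue.Add(key, value);
        valueToKey.Add(value, key);
    }

    public int Count { get { return keyToValue.Count; } }

    // Enumerate the values; no hand-written IEnumerator implementation needed.
    public IEnumerator<T> GetEnumerator()
    {
        return keyToValue.Values.GetEnumerator();
    }

    // Required by IEnumerable<T>; just forward to the generic version.
    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}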
{ "language": "en", "url": "https://stackoverflow.com/questions/53967", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Linking directly to a SWF, what are the downsides? Usually Flash and Flex applications are embedded on in HTML using either a combination of object and embed tags, or more commonly using JavaScript. However, if you link directly to a SWF file it will open in the browser window and without looking in the address bar you can't tell that it wasn't embedded in HTML with the size set to 100% width and height. Considering the overhead of the HTML, CSS and JavaScript needed to embed a Flash or Flex application filling 100% of the browser window, what are the downsides of linking directly to the SWF file instead? What are the upsides? I can think of one upside and three downsides: you don't need the 100+ lines of HTML, JavaScript and CSS that are otherwise required, but you have no plugin detection, no version checking and you lose your best SEO option (progressive enhancement). Update don't get hung up on the 100+ lines, I simply mean that the the amount of code needed to embed a SWF is quite a lot (and I mean including libraries like SWFObject), and it's just for displaying the SWF, which can be done without a single line by linking to it directly. A: Upsides for linking directly to SWF file: * *Faster access *You know it's a flash movie even before you click on the link *Skipping the html & js files (You won't use CSS to display 100% flash movie anyway) Downsides: * *You have little control on movie defaults. *You can't use custom background colors, transparency etc. *You can't use flashVars to send data to the movie from the HTML *Can't use fscommand from the movie to the page *Movie proportions are never the same as the user's window's aspect ratio *You can't compensate for browser incompetability (The next new browser comes out and you're in trouble) *No SEO *No page title, bad if you want people to bookmark properly. *No plugin information, download links etc. *If your SWF connects to external data sources, you might have cross domain problems. *Renaming the SWF file will also rename the link. Bad for versioning. In short, for a complicated application - always use the HTML. For a simple animation movie you can go either way. A: You also lose external control of the SWF. When it's embedded in HTML you can use javascript to communicate with the SWF. If the SWF is loaded directly that may not be possible. Your 100+ lines quote seems pretty high to me. The HTML that FlashDevelop generates for embedding a SWF is only around 35 lines, with an include of a single swfobject.js file. You shouldn't need to touch the js file, and at the most would only have to tweak the HTML in very minor ways to get it to do what you want. A: In my experience not all browsers handle this properly. I'm not really sure why (or which browsers) but I've mistakenly sent links like this to clients on occasion and they've often come back confused. I suspect their browser prompts them to download the file instead of displaying it properly. A: One upside I can think of is being able to specify GET parameters in the direct URL to the SWF, which will then be available in the Flash app (via Application.application.parameters in Flex, not sure how you'd access them in Flash CS3). This can of course be achieved by other means as well if you have an HTML wrapper but this way it's less work. A: Why would you need 100+ lines of code? Using something like swfobject reduces this amout quite some (and generally you don't want to do plugin detection, etc. by hand anyway). 
A: More advantages: * *Light weight look because you can get rid of the header with all the tool bars that seem to accumulate there, and even the scroll bar is not needed. This enhances the impact when you are trying to show a lot of action in a short flash. *The biggie: you get it in a window that you can drag larger or smaller and make the movie larger and smaller. The player will resize the movie to fill the window you have. This is great for things like group photos where everyone wants to enlarge to find themselves and their friends. I've done this for a one-frame Flash production! Downsides: As with popups in general, if you are asking for multiple ones from the same site, and you want different size popups, the browsers tend to simply override the size you ask for in window.open and reuse whatever is up. You need to close any open popup so the window.open will do a fresh create. It gets complicated, and I have not been able to get it to work across pages in a website. Anyone who has done this successfully, please post how! A: Adobe should be ashamed of themselves with the standard embed, which defeats the purpose of convention over configuration. Check swfobject (as mentioned above) or swfin
{ "language": "en", "url": "https://stackoverflow.com/questions/53989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Any good AJAX framework for Google App Engine apps? I am trying to implement AJAX in my Google App Engine application, and so I am looking for a good AJAX framework that will help me. Anyone has any idea? I am thinking about Google Web Toolkit, how good it is in terms of creating AJAX for Google App Engine? A: A nice way is to use an AJAX library is to take advantage of Google's AJAX Libraries API service. This is a bit faster and cleaner than downloading the JS and putting it in your /static/ folder and doesn't eat into your disk quota. In your javascript you would just put, for example: google.load("jquery", "1.3.2"); and/or google.load(google.load("dojo", "1.3.0"); Somewhere in your header you would put something like: <script src="http://www.google.com/jsapi?key=your-key-here"></script> And that's all you need to use Google's API libraries. A: There is no reason why you shouldn't use GAE and Google Web Toolkit (GWT) together. You write your backend code in Python and the frontend code in Java (and possibly some JavaScript), which is then compiled to JavaScript. When using another AJAX framework you will also have this difference between server and client side language. GWT has features that make remote invocation of java code on the server easier, but these are entirely optional. You can just use JSON or XML interfaces, just like with other AJAX frameworks. GWT 1.5 also comes with JavaScript Overlay Types, that basically allow you to treat a piece of JSON data like a Java object when developing the client side code. You can read more about this here. Update: Now that Google has added Java support for Google App Engine, you can develop both backend and frontend code in Java on a full Google stack - if you like. There is a nice Eclipse plugin from Google that makes it very easy to develop and deploy applications that use GAE, GWT or both. A: Here is how we've implemented Ajax on the Google App Engine, but the idea can be generalized to other platforms. We have a handler script for Ajax requests that responds -mostly- with JSON responses. The structure looks something like this (this is an excerpt from a standard GAE handler script): def Get(self, user): self.handleRequest() def Post(self, user): self.handleRequest() def handleRequest(self): ''' A dictionary that maps an operation name to a command. aka: a dispatcher map. ''' operationMap = {'getfriends': [GetFriendsCommand], 'requestfriend': [RequestFriendCommand, [self.request.get('id')]], 'confirmfriend': [ConfirmFriendCommand, [self.request.get('id')]], 'ignorefriendrequest': [IgnoreFriendRequestCommand, [self.request.get('id')]], 'deletefriend': [DeleteFriendCommand, [self.request.get('id')]]} # Delegate the request to the matching command class here. The commands are a simple implementation of the command pattern: class Command(): """ A simple command pattern. """ _valid = False def validate(self): """ Validates input. Sanitize user input here. """ self._valid = True def _do_execute(self): """ Executes the command. Override this in subclasses. """ pass @property def valid(self): return self._valid def execute(self): """ Override _do_execute rather than this. """ try: self.validate() except: raise return self._do_execute() # Make it easy to invoke commands: # So command() is equivalent to command.execute() __call__ = execute On the client side, we create an Ajax delegate. Prototype.js makes this easy to write and understand. Here is an excerpt: /** * Ajax API * * You should create a new instance for every call. 
*/ var AjaxAPI = Class.create({ /* Service URL */ url: HOME_PATH+"ajax/", /* Function to call on results */ resultCallback: null, /* Function to call on faults. Implementation not shown */ faultCallback: null, /* Constructor/Initializer */ initialize: function(resultCallback, faultCallback){ this.resultCallback = resultCallback; this.faultCallback = faultCallback; }, requestFriend: function(friendId){ return new Ajax.Request(this.url + '?op=requestFriend', {method: 'post', parameters: {'id': friendId}, onComplete: this.resultCallback }); }, getFriends: function(){ return new Ajax.Request(this.url + '?op=getfriends', {method: 'get', onComplete: this.resultCallback }); } }); to call the delegate, you do something like: new AjaxApi(resultHandlerFunction, faultHandlerFunction).getFriends() I hope this helps! A: I'd recommend looking into a pure javascript framework (probably Jquery) for your client-side code, and write JSON services in python- that seems to be the easiest / bestest way to go. Google Web Toolkit lets you write the UI in Java and compile it to javascript. As Dave says, it may be a better choice where the backend is in Java, as it has nice RPC hooks for that case. A: You may want to have a look at Pyjamas (http://pyjs.org/), which is "GWT for Python". A: try also GQuery for GWT. This is Java code: public void onModuleLoad() { $("div").css("color", "red").click(new Function() { public void f(Element e) { Window.alert("Hello"); $(e).as(Effects).fadeOut(); } }); } Being Java code resulting in somewhat expensive compile-time (Java->JavaScript) optimizations and easier refactoring. Nice, it isn't? A: As Google Web Toolkit is a subset of Java it works best when you Java at the backend too. Since Google App Engine is currently Python only I think you'd have to do a lot of messing about to get your server and client to talk nicely to each other. jQuery seems to be the most popular JavaScript library option in the AJAX Tag at DjangoSnippets.com. Edit: The above is only true of Google App Engine applications written in Python. As Google App Engine now supports Java, GWT could now be a good choice for writing an AJAX front end. Google even have a tutorial showing you how to do it. A: jQuery is a fine library, but also check out the Prototype JavaScript framework. It really turns JavaScript from being an occasionally awkward language into a beautiful and elegant language. A: If you want to be able to invoke method calls from JavaScript to Python, JSON-RPC works well with Google App Engine. See Google's article, "Using AJAX to Enable Client RPC Requests", for details. A: I'm currently using JQuery for my GAE app and it works beautifully for me. I have a chart (google charts) that is dynamic and uses an Ajax call to grab a JSON string. It really seems to work fine for me. A: Google has recently announced the Java version of Google App Engine. This release also provides an Eclipse plugin that makes developing GAE applications with GWT easier. See details here: http://code.google.com/appengine/docs/java/overview.html Of course, it would require you to rewrite your application in Java instead of python, but as someone who's worked with GWT, let me tell you, the advantages of using a modern IDE on your AJAX codebase are totally worth it.
{ "language": "en", "url": "https://stackoverflow.com/questions/53997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Could not load type 'XXX.Global' Migrating a project from ASP.NET 1.1 to ASP.NET 2.0 and I keep hitting this error. I don't actually need Global because I am not adding anything to it, but after I remove it I get more errors. A: The reason I encounter this issue is because I change the build configuration. When I set a web project to x86, it changes the output path to bin\x86\Debug. However, the output path should be bin and the web server won't find the binaries because of this. The solution thus is to change the output path of the website back to bin after you change the build configuration. A: There are a few things you can try with this, seems to happen alot and the solution varies for everyone it seems. * *If you are still using the IIS virtual directory make sure its pointed to the correct directory and also check the ASP.NET version it is set to, make sure it is set to ASP.NET 2.0. *Clear out your bin/debug/obj all of them. Do a Clean solution and then a Build Solution. *Check your project file in a text editor and make sure where its looking for the global file is correct, sometimes it doesnt change the directory. *Remove the global from the solution and add it back after saving and closing. make sure all the script tags in the ASPX file point to the correct one after. *You can try running the Convert to Web Application tool, that redoes all of the code and project files. *IIS Express is using the wrong root directory (see answer in VS 2012 launching app based on wrong path) Make sure you close VS after you try them. Those are some things I know to try. Hope one of them works for you. A: I've found that it happens when the Global.asax.(vb|cs) wasn't converted to a partial class properly. Quickest solution is to surround the class name 'Global' with [square brackets] like so (in VB.Net): Public Class [Global] Inherits System.Web.HttpApplication ... A: Deleting the existing global.asax file and adding a new one, clears out this error. This has worked for me many times. A: If your using visual studio 2010 this error can occur when you change the configuration deployment type. The 3 types are x86, x64 and Mixed mode. Changing to mixed mode setting for all projects in solution should resolve the issue. Don't forget to delete the bin, Lib files and change the tempdirectory output if your an ASP.NET website. A: This just happened to me and after trying everything else, I just happened to notice on the error message that the app pool was set to .Net 1.1. I upgraded the app to 2.0, converted to web application, but never changed the app pool: Version Information: Microsoft .NET Framework Version:1.1.4322.2490; ASP.NET Version:1.1.4322.2494 A: This one drove me completely insane and I couldn't find anything helpful to solve it. This is probably not the reason most people have this issue but I just hope that someone else will benefit from this answer. What caused my problem was a <clear /> statement in the <assemblies> config section. I had added this because in production it had been required because there were multiple unrelated applications on the same hosting plan and I didn't want any of them to be affected by others. The more correct solution would have been to have just used web config transforms on publish. Hope this helps someone else! A: Changing the address's port number (localhost:) worked for me :) A: I fixed this error by simply switching from Debug to Release, launch program (It worked on release), then switch back to Debug. 
I tried just about everything else, including restarting Visual Studio and nothing worked. A: I had this same problem installing my app to a server. It ended up being the installer project, it wasn't installing all the files needed to run the web app. I tried to figure out where it was broken but in the end I had to revert the project to the previous version to fix it. Hope this helps someone... A: In my case, a AfterBuild target in the project to compile the web application was the reason for this error. See here for more info A: Removing Language="c#" in global.asax file resolved the issue for me. A: In my case, I was duplicating an online site locally and getting this error locally in Utildev Cassini for asp.net 2.0. It turned out that I copied only global.asax locally and didn't copy the App_code conterpart of it. Copying it fixed the problem. A: When you try to access the Microsoft Dynamics NAV Web client, you get the following error. Could not load type 'System.ServiceModel.Activation.HttpModule' from assembly 'System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089 This error can occur when there are multiple versions of the .NET Framework on the computer that is running IIS, and IIS was installed after .NET Framework 4.0 or before the Service Model in Windows Communication Foundation was registered. For Windows 7 and Windows Server 2008, use the ASP.NET IIS Registration Tool (aspnet_regiis.exe,) to register the correct version of ASP.NET. For more information about the aspnet_regiis.exe, see ASP.NET IIS Registration Tool at Microsoft web site. try this solution https://www.youtube.com/watch?v=LNwpNqgX7qw A: Ensure compiled dll of your project placed in proper bin folder. In my case, when i have changed the compiled directory of our subproject to bin folder of our main project, it worked. A: Had this error in my case I was renaming the application. I changed the name of the Project and the name of the class but neglected to change the "Assembly Name" or "Root namespace" in the "My Project" or project properties. A: Deletin obj, bin folders and rebuilding fixed my issue A: I had this problem. I solved it with this solution, by giving CREATOR OWNER full rights to the Windows Temp folder. For some reason, that user had no rights at all assigned. Maybe because some time ago I ran Combofix on my computer.
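One concrete thing to verify when chasing this error, sketched below: the Inherits attribute in the Global.asax markup must exactly match the namespace and class of the code-behind, and in a Web Application project that class must end up compiled into the assembly in bin. The names here are placeholders, not taken from the original question.

// Global.asax (markup) should contain a single directive along these lines:
//   <%@ Application Codebehind="Global.asax.cs" Inherits="MyWebApp.Global" Language="C#" %>
// The Inherits value must match the namespace + class name below exactly.

using System;
using System.Web;

namespace MyWebApp   // placeholder: use your project's root namespace
{
    public class Global : HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            // application start-up code goes here
        }
    }
}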
{ "language": "en", "url": "https://stackoverflow.com/questions/54001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "62" }
Q: Returning an element from a List in Scala I've recently been working on a beginner's project in Scala, and have a beginner question about Scala's Lists. Say I have a list of tuples ( List[Tuple2[String, String]], for example). Is there a convenience method to return the first occurence of a specified tuple from the List, or is it necessary to iterate through the list by hand? A: You could try using find. (Updated scala-doc location of find) A: As mentioned in a previous comment, find is probably the easiest way to do this. There are actually three different "linear search" methods in Scala's collections, each returning a slightly different value. Which one you use depends upon what you need the data for. For example, do you need an index, or do you just need a boolean true/false? A: If you're learning scala, I'd take a good look at the Seq trait. It provides the basis for much of scala's functional goodness. A: scala> val list = List(("A", "B", 1), ("C", "D", 1), ("E", "F", 1), ("C", "D", 2), ("G", "H", 1)) list: List[(java.lang.String, java.lang.String, Int)] = List((A,B,1), (C,D,1), (E,F,1), (C,D,2), (G,H,1)) scala> list find {e => e._1 == "C" && e._2 == "D"} res0: Option[(java.lang.String, java.lang.String, Int)] = Some((C,D,1)) A: You could also do this, which doesn't require knowing the field names in the Tuple2 class--it uses pattern matching instead: list find { case (x,y,_) => x == "C" && y == "D" } "find" is good when you know you only need one; if you want to find all matching elements you could either use "filter" or the equivalent sugary for comprehension: for ( (x,y,z) <- list if x == "C" && y == "D") yield (x,y,z) A: Here's code that may help you. I had a similar case, having a collection of base class entries (here, A) out of which I wanted to find a certain derived class's node, if any (here, B). class A case class B(val name: String) extends A object TestX extends App { val states: List[A] = List( B("aa"), new A, B("ccc") ) def findByName( name: String ): Option[B] = { states.find{ case x: B if x.name == name => return Some(x) case _ => false } None } println( findByName("ccc") ) // "Some(B(ccc))" } The important part here (for my app) is that findByName does not return Option[A] but Option[B]. You can easily modify the behaviour to return B instead, and throw an exception if none was found. Hope this helps. A: Consider collectFirst which delivers Some[(String,String)] for the first matching tuple or None otherwise, for instance as follows, xs collectFirst { case t@(a,_) if a == "existing" => t } Some((existing,str)) scala> xs collectFirst { case t@(a,_) if a == "nonExisting" => t } None Using @ we bind the value of the tuple to t so that a whole matching tuple can be collected.
{ "language": "en", "url": "https://stackoverflow.com/questions/54010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Programmatically access browser history how can i create an application to read all my browser (firefox) history? i noticed that i have in C:\Users\user.name\AppData\Local\Mozilla\Firefox\Profiles\646vwtnu.default what looks like a sqlite database (urlclassifier3.sqlite) but i don't know if its really what is used to store de history information. i searched for examples on how to do this but didn't find anything. ps: although the title is similar i believe this question is not the same as "How do you access browser history?" A: I believe places.sqlite is the one you should be looking into for history (Firefox 3). Below are a couple of Mozilla wiki entries that have some info on the subject. * *Mozilla 2: Unified Storage *Browser History (see especially section "Database Design" here) In earlier versions of Firefox they stored history in a file called history.dat, which was encoded in a format called "Mork". This perl script by Jamie Zawinski can be used to parse Mork files. A: I also found the following links to be interesting: * *Literally make history with Firefox 3 *SQLite on .NET - Get up and running in 3 minutes. *SQLite Manager Firefox Addon After adding a reference to System.Data.Sqlite in my .Net project, all I had to do to create a connection was: cnn = New SQLiteConnection("data source=c:\Users\user.name\AppData\Roaming\Mozilla\Firefox\Profiles\646vwtnu.default\places.sqlite") cnn.Open() I had one minor glitch has the .net sqlite provider does not support sqlite3_enable_shared_cache which I believe is preventing me to open the places.sqlite database while having firefox running (see Support for sqlite3_enable_shared_cache) A: The Firefox SQLite Manager Addon is a great tool. If you wish to learn about the Firefox Places design and DB schema visit Mozilla Places. A: Indeed the history is in places.sqlite but the database file is locked. So you need to make a copy to be able to access it: $ pwd /home/amirouche/.mozilla/firefox/p4x432.default $ ls -l *sqlite -rw-r--r-- 1 amirouche amirouche 229376 Oct 4 12:39 content-prefs.sqlite -rw-r--r-- 1 amirouche amirouche 1572864 Oct 4 12:51 cookies.sqlite -rw-r--r-- 1 amirouche amirouche 40501248 Oct 4 12:47 favicons.sqlite -rw-r--r-- 1 amirouche amirouche 294912 Oct 4 12:46 formhistory.sqlite -rw-r--r-- 1 amirouche amirouche 196608 Oct 4 12:50 permissions.sqlite -rw-r--r-- 1 amirouche amirouche 36700160 Oct 4 12:50 places.sqlite -rw-r--r-- 1 amirouche amirouche 65536 Oct 4 11:50 protections.sqlite -rw-r--r-- 1 amirouche amirouche 512 Jul 24 23:41 storage.sqlite -rw-r--r-- 1 amirouche amirouche 131072 Oct 4 12:05 storage-sync.sqlite -rw-r--r-- 1 amirouche amirouche 15892480 Oct 4 12:51 webappsstore.sqlite $ sqlite3 places.sqlite SQLite version 3.27.2 2019-02-25 16:06:06 Enter ".help" for usage hints. sqlite> .schema Error: database is locked sqlite> $ cp places.sqlite places.backup.sqlite $ sqlite3 places.backup.sqlite SQLite version 3.27.2 2019-02-25 16:06:06 Enter ".help" for usage hints. 
sqlite> .schema Here is the output, one interesting table is moz_places on line 2: CREATE TABLE moz_origins ( id INTEGER PRIMARY KEY, prefix TEXT NOT NULL, host TEXT NOT NULL, frecency INTEGER NOT NULL, UNIQUE (prefix, host) ); CREATE TABLE moz_places ( id INTEGER PRIMARY KEY, url LONGVARCHAR, title LONGVARCHAR, rev_host LONGVARCHAR, visit_count INTEGER DEFAULT 0, hidden INTEGER DEFAULT 0 NOT NULL, typed INTEGER DEFAULT 0 NOT NULL, frecency INTEGER DEFAULT -1 NOT NULL, last_visit_date INTEGER , guid TEXT, foreign_count INTEGER DEFAULT 0 NOT NULL, url_hash INTEGER DEFAULT 0 NOT NULL , description TEXT, preview_image_url TEXT, origin_id INTEGER REFERENCES moz_origins(id)); CREATE TABLE moz_historyvisits ( id INTEGER PRIMARY KEY, from_visit INTEGER, place_id INTEGER, visit_date INTEGER, visit_type INTEGER, session INTEGER); CREATE TABLE moz_inputhistory ( place_id INTEGER NOT NULL, input LONGVARCHAR NOT NULL, use_count INTEGER, PRIMARY KEY (place_id, input)); CREATE TABLE moz_bookmarks ( id INTEGER PRIMARY KEY, type INTEGER, fk INTEGER DEFAULT NULL, parent INTEGER, position INTEGER, title LONGVARCHAR, keyword_id INTEGER, folder_type TEXT, dateAdded INTEGER, lastModified INTEGER, guid TEXT, syncStatus INTEGER NOT NULL DEFAULT 0, syncChangeCounter INTEGER NOT NULL DEFAULT 1); CREATE TABLE moz_bookmarks_deleted ( guid TEXT PRIMARY KEY, dateRemoved INTEGER NOT NULL DEFAULT 0); CREATE TABLE moz_keywords ( id INTEGER PRIMARY KEY AUTOINCREMENT, keyword TEXT UNIQUE, place_id INTEGER, post_data TEXT); CREATE TABLE sqlite_sequence(name,seq); CREATE TABLE moz_anno_attributes ( id INTEGER PRIMARY KEY, name VARCHAR(32) UNIQUE NOT NULL); CREATE TABLE moz_annos ( id INTEGER PRIMARY KEY, place_id INTEGER NOT NULL, anno_attribute_id INTEGER, content LONGVARCHAR, flags INTEGER DEFAULT 0, expiration INTEGER DEFAULT 0, type INTEGER DEFAULT 0, dateAdded INTEGER DEFAULT 0, lastModified INTEGER DEFAULT 0); CREATE TABLE moz_items_annos ( id INTEGER PRIMARY KEY, item_id INTEGER NOT NULL, anno_attribute_id INTEGER, content LONGVARCHAR, flags INTEGER DEFAULT 0, expiration INTEGER DEFAULT 0, type INTEGER DEFAULT 0, dateAdded INTEGER DEFAULT 0, lastModified INTEGER DEFAULT 0); CREATE TABLE moz_meta (key TEXT PRIMARY KEY, value NOT NULL) WITHOUT ROWID ; CREATE TABLE sqlite_stat1(tbl,idx,stat); CREATE INDEX moz_places_url_hashindex ON moz_places (url_hash); CREATE INDEX moz_places_hostindex ON moz_places (rev_host); CREATE INDEX moz_places_visitcount ON moz_places (visit_count); CREATE INDEX moz_places_frecencyindex ON moz_places (frecency); CREATE INDEX moz_places_lastvisitdateindex ON moz_places (last_visit_date); CREATE UNIQUE INDEX moz_places_guid_uniqueindex ON moz_places (guid); CREATE INDEX moz_places_originidindex ON moz_places (origin_id); CREATE INDEX moz_historyvisits_placedateindex ON moz_historyvisits (place_id, visit_date); CREATE INDEX moz_historyvisits_fromindex ON moz_historyvisits (from_visit); CREATE INDEX moz_historyvisits_dateindex ON moz_historyvisits (visit_date); CREATE INDEX moz_bookmarks_itemindex ON moz_bookmarks (fk, type); CREATE INDEX moz_bookmarks_parentindex ON moz_bookmarks (parent, position); CREATE INDEX moz_bookmarks_itemlastmodifiedindex ON moz_bookmarks (fk, lastModified); CREATE INDEX moz_bookmarks_dateaddedindex ON moz_bookmarks (dateAdded); CREATE UNIQUE INDEX moz_bookmarks_guid_uniqueindex ON moz_bookmarks (guid); CREATE UNIQUE INDEX moz_keywords_placepostdata_uniqueindex ON moz_keywords (place_id, post_data); CREATE UNIQUE INDEX moz_annos_placeattributeindex ON moz_annos 
(place_id, anno_attribute_id); CREATE UNIQUE INDEX moz_items_annos_itemattributeindex ON moz_items_annos (item_id, anno_attribute_id);
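To turn that schema into actual history entries from .NET, a sketch along the following lines can work with the System.Data.SQLite provider mentioned earlier in the thread: copy places.sqlite first (the live file is locked while Firefox runs, as shown above), then join moz_places to moz_historyvisits. The profile path is a placeholder, and the timestamps are assumed to be PRTime values, i.e. microseconds since the Unix epoch.

using System;
using System.Data.SQLite;
using System.IO;

class FirefoxHistoryDump
{
    static void Main()
    {
        // Placeholder profile folder; substitute your own.
        string profile = @"C:\Users\user.name\AppData\Roaming\Mozilla\Firefox\Profiles\646vwtnu.default";
        string copy = Path.Combine(Path.GetTempPath(), "places.copy.sqlite");

        // Work on a copy, since the live database is locked by Firefox.
        File.Copy(Path.Combine(profile, "places.sqlite"), copy, true);

        using (var cnn = new SQLiteConnection("data source=" + copy))
        {
            cnn.Open();
            string sql = @"SELECT p.url, p.title, v.visit_date
                           FROM moz_historyvisits v
                           JOIN moz_places p ON p.id = v.place_id
                           ORDER BY v.visit_date DESC
                           LIMIT 50";

            using (var cmd = new SQLiteCommand(sql, cnn))
            using (var reader = cmd.ExecuteReader())
            {
                DateTime epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
                while (reader.Read())
                {
                    long micros = reader.GetInt64(2);              // assumed PRTime (microseconds)
                    DateTime when = epoch.AddTicks(micros * 10);   // 1 microsecond = 10 ticks
                    string title = reader.IsDBNull(1) ? "" : reader.GetString(1);
                    Console.WriteLine("{0:u}  {1}  {2}", when, reader.GetString(0), title);
                }
            }
        }
    }
}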
{ "language": "en", "url": "https://stackoverflow.com/questions/54036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Credit card expiration dates - Inclusive or exclusive? Say you've got a credit card number with an expiration date of 05/08 - i.e. May 2008. Does that mean the card expires on the morning of the 1st of May 2008, or the night of the 31st of May 2008? A: In my experience, it has expired at the end of that month. That is based on the fact that I can use it during that month, and that month is when my bank sends a new one. A: According to Visa's "Card Acceptance and Chargeback Management Guidelines for Visa Merchants"; "Good Thru" (or "Valid Thru") Date is the expiration date of the card: A card is valid through the last day of the month shown, (e .g ., if the Good Thru date is 03/12,the card is valid through March 31, 2012 and expires on April 1, 2012 .) It is located below the embossed account number. If the current transaction date is after the "Good Thru" date, the card has expired. A: I process a lot of credit card transaction at work, and I can tell you that the expiry date is inclusive. Also, I agree with Gorgapor. Don't write your own processing code. They are some good tools out there for credit card processing. Here we have been using Monetra for 3 years and it does a pretty decent job at it. A: lots of big companies dont even use your expiration date anymore because it causes auto-renewal of payments to be lost when cards are issued with new expiration dates and the same account number. This has been a huge problem in the service industry, so these companies have cornered the card issuers into processing payments w/o expiration dates to avoid this pitfall. Not many people know about this yet, so not all companies use this practice. A: How do time zones factor in this analysis. Does a card expire in New York before California? Does it depend on the billing or shipping addresses? A: If you are writing a site which takes credit card numbers for payment: * *You should probably be as permissive as possible, so that if it does expire, you allow the credit card company to catch it. So, allow it until the last second of the last day of the month. *Don't write your own credit card processing code. If^H^HWhen you write a bug, someone will lose real money. We all make mistakes, just don't make decisions that turn your mistakes into catastrophes. A: It took me a couple of minutes to find a site that I could source for this. The card is valid until the last day of the month indicated, after the last [sic]1 day of the next month; the card cannot be used to make a purchase if the merchant attempts to obtain an authorization. - Source Also, while looking this up, I found an interesting article on Microsoft's website using an example like this, exec summary: Access 2000 for a month/year defaults to the first day of the month, here's how to override that to calculate the end of the month like you'd want for a credit card. Additionally, this page has everything you ever wanted to know about credit cards. * *This is assumed to be a typo and that it should read "..., after the first day of the next month; ..." A: Have a look on one of your own credit cards. It'll have some text like EXPIRES END or VALID THRU above the date. So the card expires at the end of the given month. A: I had a Automated Billing setup online and the credit card said it say good Thru 10/09, but the card was rejected the first week in October and again the next week. Each time it was rejected it cost me a $10 fee. Don't assume it good thru the end of the month if you have automatic billing setup. 
A: In your example, the card is valid through 31 May 2008 and is expired as of June 2008. Without knowing what you are doing, I can't say definitively whether you should be validating ahead of time, but be aware that sometimes business rules defy all logic. For example, where I used to work they often did not process a card at all, or would continue on a transaction failure, simply so they could contact the customer and get a different card.
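If you do end up checking the date yourself despite the advice above, the inclusive rule the answers describe reduces to a single comparison: the card stays usable until the first day of the month after the printed month. A minimal sketch (deliberately ignoring time zones, which the thread notes is a separate question):

using System;

static class CardExpiry
{
    // True if a card printed "valid thru" month/year is expired,
    // treating the whole printed month as still valid.
    public static bool IsExpired(int month, int year, DateTime today)
    {
        DateTime firstInvalidDay = new DateTime(year, month, 1).AddMonths(1);
        return today >= firstInvalidDay;
    }
}

// Example: a card marked 05/08 is good through 31 May 2008.
//   CardExpiry.IsExpired(5, 2008, new DateTime(2008, 5, 31)) == false
//   CardExpiry.IsExpired(5, 2008, new DateTime(2008, 6, 1))  == true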
{ "language": "en", "url": "https://stackoverflow.com/questions/54037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "131" }
Q: Running a scheduled task in a Wordpress plug-in I'm trying to write a Wordpress plug-in that automatically posts a blog post at a certain time of day. For example, read a bunch of RSS feeds and post a daily digest of all new entries. There are plug-ins that do something similar to what I want, but many of them rely on a cron job for the automated scheduling. I'll do that if I have to, but I was hoping there was a better way. Getting a typical Wordpress user to add a cron job isn't exactly friendly. Is there a good way to schedule a task that runs from a Wordpress plug-in? It doesn't have to run at exactly the right time. A: http://codex.wordpress.org/Function_Reference/wp_schedule_event A: pseudo-cron is good but the two issues it has is 1, It requires someone to "hit" the blog to execute. Low volume sites will potentially have wide ranging execution times so don't be two specific about the time. 2, The processing happens before the page loads. So if teh execution time happens and you have lots of "cron" entries you potentially upset visitors by giving them a sub standard experience. Just my 2 cents :-) A: vBulletin uses a sort of Pseudo-Cron system, that basically checks a schedule on every page access, and fires any processes that are due then. It's been a while since I worked with Wordpress, but I think something like this should work if it runs on each page view. A: I think the best way to do this is with a pseudo-cron. I have seen it on several occasions, and although not exact in the timing, it should do what you need it to do. Since in Wordpress the index.php is the first thing always hit based upon the settings in the .htaccess, create a file called pseudo-cron.php, dump it into the root directory, and then require it once from the index. Whenever someone hits the site, it will run, and you can use it to initiate a script, and check if another daily digest needs to be generated depending upon the time of the day, and when the previous digest ran.
{ "language": "en", "url": "https://stackoverflow.com/questions/54038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Does anyone know a library for working with quantity/unit of measure pairs? I would like to be able to do such things as var m1 = new UnitOfMeasureQuantityPair(123.00, UnitOfMeasure.Pounds); var m2 = new UnitOfMeasureQuantityPair(123.00, UnitOfMeasure.Liters); m1.ToKilograms(); m2.ToPounds(new Density(7.0, DensityType.PoundsPerGallon); If there isn't something like this already, anybody interested in doing it as an os project? A: Check out the Measurement Unit Conversion Library on The Code Project. A: We actually built one in-house where I work. Unfortunately, it's not available for the public. This is actually a great project to work on and it's not that hard to do. If you plan on doing something by yourself, I suggest you read about Quantity, Dimension and Unit (fundamental units). These helped us understand the domain of the problem clearly and helped a lot in designing the library. A: In Chapter 10. Quantity archetype pattern of the book Enterprise Patterns and MDA: Building Better Software with Archetype Patterns and UML by Jim Arlow and Ila Neustadt there is a really useful discussion of this topic and some general patterns you could use as a guide. A: Also see the most recent F# release - it has static measurement domain/dimension analysis. A: There is an (old) article on CodeProject. I've used it in a production environment previously and it worked great. We had some minor issues (performance amongst others), which I addressed. I put all this in a library you can find here. Disclaimer: I am the maintainer of this project, so this might be conceived as a shameless plug. The library is free (as in beer and as in speech) however. It includes the SI units, but also allows creating new units and conversions. So you can for example create a unit "XP" (experience points). You can then register a conversion to "m" (meter, makes no sense, but you can do it). You can also create an amount like 3 XP/min (3 experience points per minute). I believe it offers decent defaults, while allowing flexibility. A: Unix units is imho brilliant; source must be on the web somewhere. (Under "bugs", the original doc said "do not base your financial plans on the currency conversions".)
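If none of the libraries above fits, the API from the question is small enough to sketch by hand. The illustration below is not any of the libraries mentioned, just a hand-rolled quantity/unit pair: mass units convert through a kilograms base, volume units through a liters base, and a volume-to-mass conversion takes an explicit density, mirroring the ToPounds(density) call in the question. The enum, factor set and method names are simplifying assumptions, and there is no dimension checking.

using System;
using System.Collections.Generic;

public enum UnitOfMeasure { Kilograms, Pounds, Liters, Gallons }

public class Quantity
{
    // Factors to the base unit of each dimension (kg for mass, liters for volume).
    static readonly Dictionary<UnitOfMeasure, double> ToBase = new Dictionary<UnitOfMeasure, double>
    {
        { UnitOfMeasure.Kilograms, 1.0 },
        { UnitOfMeasure.Pounds,    0.45359237 },
        { UnitOfMeasure.Liters,    1.0 },
        { UnitOfMeasure.Gallons,   3.785411784 },
    };

    public double Amount { get; private set; }
    public UnitOfMeasure Unit { get; private set; }

    public Quantity(double amount, UnitOfMeasure unit)
    {
        Amount = amount;
        Unit = unit;
    }

    // Convert within the same dimension, e.g. pounds -> kilograms.
    public Quantity ConvertTo(UnitOfMeasure target)
    {
        double inBase = Amount * ToBase[Unit];
        return new Quantity(inBase / ToBase[target], target);
    }

    // Cross-dimension conversion: volume -> mass, given a density in kg per liter.
    public Quantity ToMass(double kgPerLiter, UnitOfMeasure massUnit)
    {
        double liters = Amount * ToBase[Unit];
        double kilograms = liters * kgPerLiter;
        return new Quantity(kilograms / ToBase[massUnit], massUnit);
    }

    public override string ToString() { return Amount + " " + Unit; }
}

// Example: new Quantity(123.0, UnitOfMeasure.Pounds).ConvertTo(UnitOfMeasure.Kilograms)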
{ "language": "en", "url": "https://stackoverflow.com/questions/54043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Regex's For Developers I've been trying to figure out a regex to allow me to search for a particular string while automatically skipping comments. Anyone have an RE like this or know of one? It doesn't even need to be sophisticated enough to skip #if 0 blocks; I just want it to skip over // and /* blocks. The converse, that is only search inside comment blocks, would be very useful too. Environment: VS 2003 A: This is a harder problem than it might at first appear, since you need to consider comment tokens inside strings, comment tokens that are themselves commented out etc. I wrote a string and comment parser for C#, let me see if I can dig out something that will help... I'll update if I find anything. EDIT: ... ok, so I found my old 'codemasker' project. Turns out that I did this in stages, not with a single regex. Basically I inch through a source file looking for start tokens, when I find one I then look for an end-token and mask everything in between. This takes into account the context of the start token... if you find a token for "string start" then you can safely ignore comment tokens until you find the end of the string, and vice versa. Once the code is masked (I used guids as masks, and a hashtable to keep track) then you can safely do your search and replace, then finally restore the masked code. Hope that helps. A: Be especially careful with strings. Strings often have escape sequences which you also have to respect while you're finding the end of them. So e.g. "This is \"a test\"". You cannot blindly look for a double-quote to terminate. Also beware of ``"This is \"`, which shows that you cannot just say "unless double-quote is preceded by a backslash." In summary, make some brutal unit tests! A: A regexp is not the best tool for the job. Perl FAQ: C comments: #!/usr/bin/perl $/ = undef; $_ = <>; s#/\*[^*]*\*+([^/*][^*]*\*+)*/|([^/"']*("[^"\\]*(\\[\d\D][^"\\]*)*"[^/"']*|'[^'\\]*(\\[\d\D][^'\\]*)*'[^/"']*|/+[^*/][^/"']*)*)#$2#g; print; C++ comments: #!/usr/local/bin/perl $/ = undef; $_ = <>; s#//(.*)|/\*[^*]*\*+([^/*][^*]*\*+)*/|"(\\.|[^"\\])*"|'(\\.|[^'\\])*'|[^/"']+# $1 ? "/*$1 */" : $& #ge; print; A: I would make a copy and strip out the comments first, then search the string the regular way.
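Following the last suggestion (strip the comments into a scratch copy, then search that), here is a small hand-rolled scanner instead of a regex, since keeping string literals and escape sequences intact is easier this way. It blanks out // and /* */ comments while preserving line breaks and offsets, so search hits in the stripped copy line up with the original file. It does not handle #if 0 blocks or verbatim strings, and it is written in C# purely as an illustration, independent of the VS 2003 environment in the question.

using System.Text;

static class CommentStripper
{
    enum State { Code, LineComment, BlockComment, Str, Chr }

    // Returns the source with comments replaced by spaces.
    public static string Strip(string src)
    {
        StringBuilder sb = new StringBuilder(src.Length);
        State state = State.Code;

        for (int i = 0; i < src.Length; i++)
        {
            char c = src[i];
            char next = i + 1 < src.Length ? src[i + 1] : '\0';

            switch (state)
            {
                case State.Code:
                    if (c == '/' && next == '/') { state = State.LineComment; sb.Append("  "); i++; }
                    else if (c == '/' && next == '*') { state = State.BlockComment; sb.Append("  "); i++; }
                    else
                    {
                        if (c == '"') state = State.Str;
                        else if (c == '\'') state = State.Chr;
                        sb.Append(c);
                    }
                    break;

                case State.LineComment:
                    if (c == '\n') { state = State.Code; sb.Append(c); }
                    else sb.Append(' ');
                    break;

                case State.BlockComment:
                    if (c == '*' && next == '/') { state = State.Code; sb.Append("  "); i++; }
                    else sb.Append(c == '\n' ? '\n' : ' ');
                    break;

                case State.Str:
                    sb.Append(c);
                    if (c == '\\' && next != '\0') { sb.Append(next); i++; }   // keep escaped character
                    else if (c == '"') state = State.Code;
                    break;

                case State.Chr:
                    sb.Append(c);
                    if (c == '\\' && next != '\0') { sb.Append(next); i++; }
                    else if (c == '\'') state = State.Code;
                    break;
            }
        }
        return sb.ToString();
    }
}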
{ "language": "en", "url": "https://stackoverflow.com/questions/54047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do you get the logged in Windows domain account from an ASP.NET application? We have an ASP.NET application that manages it's own User, Roles and Permission database and we have recently added a field to the User table to hold the Windows domain account. I would like to make it so that the user doesn't have to physically log in to our application, but rather would be automatically logged in based on the currently logged in Windows domain account DOMAIN\username. We want to authenticate the Windows domain account against our own User table. This is a piece of cake to do in Windows Forms, is it possible to do this in Web Forms? I don't want the user to be prompted with a Windows challenge screen, I want our system to handle the log in. Clarification: We are using our own custom Principal object. Clarification: Not sure if it makes a difference or not, but we are using IIS7. A: Integration of this sort is at the server level, it's IIS that decides that the user is not logged in; and it's IIS that sends back the authentication prompt to the user, to which the browser reacts. As you want to use the domain login there is only one way to do this; integrated windows authentication. This will only work if the IIS server is also part of the domain and the users are accessing the machine directly, not through a proxy, and from machines which are also part of the domain (with the users suitably logged in). However your custom principal object may create fun and games; authentication of this type will be a WindowsPrincipal and a WindowsIdentity; which you can access via the User object (see How To: Use Windows Authentication in ASP.NET 2.0) I assume you want a custom principal because of your custom roles? I doubt you can get the two to play nicely; you could create a custom role provider which looks at your data store or look at you could look at ADAM, an extension to AD which provides roles on a per program basis and comes with nice management tools. A: using System.Security.Principal; ... WindowsPrincipal wp = (WindowsPrincipal)HttpContext.Current.User; to get the current domain user. Of course you have to make sure that the IIS is set up to handle Windows Authentication. A: This might be helpful: WindowsIdentity myIdentity = WindowsIdentity.GetCurrent(); WindowsPrincipal myPrincipal = new WindowsPrincipal(myIdentity); string name = myPrincipal.Identity.Name; string authType = myPrincipal.Identity.AuthenticationType; string isAuth = myPrincipal.Identity.IsAuthenticated.ToString(); string identName = myIdentity.Name; string identType = myIdentity.AuthenticationType; string identIsAuth = myIdentity.IsAuthenticated.ToString(); string iSAnon = myIdentity.IsAnonymous.ToString(); string isG = myIdentity.IsGuest.ToString(); string isSys = myIdentity.IsSystem.ToString(); string token = myIdentity.Token.ToString(); Disclaimer: I got this from a technet article, but I can't find the link. A: I did pretty much exactly what you want to do a few years ago. Im trying to find some code for it, though it was at a previous job so that code is at home. I do remember though i used this article as my starting point. You set up the LDAP provider so you can actually run a check of the user vs the LDAP. One thing to make sure of if you try the LDAP approach. In the setting file where you set up the LDAP make sure LDAP is all caps, if it is not it will not resolve. A: You can use System.Threading.Thread.CurrentPrincipal. 
A: Request.ServerVariables["REMOTE_USER"] This is unverified for your setup, but I recall using this a while back. A: Try Request.ServerVariables("LOGON_USER"). If the directory security options are set so that this directory does not allow anonymous users, when the surfer hits this page they will be prompted with the standard modal dialog asking for username and password. Request.ServerVariables("LOGON_USER") will return that user. However, this will probably not work for you because you are using your own custom security objects. If you can figure out how to get around that logon box, or pass in NT credentials to the site before it asks for them, then you would be all set. A: Have you thought about impersonation? You could store the user's NT logon credentials in your custom security object, and then just impersonate the user via code when appropriate. http://msdn.microsoft.com/en-us/library/aa292118(VS.71).aspx
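Putting the pieces above together for the original goal (Windows authentication handled by IIS, authorization against your own User table with a custom principal), a rough sketch of the glue code in Global.asax.cs might look like this. AppUser, UserRepository and CustomPrincipal are stand-ins for whatever your application already has, and the site still needs Windows authentication enabled (and anonymous access disabled) in IIS, plus authentication mode="Windows" in web.config.

using System;
using System.Security.Principal;
using System.Web;

// ---- placeholders standing in for the application's existing types ----
public class AppUser { public string DomainAccount; public string[] Roles; }

public static class UserRepository
{
    // Placeholder: look the DOMAIN\username up in your own User table.
    public static AppUser FindByDomainAccount(string domainAccount) { return null; }
}

public class CustomPrincipal : GenericPrincipal
{
    public CustomPrincipal(AppUser user)
        : base(new GenericIdentity(user.DomainAccount), user.Roles) { }
}
// ------------------------------------------------------------------------

public class Global : HttpApplication
{
    protected void Application_AuthenticateRequest(object sender, EventArgs e)
    {
        if (Request.IsAuthenticated)
        {
            // With integrated Windows authentication this is "DOMAIN\username".
            string domainAccount = User.Identity.Name;

            AppUser appUser = UserRepository.FindByDomainAccount(domainAccount);
            if (appUser != null)
                Context.User = new CustomPrincipal(appUser);
            // else: fall through to the app's usual "unknown user" handling.
        }
    }
}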
{ "language": "en", "url": "https://stackoverflow.com/questions/54050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Tool to view the contents of the Solution User Options file (.suo) Are there any free tools available to view the contents of the solution user options file (the .suo file that accompanies solution files)? I know it's basically formatted as a file system within the file, but I'd like to be able to view the contents so that I can figure out which aspects of my solution and customizations are causing it grow very large over time. A: I'm not aware of a tool, but you could write a Visual Studio extension to list the contents without too much work. If you download the Visual Studio SDK, it has some straightforward examples that you can use. Find one that looks appropriate (like maybe the Toolwindow, if you want to give yourself a graphical display) and lift it (for your own personal use, of course). What makes it easy is that the Package class which you implement in any VS extension, already implements the IVSPersistSolutionOpts, as aku mentioned. So you can just call the ReadUserOptions method on your package and inspect the contents. A: A bit late for the original poster, but maybe useful to others. Two freeware viewers for structured storage files (including .suo-files): https://github.com/ironfede/openmcdf (old URL: http://sourceforge.net/projects/openmcdf/) http://www.mitec.cz/ssv.html (free for non-commercial use) When you open a .suo file in one of these viewers, you will see streams related to: * *Bookmarks *Debugger watches *Unloaded projects *Outlining *Task-list user tasks *Debugger exceptions *Debugger Breakpoints *Debugger find source data *Open document windows And much more... A: The .SUO file is effectively disposable. If it's getting too large, just delete it. Visual Studio will create a fresh one. If you do want to go poking around in it, it looks like an OLE Compound Document File. You should be able to use the StgOpenStorage function to get hold of an IStorage pointer. A: I don't know any tool, but you can try to access user settings via IVsPersistSolutionOpts interface A: You can use the built in tool that comes with OpenMCDF, which is called Structured Storage Explorer. It doesn't allow you to see all the details, but allows you to see all the individual settings and their sizes. In order to see the actual settings, you need to format the bytes as UTF-16. Reference: https://github.com/ParticularLabs/SetStartupProjects A: I created an open source dotnet global tool for this: dotnet install --global suo suo view <path-to-suo-file> More information at https://github.com/drewnoakes/suo
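To make the "what is bloating my .suo" investigation concrete, a small sketch using the OpenMCDF library mentioned above can list every stream and its size; whichever streams dominate point at the feature (breakpoints, open documents, outlining, and so on) responsible for the growth. The API names below are recalled from OpenMCDF's samples and may differ between versions, so treat this as a starting point rather than verified code.

using System;
using OpenMcdf;   // OpenMCDF compound-file library (NuGet); API assumed from its samples

class SuoStreamSizes
{
    static void Main(string[] args)
    {
        string suoPath = args.Length > 0 ? args[0] : @"C:\src\MySolution.suo";  // placeholder path

        CompoundFile cf = new CompoundFile(suoPath);
        try
        {
            // Walk every storage/stream in the compound document and print stream sizes.
            cf.RootStorage.VisitEntries(item =>
            {
                if (item.IsStream)
                    Console.WriteLine("{0,10:N0} bytes  {1}", item.Size, item.Name);
            }, true /* recurse into sub-storages */);
        }
        finally
        {
            cf.Close();
        }
    }
}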
{ "language": "en", "url": "https://stackoverflow.com/questions/54052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27" }
Q: Efficiently selecting a set of random elements from a linked list Say I have a linked list of numbers of length N. N is very large and I don’t know in advance the exact value of N. How can I most efficiently write a function that will return k completely random numbers from the list? A: There's a very nice and efficient algorithm for this using a method called reservoir sampling. Let me start by giving you its history: Knuth calls this Algorithm R on p. 144 of his 1997 edition of Seminumerical Algorithms (volume 2 of The Art of Computer Programming), and provides some code for it there. Knuth attributes the algorithm to Alan G. Waterman. Despite a lengthy search, I haven't been able to find Waterman's original document, if it exists, which may be why you'll most often see Knuth quoted as the source of this algorithm. McLeod and Bellhouse, 1983 (1) provide a more thorough discussion than Knuth as well as the first published proof (that I'm aware of) that the algorithm works. Vitter 1985 (2) reviews Algorithm R and then presents an additional three algorithms which provide the same output, but with a twist. Rather than making a choice to include or skip each incoming element, his algorithm predetermines the number of incoming elements to be skipped. In his tests (which, admittedly, are out of date now) this decreased execution time dramatically by avoiding random number generation and comparisons on each in-coming number. In pseudocode the algorithm is: Let R be the result array of size s Let I be an input queue > Fill the reservoir array for j in the range [1,s]: R[j]=I.pop() elements_seen=s while I is not empty: elements_seen+=1 j=random(1,elements_seen) > This is inclusive if j<=s: R[j]=I.pop() else: I.pop() Note that I've specifically written the code to avoid specifying the size of the input. That's one of the cool properties of this algorithm: you can run it without needing to know the size of the input beforehand and it still assures you that each element you encounter has an equal probability of ending up in R (that is, there is no bias). Furthermore, R contains a fair and representative sample of the elements the algorithm has considered at all times. This means you can use this as an online algorithm. Why does this work? McLeod and Bellhouse (1983) provide a proof using the mathematics of combinations. It's pretty, but it would be a bit difficult to reconstruct it here. Therefore, I've generated an alternative proof which is easier to explain. We proceed via proof by induction. Say we want to generate a set of s elements and that we have already seen n>s elements. Let's assume that our current s elements have already each been chosen with probability s/n. By the definition of the algorithm, we choose element n+1 with probability s/(n+1). Each element already part of our result set has a probability 1/s of being replaced. The probability that an element from the n-seen result set is replaced in the n+1-seen result set is therefore (1/s)*s/(n+1)=1/(n+1). Conversely, the probability that an element is not replaced is 1-1/(n+1)=n/(n+1). Thus, the n+1-seen result set contains an element either if it was part of the n-seen result set and was not replaced---this probability is (s/n)*n/(n+1)=s/(n+1)---or if the element was chosen---with probability s/(n+1). The definition of the algorithm tells us that the first s elements are automatically included as the first n=s members of the result set. 
Therefore, the n-seen result set includes each element with s/n (=1) probability giving us the necessary base case for the induction. References * *McLeod, A. Ian, and David R. Bellhouse. "A convenient algorithm for drawing a simple random sample." Journal of the Royal Statistical Society. Series C (Applied Statistics) 32.2 (1983): 182-184. (Link) *Vitter, Jeffrey S. "Random sampling with a reservoir." ACM Transactions on Mathematical Software (TOMS) 11.1 (1985): 37-57. (Link) A: This is called a Reservoir Sampling problem. The simple solution is to assign a random number to each element of the list as you see it, then keep the top (or bottom) k elements as ordered by the random number. A: I would suggest: First find your k random numbers. Sort them. Then traverse both the linked list and your random numbers once. If you somehow don't know the length of your linked list (how?), then you could grab the first k into an array, then for node r, generate a random number in [0, r), and if that is less than k, replace the rth item of the array. (Not entirely convinced that doesn't bias...) Other than that: "If I were you, I wouldn't be starting from here." Are you sure linked list is right for your problem? Is there not a better data structure, such as a good old flat array list. A: If you don't know the length of the list, then you will have to traverse it complete to ensure random picks. The method I've used in this case is the one described by Tom Hawtin (54070). While traversing the list you keep k elements that form your random selection to that point. (Initially you just add the first k elements you encounter.) Then, with probability k/i, you replace a random element from your selection with the ith element of the list (i.e. the element you are at, at that moment). It's easy to show that this gives a random selection. After seeing m elements (m > k), we have that each of the first m elements of the list are part of you random selection with a probability k/m. That this initially holds is trivial. Then for each element m+1, you put it in your selection (replacing a random element) with probability k/(m+1). You now need to show that all other elements also have probability k/(m+1) of being selected. We have that the probability is k/m * (k/(m+1)*(1-1/k) + (1-k/(m+1))) (i.e. probability that element was in the list times the probability that it is still there). With calculus you can straightforwardly show that this is equal to k/(m+1). A: Well, you do need to know what N is at runtime at least, even if this involves doing an extra pass over the list to count them. The simplest algorithm to do this is to just pick a random number in N and remove that item, repeated k times. Or, if it is permissible to return repeat numbers, don't remove the item. Unless you have a VERY large N, and very stringent performance requirements, this algorithm runs with O(N*k) complexity, which should be acceptable. Edit: Nevermind, Tom Hawtin's method is way better. Select the random numbers first, then traverse the list once. Same theoretical complexity, I think, but much better expected runtime. A: Why can't you just do something like List GetKRandomFromList(List input, int k) List ret = new List(); for(i=0;i<k;i++) ret.Add(input[Math.Rand(0,input.Length)]); return ret; I'm sure that you don't mean something that simple so can you specify further?
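To make the reservoir-sampling answer at the top of this thread concrete, here is a small self-contained sketch in Python (the function and variable names are my own, not taken from any answer above). Because it consumes the input as a stream, it never needs to know N in advance, which is exactly the property the question asks for:

    import random

    def reservoir_sample(iterable, k):
        """Return k items chosen uniformly at random from an iterable of unknown length.

        Implements Knuth's Algorithm R: keep the first k items, then replace a
        random slot with probability k/i for the i-th item seen.
        """
        reservoir = []
        for i, item in enumerate(iterable, start=1):
            if i <= k:
                reservoir.append(item)       # fill the reservoir with the first k items
            else:
                j = random.randint(1, i)     # inclusive on both ends
                if j <= k:
                    reservoir[j - 1] = item  # replace a uniformly chosen slot
        if len(reservoir) < k:
            raise ValueError("input had fewer than k elements")
        return reservoir

    # Example: sample 5 numbers from a stream of 10,000 without knowing its length up front.
    print(reservoir_sample((n * n for n in range(10000)), 5))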
{ "language": "en", "url": "https://stackoverflow.com/questions/54059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: Any restrictions on development in Vista I'm looking at a new computer which will probably have vista on it. But there are so many editions of vista; are there any weird restrictions on what you can run on the various editions? For instance you couldn't run IIS on Windows ME. Can you still run IIS on the home editions of vista? A: You can't run Aero on the 'basic' editions, and there are some 'extras' that only run in Ultimate. You probably won't care about those for development, though. The only thing to be careful of would be that it has the same client access restrictions that XP did. A: Vista Home Basic only has enough IIS features to host WCF services and does not have any of web server features for hosting static files, asp.net, etc. Here is a link to compare editions. I would recommend going with Home Premium or Ultimate depending on whether the computer will run on a domain. A: Get Home Premium unless you need to connect to a domain controller (if you don't know what that is, you don't need it).
{ "language": "en", "url": "https://stackoverflow.com/questions/54068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to send MMS with C# I need to send MMS through a C# application. I have already found 2 interesting components: http://www.winwap.com http://www.nowsms.com Does anyone have experience with other third-party components? Could someone explain what kind of server I need to send those MMS? Is it a classic SMTP server? A: Typically I have always done this using a 3rd party aggregator. The messages are compiled into SMIL, which is the description language for MMS messages. These are then sent on to the aggregator, who will then send them through the MMS gateway of the network operator. They are typically charged on a per-message basis and the aggregators will buy the messages in a block from the operators. If you are trying to send an MMS message without getting charged then I am not sure how to do this, or if it is possible. A: You could do it yourself. Some MMS companies just have a SOAP API that you can call. All you need to do is construct the XML and send it off via a URL. I have done this once before, but can't remember the name of the company I used. A: This post earlier discussed different approaches for SMS and might be helpful for you. A: You could use Twilio to accomplish this. You can dive into the docs for specific implementation details, but using the C# helper library the code to send an MMS would look like this: // Send a new outgoing MMS by POSTing to the Messages resource client.SendMessage( "YYY-YYY-YYYY", // From number, must be an SMS-enabled Twilio number person.Key, // To number, if using Sandbox see note above // message content string.Format("Hey {0}, Monkey Party at 6PM. Bring Bananas!", person.Value), // media url of the image new string[] {"https://demo.twilio.com/owl.png" } ); Disclaimer: I work for Twilio.
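To illustrate the "construct the XML and send it off via a URL" suggestion above, here is a rough C# sketch. The gateway URL and the XML element names are entirely invented for illustration — every provider (NowSMS, Winwap, or an aggregator) defines its own request format — so treat this only as the shape of the code, not a working payload:

    using System.Net;
    using System.Text;

    class MmsGatewayClient
    {
        // Hypothetical endpoint and payload layout -- substitute your provider's real values.
        private const string GatewayUrl = "https://example-mms-gateway.invalid/api/send";

        public static string SendMms(string user, string password, string to, string subject, string imageUrl)
        {
            // Build the request document the provider expects (element names are made up here).
            string xml = string.Format(
                "<mms><auth user=\"{0}\" password=\"{1}\"/><to>{2}</to><subject>{3}</subject><media url=\"{4}\"/></mms>",
                user, password, to, subject, imageUrl);

            using (var client = new WebClient())
            {
                client.Headers[HttpRequestHeader.ContentType] = "text/xml";
                byte[] response = client.UploadData(GatewayUrl, "POST", Encoding.UTF8.GetBytes(xml));
                return Encoding.UTF8.GetString(response); // provider-specific status document
            }
        }
    }

The aggregator's documentation will tell you the real endpoint, authentication scheme, and whether the media goes in as a URL, a multipart upload, or base64 content.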
{ "language": "en", "url": "https://stackoverflow.com/questions/54092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Secure session cookies in ASP.NET over HTTPS I got a little curious after reading this /. article over hijacking HTTPS cookies. I tracked it down a bit, and a good resource I stumbled across lists a few ways to secure cookies here. Must I use adsutil, or will setting requireSSL in the httpCookies section of web.config cover session cookies in addition to all others (covered here)? Is there anything else I should be considering to harden sessions further? A: https://www.isecpartners.com/media/12009/web-session-management.pdf A 19 page white paper on "Secure Session Management with Cookies for Web Applications" They cover lots of security issues that I haven't seen all in one spot before. It's worth a read. A: The web.config setting to control this goes inside the System.Web element and looks like: <httpCookies httpOnlyCookies="true" requireSSL="true" />
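A slightly fuller sketch of the related settings, assuming you are also using Forms authentication (the login page name and timeout are placeholders — adjust them to your application):

    <system.web>
      <!-- Mark cookies HttpOnly (no script access) and refuse to send them over plain HTTP -->
      <httpCookies httpOnlyCookies="true" requireSSL="true" />

      <!-- If you use Forms authentication, require SSL for the auth ticket cookie as well -->
      <authentication mode="Forms">
        <forms loginUrl="Login.aspx" requireSSL="true" slidingExpiration="true" timeout="20" />
      </authentication>
    </system.web>

With requireSSL set, the cookies are issued with the Secure flag, so the browser will never send them over an unencrypted connection.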
{ "language": "en", "url": "https://stackoverflow.com/questions/54096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Is there any way to enable code completion for Perl in vim? Surprisingly, as you get good at vim, you can code even faster than in standard IDEs such as Eclipse. But one thing I really miss is code completion, especially for long variable names and functions. Is there any way to enable code completion for Perl in vim? A: Vim 7 supports omni completion. For example, I have this in my vimrc autocmd FileType php set omnifunc=phpcomplete#CompletePHP and then, when I press Ctrl-X Ctrl-O in Insert mode, I get a dropdown list of autocomplete possibilities. Here's an omnifunc for Perl. No idea how well it works though. A: Well, Vim's generic completion mechanism is surprisingly good, just using Ctrl-N in insert mode. Also, line completion is very handy, using C-x C-l. Also check out this vim script for Perl. A: The standard Ctrl+N and Ctrl+P work even better if you add the following to your ~/.vim/ftplugin/perl.vim file: set iskeyword+=: Then it will autocomplete module names, etc. A: The .vimrc clip in one of the other answers is slightly wrong. To turn your tab key into an auto-complete key, use this code: inoremap <tab> <c-r>=InsertTabWrapper()<cr> function! InsertTabWrapper() let col = col('.') - 1 if !col || getline('.')[col - 1] !~ '\k' return "\<tab>" else return "\<c-p>" endif endfunction You can find this, and tons of other vim tricks, in this thread at PerlMonks--which links to even more threads with lots more customizations. A: Ctrl-P (Get Previous Match) and Ctrl-N (Get Next Match) are a kind of pseudo code completion. They basically search the file you are editing (and any open buffers, and if you are using TAGS anything in your TAG file) for words that start with what you are typing, and show them in a drop-down list (backwards for Ctrl-P, forwards for Ctrl-N). It works surprisingly well for variables and function names, even if it isn't IntelliSense. Generally I use Ctrl-P, as the variable or function I am looking for is usually earlier in the code. Also, if you keep the same copy of Vim open, it will search the files you have previously opened. A: Ctrl+N This is explained in the Perl Hacks book, along with how to do Package completion. Highly recommended. A: You should look at the SuperTab plugin: http://www.vim.org/scripts/script.php?script_id=1643 It lets you do completion (either the OmniCompletion or the regular completion) using tab and shift-tab instead of ^N and ^P. A: https://github.com/c9s/perlomni.vim
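Putting the pieces from these answers together, a hedged ~/.vimrc sketch for Perl buffers. Note that PerlComplete is the omni function I believe the perlomni.vim plugin linked in the last answer registers — check the plugin's documentation for the exact name it uses before copying this:

    " Mirrors the PHP example above, for Perl buffers.
    " PerlComplete is assumed to come from the perlomni.vim plugin.
    autocmd FileType perl setlocal omnifunc=PerlComplete

    " Let Ctrl-N / Ctrl-P complete fully-qualified package names like Foo::Bar
    autocmd FileType perl setlocal iskeyword+=:

With this in place, Ctrl-X Ctrl-O triggers omni completion and the plain Ctrl-N/Ctrl-P keyword completion picks up package-qualified names.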
{ "language": "en", "url": "https://stackoverflow.com/questions/54104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: What is the best way to populate a menu control on a Master Page? Database? Page variables? Enum? I'm looking for opinions here. A: The ASP.NET Sitemap feature is built for that and works well in a lot of cases. If you get in a spot where you want your Menu to look different from your Sitemap, here are some workarounds. If you have a dynamic site structure, you can create a custom sitemap provider. You might get to the point where it's more trouble than it's worth, but in general populating your menu from your sitemap gives you some nice features like security trimming, in which the menu options are appropriate for the logged-in user. A: That's an interesting question, there are lots of ways to approach it. You could load the menu structure from XML, that's the way the built-in ASP.NET navigation controls/"sitemap" setup works. This is probably a good choice overall, and there is reasonably good tooling for it in Visual Studio. If it's a dynamic menu that needs to change a lot, getting the items from a database could be a good idea, but you would definitely want to cache them, so the DB doesn't get hit on every page render. A: I've created a site using the ASP.NET Login Controls and Forms Authentication for membership/credentials for an ASP.NET web application. And I'm using a site map for site navigation. I have ASP.NET TreeView and Menu navigation controls populated using a SiteMapDataSource. But off-limits administrator-only pages are visible to non-administrator users. * *I created a web.sitemap site map file. And I used the ASP.NET Web Site Administration Tool to set up access rules. *I added navigation controls on my .master page… <asp:SiteMapPath ID="SiteMapPath1" runat="server" /> <asp:Menu ID="Menu1" runat="server" DataSourceID="SiteMapDataSource2" /> <asp:TreeView ID="TreeView1" runat="server" DataSourceID="SiteMapDataSource1" /> <asp:SiteMapDataSource ID="SiteMapDataSource1" runat="server" /> <asp:SiteMapDataSource ID="SiteMapDataSource2" runat="server" ShowStartingNode="False" /> *I set securityTrimmingEnabled to "true" in my web.config file… <?xml version="1.0"?> <configuration> ... <system.web> ... <siteMap defaultProvider="default"> <providers> <clear/> <add name="default" type="System.Web.XmlSiteMapProvider" siteMapFile="web.sitemap" securityTrimmingEnabled="true"/> </providers> </siteMap> ... </system.web> ... </configuration> *I adjusted the tree in the master.vb code behind file… Protected Sub TreeView1_DataBound( ByVal sender As Object, ByVal e As EventArgs ) Handles TreeView1.DataBound 'Collapse unnecessary menu items... If TreeView1.SelectedNode IsNot Nothing Then Dim n As TreeNode = TreeView1.SelectedNode TreeView1.CollapseAll() n.Expand() Do Until n.Parent Is Nothing n = n.Parent n.Expand() Loop Else TreeView1.ExpandAll() End If End Sub A: IF the menu is dynamic per-user then you'll have to hit the database for each user. From then on I would probably store it in session to avoid future round-trips to the database. If it's dynamic, but the entire site sees the same items, then put it in the database and cache the results A: Binding to a Sitemap is certainly the easiest. A: It depends entirely on how the site works. I'm in agreement with most that a sitemap is usually the best way to do it. However, if you're using a CMS, then you might need to keep it in the database. If you have a taxonomy-centric site, then use the taxonomy to build the menu. There's no "best way" to do navigation, only the best way for a given situation. A: We've got a similar feature. 
The application menu is loaded on the master page from the database, because the visible menu options depend on the user's permissions. A couple of conventions and a clever structure in the database ensure that the menu-loading code is generic and automagically navigates to the proper screen when a menu option is selected. We use UIP to navigate and ComponentArt for web controls. BTW ComponentArt sucks. Then again I suppose all third-party control libraries do. A: Efficient access is a primary concern from a user's perspective. A generic approach worth suggesting is dictionary lookup, which also fits well for large and nested menu structures. The user navigates by clicks or unique keypresses; additionally, the arrow keys advance (right) or go back (left), with up/down working as usual. I'd suggest populating the menus on request (except the initial one) and providing a JavaScript action whenever a final element is selected.
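For reference, here is a minimal web.sitemap of the kind the SiteMapDataSource-based answers above bind to. The page names and the Administrators role are placeholders; with securityTrimmingEnabled="true" the Admin node only appears for users in that role:

    <?xml version="1.0" encoding="utf-8" ?>
    <siteMap xmlns="http://schemas.microsoft.com/AspNet/SiteMap-File-1.0">
      <siteMapNode url="~/Default.aspx" title="Home" description="Home page">
        <siteMapNode url="~/Reports.aspx" title="Reports" description="Reporting pages" />
        <siteMapNode url="~/Admin/Default.aspx" title="Admin" description="Administration"
                     roles="Administrators" />
      </siteMapNode>
    </siteMap>

The Menu, TreeView, and SiteMapPath controls then all render from this one file, so the navigation stays consistent across the site.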
{ "language": "en", "url": "https://stackoverflow.com/questions/54118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Using jQuery to beautify someone else's html I have a third-party app that creates HTML-based reports that I need to display. I have some control over how they look, but in general it's pretty primitive. I can inject some javascript, though. I'd like to try to inject some jQuery goodness into it to tidy it up some. One specific thing I would like to do is to take a table (an actual HTML <table>) that always contains one row and a variable number of columns and magically convert that into a tabbed view where the contents (always one <div> that I can supply an ID if necessary) of each original table cell represents a sheet in the tabbed view. I haven't found any good (read: simple) examples of re-parenting items like this, so I'm not sure where to begin. Can someone provide some hints on how I might try this? A: Given a html page like this: <body><br/> <table id="my-table">`<br/> <tr><br/> <td><div>This is the contents of Column One</div></td><br/> <td><div>This is the contents of Column Two</div></td><br/> <td><div>This is the contents of Column Three</div></td><br/> <td><div>Contents of Column Four blah blah</div></td><br/> <td><div>Column Five is here</div></td><br/> </tr><br/> </table><br/> </body><br/> the following jQuery code converts the table cells into tabs (tested in FF 3 and IE 7) $(document).ready(function() { var tabCounter = 1; $("#my-table").after("<div id='tab-container' class='flora'><ul id='tab-list'></ul></div>"); $("#my-table div").appendTo("#tab-container").each(function() { var id = "fragment-" + tabCounter; $(this).attr("id", id); $("#tab-list").append("<li><span><a href='#" + id + "'>Tab " + tabCounter + "</a></span></li>"); tabCounter++; }); $("#tab-container > ul").tabs(); }); To get this to work I referenced the following jQuery files * *jquery-latest.js *ui.core.js *ui.tabs.js And I referenced the flora.all.css stylesheet. Basically I copied the header section from the jQuery tab example A: You could do this with jQuery but it may make additional maintenance a nightmare. I would recommend against doing this / screen scraping because if the source ever changes so does your work around. A: This certainly sounds possible. With a combination of jQuery.append and jQuery.fadeIn and fadeOut you should be able to create a nice little tabbed control. See the JQuery UI/Tabs for a simple way to create a set of tabs based on a <ul> element and a set of <div>'s: http://docs.jquery.com/UI/Tabs A: I would also suggest injecting an additional stylesheet and use that to show/hide and style ugly elements. A: Sounds like you are not interested in a HTML cleanup as HTML Tidy does, but in an interactive enhancement of static HTML components (e.g. turning a static table in a tabbed interface). Some replies here already gave you hints, e.g. using Jquery Tabs, but i don't like the HTML rewrite approach in their answers. IMHO its better to extract the content of the table cells you want with a JQuery selector, like: var mycontent = $('table tr[:first-child]').find('td[:first-child]').html() then you can feed this data to the JQuery UI plugin by programmatically creating the tabs: $('body').append($('<div></div>').attr('id','mytabs')); $('#mytabs').tabs({}); //specify tab preferences here $('#mytabs').tabs('add',mycontent); A: Beautifying HTML is not such a simple process, because every line break between tags is a text node and arbitrary beautification creates and removes text nodes in manner that could be harmful to the document structure and likely harmful to the content. 
I recommend using a program that has already thought all these conditions through, such as http://prettydiff.com/?m=beautify
{ "language": "en", "url": "https://stackoverflow.com/questions/54138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How does the Comma Operator work How does the comma operator work in C++? For instance, if I do: a = b, c; Does a end up equaling b or c? (Yes, I know this is easy to test - just documenting on here for someone to find the answer quickly.) Update: This question has exposed a nuance when using the comma operator. Just to document this: a = b, c; // a is set to the value of b! a = (b, c); // a is set to the value of c! This question was actually inspired by a typo in code. What was intended to be a = b; c = d; Turned into a = b, // <- Note comma typo! c = d; A: The comma operator: * *has the lowest precedence *is left-associative A default version of comma operator is defined for all types (built-in and custom), and it works as follows - given exprA , exprB: * *exprA is evaluated *the result of exprA is ignored *exprB is evaluated *the result of exprB is returned as the result of the whole expression With most operators, the compiler is allowed to choose the order of execution and it is even required to skip the execution whatsoever if it does not affect the final result (e.g. false && foo() will skip the call to foo). This is however not the case for comma operator and the above steps will always happen*. In practice, the default comma operator works almost the same way as a semicolon. The difference is that two expressions separated by a semicolon form two separate statements, while comma-separation keeps all as a single expression. This is why comma operator is sometimes used in the following scenarios: * *C syntax requires an single expression, not a statement. e.g. in if( HERE ) *C syntax requires a single statement, not more, e.g. in the initialization of the for loop for ( HERE ; ; ) *When you want to skip curly braces and keep a single statement: if (foo) HERE ; (please don't do that, it's really ugly!) When a statement is not an expression, semicolon cannot be replaced by a comma. For example these are disallowed: * *(foo, if (foo) bar) (if is not an expression) *int x, int y (variable declaration is not an expression) In your case we have: * *a=b, c;, equivalent to a=b; c;, assuming that a is of type that does not overload the comma operator. *a = b, c = d; equivalent to a=b; c=d;, assuming that a is of type that does not overload the comma operator. Do note that not every comma is actually a comma operator. Some commas which have a completely different meaning: * *int a, b; --- variable declaration list is comma separated, but these are not comma operators *int a=5, b=3; --- this is also a comma separated variable declaration list *foo(x,y) --- comma-separated argument list. In fact, x and y can be evaluated in any order! *FOO(x,y) --- comma-separated macro argument list *foo<a,b> --- comma-separated template argument list *int foo(int a, int b) --- comma-separated parameter list *Foo::Foo() : a(5), b(3) {} --- comma-separated initializer list in a class constructor * This is not entirely true if you apply optimizations. If the compiler recognizes that certain piece of code has absolutely no impact on the rest, it will remove the unnecessary statements. Further reading: http://en.wikipedia.org/wiki/Comma_operator A: b's value will be assigned to a. Nothing will happen to c A: It would be equal to b. The comma operator has a lower precedence than assignment. A: The value of a will be b, but the value of the expression will be c. That is, in d = (a = b, c); a would be equal to b, and d would be equal to c. 
A: Yes, the comma operator has lower precedence than the assignment operator. #include<stdio.h> int main() { int i; i = (1,2,3); printf("i:%d\n",i); return 0; } Output: i=3 because the comma operator always returns the rightmost value. In the case of the comma operator with the assignment operator: int main() { int i; i = 1,2,3; printf("i:%d\n",i); return 0; } Output: i=1 As we know, the comma operator has lower precedence than assignment. A: The value of a will be equal to b, since the comma operator has a lower precedence than the assignment operator. A: Take care to notice that the comma operator may be overloaded in C++. The actual behaviour may thus be very different from the one expected. As an example, Boost.Spirit uses the comma operator quite cleverly to implement list initializers for symbol tables. Thus, it makes the following syntax possible and meaningful: keywords = "and", "or", "not", "xor"; Notice that due to operator precedence, the code is (intentionally!) identical to (((keywords = "and"), "or"), "not"), "xor"; That is, the first operator called is keywords.operator =("and"), which returns a proxy object on which the remaining operator,s are invoked: keywords.operator =("and").operator ,("or").operator ,("not").operator ,("xor"); A: The comma operator has the lowest precedence of all C/C++ operators. Therefore it's always the last one to bind to an expression, meaning this: a = b, c; is equivalent to: (a = b), c; Another interesting fact is that the comma operator introduces a sequence point. This means that the expression: a+b, c(), d is guaranteed to have its three subexpressions (a+b, c() and d) evaluated in order. This is significant if they have side effects. Normally compilers are allowed to evaluate subexpressions in whatever order they see fit; for example, in a function call: someFunc(arg1, arg2, arg3) arguments can be evaluated in an arbitrary order. Note that the commas in the function call are not operators; they are separators. A: First things first: the comma is not always an operator; for the compiler it is just a token which gets a meaning in context with other tokens. What does this mean and why bother? Example 1: To understand the difference between the meaning of the same token in different contexts, take a look at this example: class Example { Foo<int, char*> ContentA; } Usually a C++ beginner would think that this expression could/would compare things, but that is absolutely wrong; the meaning of the <, > and , tokens depends on the context of use. The correct interpretation of the example above is of course that it is an instantiation of a template. Example 2: When we write a typical for loop with more than one initialisation variable and/or more than one expression that should be evaluated after each iteration of the loop, we use the comma too: for(a=5,b=0;a<42;a++,b--) ... The meaning of the comma depends on the context of use; here it is the context of the for construction. What does a comma in context actually mean? To complicate it even more (as always in C++) the comma operator can itself be overloaded (thanks to Konrad Rudolph for pointing that out). To come back to the question, the code a = b, c; means for the compiler something like (a = b), c; because the priority of the = token/operator is higher than the priority of the , token, and this is interpreted in context like a = b; c; (note that the interpretation depends on context; here it is neither a function/method call nor a template instantiation.)
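A minimal, compilable illustration of the two lines from the question's update — nothing here beyond standard C++:

    #include <iostream>

    int main()
    {
        int a = 0, b = 1, c = 2;   // these commas are separators, not the comma operator

        a = b, c;                  // parsed as (a = b), c;  -> a == 1, the value of c is discarded
        std::cout << a << '\n';    // prints 1

        a = (b, c);                // comma operator: b is evaluated and ignored, a gets c
        std::cout << a << '\n';    // prints 2

        return 0;
    }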
{ "language": "en", "url": "https://stackoverflow.com/questions/54142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "193" }
Q: How do I insert a character at the caret with javascript? I want to insert some special characters at the caret inside textboxes using javascript on a button. How can this be done? The script needs to find the active textbox and insert the character at the caret in that textbox. The script also needs to work in IE and Firefox. EDIT: It is also ok to insert the character "last" in the previously active textbox. A: I think Jason Cohen is incorrect. The caret position is preserved when focus is lost. [Edit: Added code for FireFox that I didn't have originally.] [Edit: Added code to determine the most recent active text box.] First, you can use each text box's onBlur event to set a variable to "this" so you always know the most recent active text box. Then, there's an IE way to get the cursor position that also works in Opera, and an easier way in Firefox. In IE the basic concept is to use the document.selection object and put some text into the selection. Then, using indexOf, you can get the position of the text you added. In FireFox, there's a method called selectionStart that will give you the cursor position. Once you have the cursor position, you overwrite the whole text.value with text before the cursor position + the text you want to insert + the text after the cursor position Here is an example with separate links for IE and FireFox. You can use you favorite browser detection method to figure out which code to run. <html><head></head><body> <script language="JavaScript"> <!-- var lasttext; function doinsert_ie() { var oldtext = lasttext.value; var marker = "##MARKER##"; lasttext.focus(); var sel = document.selection.createRange(); sel.text = marker; var tmptext = lasttext.value; var curpos = tmptext.indexOf(marker); pretext = oldtext.substring(0,curpos); posttest = oldtext.substring(curpos,oldtext.length); lasttext.value = pretext + "|" + posttest; } function doinsert_ff() { var oldtext = lasttext.value; var curpos = lasttext.selectionStart; pretext = oldtext.substring(0,curpos); posttest = oldtext.substring(curpos,oldtext.length); lasttext.value = pretext + "|" + posttest; } --> </script> <form name="testform"> <input type="text" name="testtext1" onBlur="lasttext=this;"> <input type="text" name="testtext2" onBlur="lasttext=this;"> <input type="text" name="testtext3" onBlur="lasttext=this;"> </form> <a href="#" onClick="doinsert_ie();">Insert IE</a> <br> <a href="#" onClick="doinsert_ff();">Insert FF</a> </body></html> This will also work with textareas. I don't know how to reposition the cursor so it stays at the insertion point. A: In light of your update: var inputs = document.getElementsByTagName('input'); var lastTextBox = null; for(var i = 0; i < inputs.length; i++) { if(inputs[i].getAttribute('type') == 'text') { inputs[i].onfocus = function() { lastTextBox = this; } } } var button = document.getElementById("YOURBUTTONID"); button.onclick = function() { lastTextBox.value += 'PUTYOURTEXTHERE'; } A: Note that if the user pushes a button, focus on the textbox will be lost and there will be no caret position! A: loop over all you input fields... finding the one that has focus.. then once you have your text area... you should be able to do something like... myTextArea.value = 'text to insert in the text area goes here'; A: I'm not sure if you can capture the caret position, but if you can, you can avoid Jason Cohen's concern by capturing the location (in relation to the string) using the text box's onblur event. 
A: A butchered version of @bmb's code in the previous answer works well to reposition the cursor at the end of the inserted characters too: var lasttext; function doinsert_ie() { var ttInsert = "bla"; lasttext.focus(); var sel = document.selection.createRange(); sel.text = ttInsert; sel.select(); } function doinsert_ff() { var oldtext = lasttext.value; var curposS = lasttext.selectionStart; var curposF = lasttext.selectionEnd; var pretext = oldtext.substring(0,curposS); var posttext = oldtext.substring(curposF,oldtext.length); var ttInsert='bla'; lasttext.value = pretext + ttInsert + posttext; lasttext.selectionStart=curposS+ttInsert.length; lasttext.selectionEnd=curposS+ttInsert.length; }
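For completeness, in browsers that expose selectionStart/selectionEnd (the Firefox branch above, and any modern browser) the insert-and-reposition logic collapses into one small helper. This is only a sketch; the document.selection branch above is still needed for old versions of IE:

    // Insert text at the caret of the given input/textarea and move the caret
    // past the inserted text. Relies on selectionStart/selectionEnd.
    function insertAtCaret(field, text) {
        var start = field.selectionStart;
        var end = field.selectionEnd;
        var old = field.value;

        field.value = old.substring(0, start) + text + old.substring(end);
        field.selectionStart = field.selectionEnd = start + text.length;
        field.focus();
    }

    // Usage with the "last focused" pattern from the answers above:
    // insertAtCaret(lasttext, "|");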
{ "language": "en", "url": "https://stackoverflow.com/questions/54147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: What do you think will be the level of usage of Silverlight 1 year from now? There is a lot of buzz about Microsoft Silverlight, especially after the Olympics. Also H264 will be supported in a future version. Where do you think Silverlight will be 1 year from now? A: They were saying they were getting 1.5 million downloads per day back in March 2008, and that was before the Olympics and the Democratic National Convention. So, unless my math is off, that's more than 4 people. I'd expect to see it show up as a recommended Windows update, and possible included with IE8 or something in the future. A: A year from now, the number of people with the runtime installed will still be a fairly small minority! I suspect that choosing Silverlight will still be a barrier to people using your stuff for a long while to come. A: Most .NET developers I work with have been shying away from Silverlight. Right now it seems more like a novelty than a development platform. A: In a year it will still be a minority of content, but the installed base will be large enough that mainstream projects will be considering it as a viable alternative to Flash. Until they survey the pool of available, talented designers familiar with it. A: At best, in the same place at Flash. Now, how many of you do Flash enterprise applications? Does Google do flash applications? or SalesForce.com? Oracle? or any other major on demand application provider? In my opinion, even if it kills off Flash, it will still be largely irrelevant for the types of applications we write everyday. A: 100% more. (so about 4 people) A: Considering NBC has already dropped Silverlight and are using Flash again for NFL telecasts, I don't see a healthy future for Microsoft's platform. Do they even have any other partners using it? I know WWE was one of their partners but they barely use it on their own website. EDIT - not sure if it's true or not but this guy says that the decision to go with Flash was the NFL's and not NBC's. Either way still doesn't look good for the MS platform. A: I think that as long as the Moonlight project is successful that we'll see Silverlight become significant competition for Flash. Silverlight is still in its infancy - 1.0 had next to nothing in it. Version 2 is in beta now, and that adds lots of common user controls that developers need to write applications. A: They really got a huge bump with the Olympics as far as getting it installed on machines. It will be interesting to see how much developer buy in they can gather. It's a tough sell for front end web people because it's a complete toolset change. I know the midteir/WPF people like it because it's closer to their normal .NET toolset, but they're not usually the ones doing web design. IMHO, things like HTML5 and Gears are where many people are going to go. A: I think that it will grow, but MSFT will need to do more deals like they did with the Olympics. Hooking up with CBS/NCAA on the March Madness broadcasts would be worth whatever millions they could throw at it. A: Silverlight 1 Vs Silverlight 2: Silverlight 2 is expected to be out in the next few months (they used to say in August 2008 until ... August ended. In September they say October.), so MS will probably be promoting Silverlight 2.1 (or whatever upgrade to Silerlight 2) in a year's time, and Silverlight 1.0 will likely have no developer share at all, and no momentum. 
Silverlight Vs Javascript-based platforms: Google Chrome (and the upcoming Firefox 2.1) promise an order of magnitude better performance in JavaScript. We haven't seen the best from them yet. MS will have to improve IE's JavaScript speeds, though who knows when they'll be able to ship that (in IE 9 maybe?). I think that it will be a few more years yet before the clear winners emerge from the fray. A: The installation barrier will be a problem until it ships with Windows by default. But even then developers will only support the established Flash. Considering certain mobile platforms have neither Flash nor Silverlight, it's best to back the one more likely to be ported to all platforms, and that's the dominant Flash. In the end Javascript + SVG will almost certainly win out over these vendor produced solutions. But within a year I'd be surprised if any significant amount of development is done with Silverlight. Flash has too much momentum and MS is too late to the game with nothing sufficiently compelling.
{ "language": "en", "url": "https://stackoverflow.com/questions/54169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What are the advantages/disadvantages of using a CTE? I'm looking at improving the performance of some SQL, currently CTEs are being used and referenced multiple times in the script. Would I get improvements using a table variable instead? (Can't use a temporary table as the code is within functions). A: You'll really have to performance test - There is no Yes/No answer. As per Andy Living's post above links to, a CTE is just shorthand for a query or subquery. If you are calling it twice or more in the same function, you might get better performance if you fill a table variable and then join to/select from that. However, as table variables take up space somewhere, and don't have indexes/statistics (With the exception of any declared primary key on the table variable) there's no way of saying which will be faster. They both have costs and savings, and which is the best way depends on the data they pull in and what they do with it. I've been in your situation, and after testing for speed under various conditions - Some functions used CTEs, and others used table variables. A: A CTE is not much more than syntactic sugar. It enhances the readability and allows to avoid repetition. Just think of it as a placeholder for the actual statement specified in the WITH()-clause. The engine will replace any occurance of the CTE's name in your query with this statement (quite similar to a view). This is the meaning of inline. Compared to a previously filled table (delared or created) You'll find advantages: * *useable in ad-hoc-queries (functions, views) *no unexpected side effects (most narrow scope) ...and disadvantages: * *You cannot use the CTE's result in different statements *You cannot use indexes, statistics to optimize your CTE's set (although it will implicitly use existing indexes and statistics of the targeted objects - if appropriate). In terms of performance a persisted set (declared or created table) can be (much!) better in some cases, but it forces you into procedural code. You will have to race your horses to find out which is better... Example: Various approaches to do the same The following simple (rather useless) example describes a set of user tables together with their columns. 
I use various different approaches to tell SQL-Server what I want: Try this with "include actual execution plan" USE master; --in my case the master database has just 5 "user tables", you can use any other DB of course GO --simple join, first the small set joining to the large set SELECT o.name AS TableName ,c.name AS ColumnName FROM sys.objects o INNER JOIN sys.columns c ON c.object_id=o.object_id WHERE o.type='U'; GO --simple join "the other way round" with the filter as part of the ON-clause SELECT o.name AS TableName ,c.name AS ColumnName FROM sys.columns c INNER JOIN sys.objects o ON c.object_id=o.object_id AND o.type='U'; GO --join from the large set with a sub-query to the small set SELECT o.name AS TableName ,c.name AS ColumnName FROM sys.columns c INNER JOIN ( SELECT o.* FROM sys.objects o WHERE o.type='U' --user tables ) o ON c.object_id=o.object_id; GO --join for large to small with a row-wise APPLY SELECT o.name AS TableName ,c.name AS ColumnName FROM sys.columns c CROSS APPLY ( SELECT o.* FROM sys.objects o WHERE o.type='U' --user tables AND o.object_id=c.object_id ) o; GO --use a CTE to "pre-filter" the small set WITH cte AS ( SELECT o.* FROM sys.objects o WHERE o.type='U' --user tables ) SELECT cte.name AS TableName ,c.name AS ColumnName FROM sys.columns c INNER JOIN cte ON c.object_id=cte.object_id; GO Now look at the result and at the execution plans: * *All queries return the same result. *All queries produce the same execution plan Important hint: This might differ on your machine! Why is this? T-SQL is a declarative language. Your statement is a description of WHAT you want to retrieve. It is not your job to tell the engine HOW this is done. SQL-Server's extremely smart engine will find the best way to get the set you asked for. In the case above all result descriptions point to the same goal. The engine can deduce this from various statements and finds the same plan for all of them. Well, is it just a matter of taste? In a way... There are some important things to keep in mind: * *There is no reason for the engine to compute the CTE's result before the rest (although the statement might look so). Therefore it is wrong to describe a CTE as something like a temp table... *In other words: The visible order of your statement does not predict the actual order of execution! *The smart engine will reach its limits with complexity and nest level. Imagine various VIEWs, all using CTEs and calling each-other... *There are cases where the engine really f**s up. I remember a case where a CTE did not much more than a TRY_CAST. The idea was to ensure valid values in the query below. But the engine thought "Oh, just a CAST, not expensiv!" and included the acutal CAST to the execution plan on a higher position. I remember another case where the engine performed an expensive operation against millions of rows (unnecessarily, the final result was filtered to a tiny set), just because the actual order of execution was not as expected. Okay... So when should I use a CTE? The following points are good reasons to use a CTE: * *A CTE can help you to avoid repeated sub queries. *A CTE can be used multiple times within your statement, e.g. within a JOIN with a dynamic behavior depending on the actual row-count. *You can use multiple CTEs within one statement and you can use the result of one CTE within a later CTE. *There are recursive (or better iterative) CTEs. *Sometimes I used single-row-CTEs to define / pre-compute variables later used in the query. 
Things you would do with declared variables in procedural T-SQL. You can use A CROSS JOIN to get them into your query easily. *and also very nice: the updatable CTE allows for very easy-to-read statements, same applies for DELETE. As above: Nothing one could not do without the CTE, but it is far better to read (I really like speaking names). Final hints Well, there are cases, where ugly code performs better :-) It is always good to have clean and readable code. A CTE will help you with this. So give it a try. If the performance is bad, get into depth, look at the execution plans and try to find a reason where the engine might decide wrong. In most cases it is a bad idea trying to outsmart the engine with hints such as FORCE ORDER (but in can help) UPDATE I was asked to point to advantages and disadvantages specifically: Uhm, technically there are no real advantages or disadvantages. Disregarding recursive CTEs there's nothing one couldn't solve without a CTE. Advantages The main advantage is readability and maintainabilty. Sometimes a CTE can save hundreds of lines of code. Instead of a repeating a huge sub-query one can use just a name as a variable. Corrections to the sub-query can be solved just in one place. The CTE can serve in ad-hoc queries and make your life easier. Disadvantages One possible disadvantage is that it's very easy, even for experienced developers, to mistake a CTE as a temp table, assume that the visible order of steps will be the same as the acutal order of execution and stumble into unexpected results or even errors. And - of course :-) - the strange wrong syntax error you'll see when you write a CTE after another statement without a separating ;. That's why many people tend to use ;WITH. A: Probably not. CTE's are especially good at querying data for tree structures. A: The information and quotes are from the following article on mssqltips.com "Choose Between SQL Server Subquery T-SQL Code" by Eric Blinn. https://www.mssqltips.com/sqlservertip/6618/sql-server-query-performance-cte-view-subquery-temp-table-table-variable/ SQL Server 2019 CTEs, subqueries, and views The SQL Server [2019] engine optimizes every query that is given to it. When it encounters a CTE, traditional subquery, or view, it sees them all the same way and optimizes them the same way. This involves looking at the underlying tables, considering their statistics, and choosing the best way to proceed. In most cases they will return the same plan and therefore perform exactly the same. TempDB table For the query that inserted rows into the temporary table, the optimizer looked at the table statistics and chose the best way forward. It actually made new table statistics for the temporary table and then used them to run the second. This brings about very similar performance. Table variable The table variable has poor performance in the example given in the article due to lack of table statistics. ...the table variable does not have any table statistics generated for it like the TempDB table did. This means the optimizer has to make a wild guess as to how to proceed. In this example it made a very, very poor decision. This is not to write off table variables. They surely have their place as will be discussed later in the tip. Temp table vs Table variable A temporary table will be stored on disk and have statistics calculated on it and a table variable will not. 
Because of this difference temporary tables are best when the expected row count is >100 and the table variable for smaller expected row counts where the lack of statistics will be less likely to lead to a bad query plan. A: Advantages of CTE CTE can be termed as 'Temporary View' used as a good alternative for a View in some cases. The main advantage over a view is usage of memory. As CTE's scope is limited only to its batch, the memory allocated for it is flushed as soon as its batch is crossed. But once a view is created, it is stored until user drops it. If the view is not used after creation then it's a mere waste of memory. CPU cost for CTE execution is lesser when compared to that of View. Like View, CTE doesn't store any metadata of its definition and provides better readability. A CTE can be referred for multiple times in a query. As the scope is limited to the batch, multiple CTEs can have the same name which a view cannot have. It can be made recursive. Disadvantages of CTE Though using CTE is advantageous, it does have some limitations to be kept in mind, We knew that it is a substitute for a view but a CTE cannot be nested while Views can be nested. View once declared can be used for any number of times but CTE cannot be used. It should be declared every time you want to use it. For this scenario, CTE is not recommended to use as it is a tiring job for user to declare the batches again and again. Between the anchor members there should be operators like UNION, UNION ALL or EXCEPT etc. In Recursive CTEs, you can define many Anchor Members and Recursive Members but all the Anchor Members must be defined before the first Recursive Member. You cannot define an Anchor Member between two Recursive Member. The number of columns, the data types used in Anchor and Recursive Members should be same. In Recursive Member, aggregate functions like TOP, operator like DISTINCT, clause like HAVING and GROUP BY, Sub-queries, joins like Left Outer or Right Outer or Full Outer are not allowed. Regarding Joins, only Inner Join is allowed in Recursive Member. Recursion Limit is 32767, crossing which results in the crash of server due to infinite loop.
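Since several answers mention recursive CTEs without showing one, here is a small self-contained T-SQL example; the table and column names are invented purely for illustration:

    -- A tiny org chart: each row points at its manager.
    DECLARE @Employee TABLE (EmployeeId int, ManagerId int NULL, Name nvarchar(50));
    INSERT INTO @Employee VALUES
        (1, NULL, N'Alice'), (2, 1, N'Bob'), (3, 1, N'Carol'), (4, 2, N'Dave');

    WITH OrgChart AS
    (
        -- Anchor member: the root(s) of the hierarchy
        SELECT EmployeeId, ManagerId, Name, 0 AS Depth
        FROM @Employee
        WHERE ManagerId IS NULL

        UNION ALL

        -- Recursive member: join each level back to the CTE itself
        SELECT e.EmployeeId, e.ManagerId, e.Name, oc.Depth + 1
        FROM @Employee e
        INNER JOIN OrgChart oc ON e.ManagerId = oc.EmployeeId
    )
    SELECT Name, Depth
    FROM OrgChart
    ORDER BY Depth, Name
    OPTION (MAXRECURSION 100);  -- the default limit is 100 levels; 0 removes the limit

The anchor member seeds the result with the root rows, and the recursive member keeps joining back to the CTE until no new rows are produced (or the MAXRECURSION limit is hit).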
{ "language": "en", "url": "https://stackoverflow.com/questions/54176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What should be considered when building a Recommendation Engine? I've read the book Programming Collective Intelligence and found it fascinating. I'd recently heard about a challenge amazon had posted to the world to come up with a better recommendation engine for their system. The winner apparently produced the best algorithm by limiting the amount of information that was being fed to it. As a first rule of thumb I guess... "More information is not necessarily better when it comes to fuzzy algorithms." I know's it's subjective, but ultimately it's a measurable thing (clicks in response to recommendations). Since most of us are dealing with the web these days and search can be considered a form of recommendation... I suspect I'm not the only one who'd appreciate other peoples ideas on this. In a nutshell, "What is the best way to build a recommendation ?" A: You don't want to use "overall popularity" unless you have no information about the user. Instead, you want to align this user with similar users and weight accordingly. This is exactly what Bayesian Inference does. In English, it means adjusting the overall probability you'll like something (the average rating) with ratings from other people who generally vote your way as well. Another piece of advice, but this time ad hoc: I find that there are people where if they like something I will almost assuredly not like it. I don't know if this effect is real or imagined, but it might be fun to build in a kind of "negative effect" instead of just clumping people by similarity. Finally there's a company specializing in exactly this called SenseArray. The owner (Ian Clarke of freenet fame) is very approachable. You can use my name if you call him up. A: There is an entire research area in computer science devoted to this subject. I'd suggest reading some articles. A: Agree with @Ricardo. This question is too broad, like asking "What's the best way to optimize a system?" One common feature to nearly all existing recommendation engines is that making the final recommendation boils down to multiplying some number of matrices and vectors. For example multiply a matrix containing proximity weights between users by a vector of item ratings. (Of course you have to be ready for most of your vectors to be super sparse!) My answer is surely too late for @Allain but for other users finding this question through search -- send me a PM and ask a more specific question and I will be sure to respond. (I design recommendation engines professionally.) A: @Lao Tzu, I agree with you. According to me, recommendation engines are made up of: * *Context Input fed from context aware systems (logging all your data) *Logical reasoning to filter the most obvious *Expert systems that improve your subjective data over the period of time based on context inputs, and *Probabilistic reasoning to do decision-making close-to-proximity based on weighted sum of previous actions(beliefs, desires, & intentions). P.S. I made such recommendation engine.
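To make the "multiply a matrix of proximity weights by a vector of item ratings" remark above concrete, here is a toy Python/NumPy sketch of user-based collaborative filtering. All numbers and names are made up, and a real engine would add smoothing, bias terms, and much sparser data structures:

    import numpy as np

    # 3 users x 4 items; 0 means "not rated yet". All values are invented.
    ratings = np.array([
        [5, 3, 0, 1],
        [4, 0, 0, 1],
        [1, 1, 5, 4],
    ], dtype=float)

    def cosine(u, v):
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        return 0.0 if denom == 0 else float(np.dot(u, v) / denom)

    # Similarity between the target user (row 0) and every other user.
    target = 0
    weights = np.array([cosine(ratings[target], ratings[u]) for u in range(ratings.shape[0])])
    weights[target] = 0.0  # don't weight the target user against themselves

    # Predicted score per item = similarity-weighted average of the other users' ratings.
    rated = (ratings > 0).astype(float)
    scores = weights @ ratings / np.maximum(weights @ rated, 1e-9)

    # Only suggest items the target user hasn't rated yet.
    unseen = ratings[target] == 0
    print("predicted scores for unseen items:", scores[unseen])

Replacing the cosine weights with a Bayesian update, or adding the "negative effect" idea from the answer above, only changes how the weights vector is built; the final step stays a weighted matrix-vector product.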
{ "language": "en", "url": "https://stackoverflow.com/questions/54179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Best tool to monitor network connection bandwidth I'm looking for a very simple tool to monitor the bandwidth of all my applications. No need for extra features like traffic spying, I'm just interested by bandwidth. I already know Wireshark (which is great), but what I'm looking for is more something like TcpView (great tool from Sysinternals) with current bandwidth indication. PS: I'm interested by Windows tools only A: Try NetLimiter, which is great for that and also allows you to limit bandwidth usage so that you can test your app in reduced bandwidth scenarios.
{ "language": "en", "url": "https://stackoverflow.com/questions/54184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Are C++ Reads and Writes of an int Atomic? I have two threads, one updating an int and one reading it. This is a statistic value where the order of the reads and writes is irrelevant. My question is, do I need to synchronize access to this multi-byte value anyway? Or, put another way, can part of the write be complete and get interrupted, and then the read happen. For example, think of a value = 0x0000FFFF that gets incremented value of 0x00010000. Is there a time where the value looks like 0x0001FFFF that I should be worried about? Certainly the larger the type, the more possible something like this to happen. I've always synchronized these types of accesses, but was curious what the community thinks. A: Yes, you need to synchronize accesses. In C++0x it will be a data race, and undefined behaviour. With POSIX threads it's already undefined behaviour. In practice, you might get bad values if the data type is larger than the native word size. Also, another thread might never see the value written due to optimizations moving the read and/or write. A: Boy, what a question. The answer to which is: Yes, no, hmmm, well, it depends It all comes down to the architecture of the system. On an IA32 a correctly aligned address will be an atomic operation. Unaligned writes might be atomic, it depends on the caching system in use. If the memory lies within a single L1 cache line then it is atomic, otherwise it's not. The width of the bus between the CPU and RAM can affect the atomic nature: a correctly aligned 16bit write on an 8086 was atomic whereas the same write on an 8088 wasn't because the 8088 only had an 8 bit bus whereas the 8086 had a 16 bit bus. Also, if you're using C/C++ don't forget to mark the shared value as volatile, otherwise the optimiser will think the variable is never updated in one of your threads. A: At first one might think that reads and writes of the native machine size are atomic but there are a number of issues to deal with including cache coherency between processors/cores. Use atomic operations like Interlocked* on Windows and the equivalent on Linux. C++0x will have an "atomic" template to wrap these in a nice and cross-platform interface. For now if you are using a platform abstraction layer it may provide these functions. ACE does, see the class template ACE_Atomic_Op. A: You must synchronize, but on certain architectures there are efficient ways to do it. Best is to use subroutines (perhaps masked behind macros) so that you can conditionally replace implementations with platform-specific ones. The Linux kernel already has some of this code. A: On Windows, Interlocked***Exchange***Add is guaranteed to be atomic. A: To echo what everyone said upstairs, the language pre-C++0x cannot guarantee anything about shared memory access from multiple threads. Any guarantees would be up to the compiler. A: IF you're reading/writing 4-byte value AND it is DWORD-aligned in memory AND you're running on the I32 architecture, THEN reads and writes are atomic. A: No, they aren't (or at least you can't assume they are). Having said that, there are some tricks to do this atomically, but they typically aren't portable (see Compare-and-swap). A: I agree with many and especially Jason. On windows, one would likely use InterlockedAdd and its friends. A: Asside from the cache issue mentioned above... If you port the code to a processor with a smaller register size it will not be atomic anymore. IMO, threading issues are too thorny to risk it. 
A: Let's take this example: int x; x++; x=x+5; The first statement (x++) is assumed to be atomic because it translates to a single INC assembly instruction that takes a single CPU cycle. However, the second assignment requires several operations, so it's clearly not an atomic operation. Another example: x=5; Again, you have to disassemble the code to see what exactly happens here. A: tc, I think the moment you use a constant (like 6), the instruction wouldn't be completed in one machine cycle. Try comparing the instructions generated for x+=6 and x++. A: Some people think that ++c is atomic, but take a look at the assembly generated. For example, with 'gcc -S': movl cpt.1586(%rip), %eax addl $1, %eax movl %eax, cpt.1586(%rip) To increment an int, the compiler first loads it into a register, increments it, and stores it back into memory. This is not atomic. A: Definitely NO! That answer from our highest C++ authority, M. Boost: Operations on "ordinary" variables are not guaranteed to be atomic. A: The only portable way is to use the sig_atomic_t type defined in the signal.h header for your compiler. In most C and C++ implementations, that is an int. Then declare your variable as "volatile sig_atomic_t". A: Reads and writes are atomic, but you also need to worry about the compiler re-ordering your code. Compiler optimizations may violate the happens-before relationship of statements in your code. By using atomic you don't have to worry about that. ... atomic i; soap_status = GOT_RESPONSE; i = 1 In the above example, the variable 'i' will only be set to 1 after we get a soap response.
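For reference, the "atomic template" mentioned in one of the answers above shipped as std::atomic in C++11. A minimal sketch of the original two-thread counter scenario, assuming a C++11 compiler; this removes any guessing about alignment, bus width, or torn reads:

    #include <atomic>
    #include <thread>
    #include <iostream>

    std::atomic<int> counter(0);

    void writer()
    {
        for (int i = 0; i < 100000; ++i)
            counter.fetch_add(1, std::memory_order_relaxed);  // atomic increment
    }

    void reader()
    {
        int last = 0;
        while (last < 100000)
            last = counter.load(std::memory_order_relaxed);   // never sees a torn value
    }

    int main()
    {
        std::thread t1(writer), t2(reader);
        t1.join();
        t2.join();
        std::cout << counter.load() << '\n';  // prints 100000
    }

For a plain statistic where ordering doesn't matter, relaxed ordering is enough; use the default (sequentially consistent) operations if other data depends on the counter.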
{ "language": "en", "url": "https://stackoverflow.com/questions/54188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "91" }
Q: How to implement Repository pattern with LinqToEntities How to implement the Repository pattern with LinqToEntities, and how to implement the interface? A: I do the following: A service layer contains my business objects. It is passed the repository via an Inversion of Control container (Castle Windsor is my usual choice). The repository is in charge of mapping between the business objects and my Entity Framework objects. The advantages: You have no problems with object state or the context of the EF objects, because you are just loading them during data manipulation on the repository side. This eases the situation when passing them to WCF/web services. The disadvantages: You lose some of the tracking functionality of Entity Framework; you have to manually load the data objects (EF objects) and possibly, if required, do optimistic concurrency checks manually (via a timestamp on the business object, for example). But generally I prefer this solution, because it is possible to later change the repository. It allows me to have different repositories (for example, my user object actually uses the ASPNetAuthenticationRepository instead of the EntityFrameworkRepository) but for my service layer it's transparent. With regards to the interface, I would use the business objects from the service layer as your parameter objects and not let those EF objects out of the repository layer. Hope that helps. A: I've done it almost like this, except for the "Castle Windsor" stuff. Take a look at openticket.codeplex.com
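A bare-bones C# sketch of the layering described in the first answer. Every type name here (Customer, ShopEntities, ICustomerRepository) is invented for illustration, and error handling, change tracking, and concurrency checks are omitted:

    using System.Linq;

    // Business object used by the service layer -- no EF types leak out of the repository.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface ICustomerRepository
    {
        Customer GetById(int id);
        void Save(Customer customer);
    }

    public class EntityFrameworkCustomerRepository : ICustomerRepository
    {
        public Customer GetById(int id)
        {
            using (var context = new ShopEntities())   // generated ObjectContext (assumed name)
            {
                var entity = context.Customers.Where(c => c.CustomerId == id).First();
                return new Customer { Id = entity.CustomerId, Name = entity.Name };
            }
        }

        public void Save(Customer customer)
        {
            using (var context = new ShopEntities())
            {
                var entity = context.Customers.Where(c => c.CustomerId == customer.Id).First();
                entity.Name = customer.Name;            // copy changed fields back onto the entity
                context.SaveChanges();
            }
        }
    }

The service layer asks the IoC container (Castle Windsor in the answer above) for an ICustomerRepository and works only with Customer objects, so swapping in a different repository implementation never touches the business code.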
{ "language": "en", "url": "https://stackoverflow.com/questions/54199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Encrypting appSettings in web.config I am developing a web app which requires a username and password to be stored in the web.config; it also refers to some URLs which will be requested by the web app itself and never the client. I know the .Net framework will not allow a web.config file to be served, however I still think it's bad practice to leave this sort of information in plain text. Everything I have read so far requires me to use a command line switch or to store values in the registry of the server. I have access to neither of these as the host is online and I have only FTP and Control Panel (helm) access. Can anyone recommend any good, free encryption DLLs or methods which I can use? I'd rather not develop my own! Thanks for the feedback so far guys, but I am not able to issue commands and am not able to edit the registry. It's going to have to be an encryption util/helper, but just wondering which one! A: * *Encrypting and Decrypting Configuration Sections (ASP.NET) on MSDN *Encrypting Web.Config Values in ASP.NET 2.0 on ScottGu's blog *Encrypting Custom Configuration Sections on K. Scott Allen's blog EDIT: If you can't use the aspnet_regiis utility, you can encrypt the config file using the SectionInformation.ProtectSection method. Sample on codeproject: Encryption of Connection Strings inside the Web.config in ASP.Net 2.0 A: While at first glance it seems to be straightforward, there are a couple of hurdles I encountered. So I am providing steps that worked fine for me (to encrypt the appSettings section) using the default crypto provider: Encrypt sections in the web.config: * *Open an Admin command shell (run as administrator!). The command prompt will be on C:, which is assumed for the steps below. Further assumed is that the application is deployed on D:\Apps\myApp - replace this by the path you're using in step 3. *cd "C:\Windows\Microsoft.NET\Framework64\v4.0.30319", on 32 bit Windows systems use Framework instead of Framework64 *cd /D "D:\Apps\myApp" Note: The /D switch will change the drive automatically if it is different from your current drive. Here it will change the path and drive, so the current directory will be D:\Apps\myApp afterwards. *c:aspnet_regiis -pef appSettings . You should see this message: Microsoft (R) ASP.NET RegIIS version 4.0.30319.0 Administration utility to install and uninstall ASP.NET on the local machine. Copyright (C) Microsoft Corporation. All rights reserved. Encrypting configuration section... Succeeded! You can also decrypt sections in the web.config: These are the same steps, but with option -pdf instead of -pef for aspnet_regiis. It is also possible to encrypt other sections of your web.config, for example you can encrypt the connection strings section via: aspnet_regiis -pe "connectionStrings" -app "/SampleApplication" More details about that can be found here. Note: The encryption above is transparent to your web application, i.e. your web application doesn't recognize that the settings are encrypted. You can also choose to use non-transparent encryption, for example by using Microsoft's DPAPI or by using AES along with the Framework's AES class. How it is done with DPAPI I have described here at Stackoverflow. DPAPI works very similarly in the sense that it uses the machine's or user credential's keys. Generally, non-transparent encryption gives you more control, for instance you can add a SALT, or you can use a key based on a user's passphrase. If you want to know more about how to generate a key from a passphrase, look here. 
A: Use aspnet_setreg.exe http://support.microsoft.com/kb/329290 A: * *Publish your project *Open Developer Command Prompt as Administrator *use this command: aspnet_regiis -pef "appSettings" "C:\yourPublishPath" -prov "DataProtectionConfigurationProvider"
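Given the poster only has FTP and control-panel access, the SectionInformation.ProtectSection route mentioned in the first answer can be run from inside the application itself (for example from a one-off admin page). A minimal sketch follows; the class and method names are made up, the DPAPI provider is just one of the two built-in choices (RsaProtectedConfigurationProvider is the other), and the application account needs write access to web.config:

using System.Configuration;
using System.Web.Configuration;

public static class ConfigProtector
{
    public static void ProtectAppSettings()
    {
        // "~" means the root web.config of the current web application.
        Configuration config = WebConfigurationManager.OpenWebConfiguration("~");
        ConfigurationSection section = config.GetSection("appSettings");

        if (section != null && !section.SectionInformation.IsProtected)
        {
            // DPAPI provider: keys are machine-bound, so no key management is needed.
            section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
            config.Save();
        }
    }
}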
{ "language": "en", "url": "https://stackoverflow.com/questions/54200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Soap logging in .net I have an internal enterprise app that currently consumes 10 different web services. They're consumed via old style "Web References" instead of using WCF. The problem I'm having is trying to work with the other teams in the company who are authoring the services I'm consuming. I found I needed to capture the exact SOAP messages that I'm sending and receiving. I did this by creating a new attribute that extends SoapExtensionAttribute. I then just add that attribute to the service method in the generated Reference.cs file. This works, but is painful for two reasons. First, it's a generated file so anything I do in there can be overwritten. Second, I have to remember to remove the attribute before checking in the file. Is There a better way to capture the exact SOAP messages that I am sending and receiving? A: This seems to be a common question, as I just asked it and was told to look here. You don't have to edit the generated Reference.cs. You can reference the extension in your application's app.config. A: Is this a webapp? Place your SoapExtension code in a HTTPModule, and inject the SOAP envelope into the HTTPOutput stream. That way, when in debug mode, I picture something like a collapsible div on the top of the page that lists all SOAP communication for that page. A: I have a HTTPModule already built that does this, I'll strip out my company specific information and post the goodies later today. Also, check out SoapUI, its a handy tool. A: You can do this by creating a SoapExtention. Check this article. A: I used the following code is an example of how I captured SOAP requests in a application written a while back. <System.Diagnostics.Conditional("DEBUG")> _ Private Sub CheckHTTPRequest(ByVal functionName As String) Dim e As New UTF8Encoding() Dim bytes As Long = Me.Context.Request.InputStream.Length Dim stream(bytes) As Byte Me.Context.Request.InputStream.Seek(0, IO.SeekOrigin.Begin) Me.Context.Request.InputStream.Read(stream, 0, CInt(bytes)) Dim thishttpRequest As String = e.GetString(stream) My.Computer.FileSystem.WriteAllText("D:\SoapRequests\" & functionName & ".xml", thishttpRequest, False) End Sub Setting the conditional attribute like I did makes the compiler ignore the method call for all build types other than debug. Sorry for the VB, it is forced upon me.
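To expand on the app.config suggestion in the first answer, registering a SoapExtension in configuration (instead of attributing the generated Reference.cs) looks roughly like the snippet below. The type and assembly names are placeholders for your own logging extension; the priority/group values follow the usual MSDN SoapExtension sample:

<configuration>
  <system.web>
    <webServices>
      <soapExtensionTypes>
        <add type="MyCompany.Diagnostics.SoapLoggerExtension, MyCompany.Diagnostics"
             priority="1" group="0" />
      </soapExtensionTypes>
    </webServices>
  </system.web>
</configuration>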
{ "language": "en", "url": "https://stackoverflow.com/questions/54207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Ajax XMLHttpRequest object limit Is there a security limit to the number of Ajax XMLHttpRequest objects you can create on a single page? If so, does this vary from one browser to another? A: I don't think so, but there's a limit of two simultaneous HTTP connections per domain per client (you can override this in Firefox, but practically no one does so). A: I've found it easier to pool and reuse XMLHTTPRequest objects instead of creating new ones... A: Yes, as Kevin says, HTTP/1.1 specifications say "A single-user client should not maintain more than 2 connections with any server or proxy."
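As a rough sketch of the pooling/reuse suggestion above (plain JavaScript of the era, with the usual ActiveX fallback; the function names are made up):

var xhr = null;
function getXhr() {
    // Reuse a single object per page instead of allocating a new one for every call.
    if (xhr === null) {
        xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                    : new ActiveXObject("Microsoft.XMLHTTP");
    }
    return xhr;
}

function load(url, callback) {
    var req = getXhr();
    if (req.readyState !== 0 && req.readyState !== 4) {
        req.abort();   // drop any request still in flight before starting a new one
    }
    req.open("GET", url, true);
    req.onreadystatechange = function () {
        if (req.readyState === 4) {
            callback(req.responseText);
        }
    };
    req.send(null);
}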
{ "language": "en", "url": "https://stackoverflow.com/questions/54217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How should I handle a situation where I need to store several unrelated types but provide specific types on demand? I'm working on an editor for files that are used by an important internal testing tool we use. The tool itself is large, complicated, and refactoring or rewriting would take more resources than we are able to devote to it for the forseeable future, so my hands are tied when it comes to large modifications. I must use a .NET language. The files are XML serialized versions of four classes that are used by the tool (let's call them A, B, C, and D). The classes form a tree structure when all is well. Our editor works by loading a set of files, deserializing them, working out the relationships between them, and keeping track of any bad states it can find. The idea is for us to move away from hand-editing these files, which introduces tons of errors. For a particular type of error, I'd like to maintain a collection of all files that have the problem. All four classes can have the problem, and I'd like to reduce duplication of code as much as possible. An important requirement is the user needs to be able to get the items in sets; for example, they need to get all A objects with an error, and telling them to iterate over the whole collection and pick out what they want is unacceptable compared to a GetAs() method. So, my first thought was to make a generic item that related the deserialized object and some metadata to indicate the error: public class ErrorItem<T> { public T Item { get; set; } public Metadata Metadata { get; set; } } Then, I'd have a collection class that could hold all of the error items, with helper methods to extract the items of a specific class when the user needs them. This is where the trouble starts. None of the classes inherit from a common ancestor (other than Object). This was probably a mistake of the initial design, but I've spent a few days thinking about it and the classes really don't have much in common other than a GUID property that uniquely identifies each item so I can see why the original designer did not relate them through inheritance. This means that the unified error collection would need to store ErrorItem<Object> objects, since I don't have a base class or interface to restrict what comes in. However, this makes the idea of this unified collection a little sketchy to me: Public Class ErrorCollection { public ErrorItem<Object> AllItems { get; set; } } However, this has consequences on the public interface. What I really want is to return the appropriate ErrorItem generic type like this: public ErrorItem<A>[] GetA() This is impossible because I can only store ErrorItem<Object>! I've gone over some workarounds in my head; mostly they include creating a new ErrorItem of the appropriate type on-the-fly, but it just feels kind of ugly. Another thought has been using a Dictionary to keep items organized by type, but it still doesn't seem right. Is there some kind of pattern that might help me here? I know the easiest way to solve this is to add a base class that A, B, C, and D derive from, but I'm trying to have as small an impact on the original tool as possible. Is the cost of any workaround great enough that I should push to change the initial tool? A: Is this what you are looking for? 
private List<ErrorItem<object>> _allObjects = new List<ErrorItem<object>>(); public IEnumerable<ErrorItem<A>> ItemsOfA { get { foreach (ErrorItem<object> obj in _allObjects) { if (obj.Item is A) yield return new ErrorItem<A> { Item = (A)obj.Item, Metadata = obj.Metadata }; } } } If you want to cache the ItemsOfA you can easily do that: private List<ErrorItem<A>> _itemsOfA = null; public IEnumerable<ErrorItem<A>> ItemsOfACached { get { if (_itemsOfA == null) _itemsOfA = new List<ErrorItem<A>>(ItemsOfA); return _itemsOfA; } } A: The answer I'm going with so far is a combination of the answers from fryguybob and Mendelt Siebenga. Adding a base class would just pollute the namespace and introduce a similar problem, as Mendelt Siebenga pointed out. I would get more control over what items can go into the collection, but I'd still need to store ErrorItem<BaseClass> and still do some casting, so I'd have a slightly different problem with the same root cause. This is why I selected the post as the answer: it points out that I'm going to have to do some casts no matter what, and KISS would dictate that the extra base class and generics are too much. I like fryguybob's answer not for the solution itself but for reminding me about yield return, which will make a non-cached version easier to write (I was going to use LINQ). I think a cached version is a little bit more wise, though the expected performance parameters won't make the non-cached version noticeably slower. A: If A, B, C and D have nothing in common then adding a base class won't really get you anything. It will just be an empty class and in effect will be the same as object. I'd just create an ErrorItem class without the generics, make Item an object and do some casting when you want to use the objects referenced. If you want to use any of the properties or methods of the A, B, C or D class other than the Guid you would have had to cast them anyway.
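The question also floats the idea of a Dictionary keyed by type; a rough sketch of that alternative could look like the following (the class name ErrorCollectionByType and its members are invented for illustration, and it reuses the ErrorItem<T> class from the question):

using System;
using System.Collections.Generic;

public class ErrorCollectionByType
{
    private readonly Dictionary<Type, List<object>> _itemsByType = new Dictionary<Type, List<object>>();

    public void Add<T>(ErrorItem<T> item)
    {
        List<object> list;
        if (!_itemsByType.TryGetValue(typeof(T), out list))
        {
            list = new List<object>();
            _itemsByType[typeof(T)] = list;
        }
        list.Add(item);
    }

    public IEnumerable<ErrorItem<T>> GetItems<T>()
    {
        List<object> list;
        if (!_itemsByType.TryGetValue(typeof(T), out list))
            yield break;
        foreach (object o in list)
            yield return (ErrorItem<T>)o;
    }
}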
{ "language": "en", "url": "https://stackoverflow.com/questions/54219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How would I allow a user to stream video to a web application for storage? I'd like to add some functionality to a site that would allow users to record video using their webcam and easily store it online. I don't even know if this is possible right now, but I think Flash has access to local webcams running through the browser. Do you have any suggestions or resources to get me started on this? I'm primarily a Java developer, so if I could do it in an applet that would be great, but it may be easier to accomplish this using Flash or some other technology. This would mean streaming the video back to the webserver and storing it there. Uploading a file is easy enough, but I'd rather the user not have to deal with that if it's possible. Just to be clear, I'm not talking about uploading a video. I'm talking about allowing the user to click "record" in a web application and having the video streamed to the server and stored when the user clicks "stop". A: This is possible with Adobe's Flash Media Server. Red5 is an open source alternative. The protocol they use for this is RTMP. A: On the server side, you have three alternatives: * *Adobe's Flash Media Server *Red5, an open source version *Wowza Media Server Pro, a commercial alternative You can either run any of these on your own server or else go for a web provider (I found this list to be helpful) On the client side, you'll need a flash file for the user interface. Red5 and Wowza come with the source code for their example recorders so it's easy to customize these for your own use. A: In addition to the above choices, you can also use haxevideo (open source)
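For the client side, publishing the webcam to one of those RTMP servers in ActionScript 3 looks roughly like the sketch below; the connection URL, application name and stream name are placeholders, error handling is left out, and in a class-based project you would also need the corresponding flash.net/flash.media/flash.events imports:

var nc:NetConnection = new NetConnection();
nc.addEventListener(NetStatusEvent.NET_STATUS, onStatus);
nc.connect("rtmp://yourserver/recorder");   // placeholder server and application

function onStatus(e:NetStatusEvent):void {
    if (e.info.code == "NetConnection.Connect.Success") {
        var cam:Camera = Camera.getCamera();
        var ns:NetStream = new NetStream(nc);
        ns.attachCamera(cam);
        ns.publish("userClip", "record");   // server records the stream, e.g. as userClip.flv
        // call ns.close() when the user clicks "stop"
    }
}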
{ "language": "en", "url": "https://stackoverflow.com/questions/54221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Accessing System Databases/Tables using LINQ to SQL? Right now I have an SSIS package that runs every morning and gives me a report on the number of packages that failed or succeeded from the day before. The information for these packages is contained partly within the sysjobs table (a system table) within the msdb database (a system database) in SQL Server 2005. When trying to move the package to a C# executable (mostly to gain better formatting over the email that gets sent out), I wasn't able to find a way to create a dbml file that allowed me to access these tables through LINQ. I tried to look for any properties that would make these tables visible, but I haven't had much luck. Is this possible with LINQ to SQL? A: If you're in Server Explorer, you can make them visible this way: * *Create a connection to the server you want. *Right-click the server and choose Change View > Object Type. *You should now see System Tables and User Tables. You should see sysjobs there, and you can easily drag it onto a .dbml surface. A: It may not be available in the designer, but why not just add it to the DBML file itself?
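To illustrate the "add it to the DBML file itself" suggestion, a hand-written fragment for sysjobs might look roughly like this (it goes inside the Database element of the .dbml; only a few of the table's columns are shown, and the Member/Type names are arbitrary):

<Table Name="dbo.sysjobs" Member="SysJobs">
  <Type Name="SysJob">
    <Column Name="job_id" Type="System.Guid" DbType="UniqueIdentifier NOT NULL" IsPrimaryKey="true" CanBeNull="false" />
    <Column Name="name" Type="System.String" DbType="NVarChar(128) NOT NULL" CanBeNull="false" />
    <Column Name="enabled" Type="System.Byte" DbType="TinyInt NOT NULL" CanBeNull="false" />
  </Type>
</Table>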
{ "language": "en", "url": "https://stackoverflow.com/questions/54222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Asp.net path compaction I have an asp.net url path which is being generated in a web form, and is coming out something like "/foo/bar/../bar/path.aspx", and is coming out in the generated html like this too. It should be shortened to "/foo/bar/path.aspx". Path.Combine didn't fix it. Is there a function to clean this path up? A: You could create a helper class which wraps the UriBuilder class from the System namespace: public static class UriHelper { public static string NormalizeRelativePath(string path) { UriBuilder builder = new UriBuilder("http://localhost"); builder.Path = path; return builder.Uri.AbsolutePath; } } which could then be used like this: string url = "foo/bar/../bar/path.aspx"; Console.WriteLine(UriHelper.NormalizeRelativePath(url)); It is a bit hacky but it would work for the specific example you gave. EDIT: Updated to reflect Andrew's comments. A: Whatever you do, don't use a static UriBuilder. This introduces all sorts of potential race conditions that you might not detect until you are under heavy load. If two different threads called UriHelper.NormalizeRelativePath at the same time, the return value for one could be passed back to the other caller arbitrarily. If you want to use UriBuilder to do this, just create a new one when you need it (it's not expensive to create). A: Sarcastic's reply is so much better than mine, but if you were working with filesystem paths, my ugly hack below could turn out to be useful too. (Translation: I typed it, so I'll be damned if I don't post it :) Path.Combine just slaps two strings together, paying attention to leading or trailing slashes. As far as I know, the only Path method that does normalization is Path.GetFullPath. The following will give you the "cleaned up" version. myPath = System.IO.Path.GetFullPath(myPath); Of course, there is the small issue that the resulting path will be rooted and the forward slashes will be converted to back slashes (like "C:\foo\bar\path.aspx"). But if you know the parent root of the original path, stripping out the root should not be a big problem.
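As a rough sketch of that last idea - combining the relative path with a physical root you know, normalizing, and mapping back to a virtual path (the basePath value here is made up):

string basePath = @"C:\inetpub\wwwroot";
string full = System.IO.Path.GetFullPath(System.IO.Path.Combine(basePath, "foo/bar/../bar/path.aspx"));
string relative = "/" + full.Substring(basePath.Length + 1).Replace('\\', '/');
// relative is now "/foo/bar/path.aspx"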
{ "language": "en", "url": "https://stackoverflow.com/questions/54227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: CakePHP ACL Database Setup: ARO / ACO structure? I'm struggling to implement ACL in CakePHP. After reading the documentation in the cake manual as well as several other tutorials, blog posts etc, I found Aran Johnson's excellent tutorial which has helped fill in many of the gaps. His examples seem to conflict with others I've seen though in a few places - specifically in the ARO tree structure he uses. In his examples his user groups are set up as a cascading tree, with the most general user type being at the top of the tree, and its children branching off for each more restricted access type. Elsewhere I've usually seen each user type as a child of the same generic user type. How do you set up your AROs and ACOs in CakePHP? Any and all tips appreciated! A: CakePHP's built-in ACL system is really powerful, but poorly documented in terms of actual implementation details. A system that we've used with some success in a number of CakePHP-based projects is as follows. It's a modification of some group-level access systems that have been documented elsewhere. Our system's aims are to have a simple system where users are authorised on a group-level, but they can have specific additional rights on items that were created by them, or on a per-user basis. We wanted to avoid having to create a specific entry for each user (or, more specifically for each ARO) in the aros_acos table. We have a Users table, and a Roles table. Users user_id, user_name, role_id Roles id, role_name Create the ARO tree for each role (we usually have 4 roles - Unauthorised Guest (id 1), Authorised User (id 2), Site Moderator (id 3) and Administrator (id 4)) : cake acl create aro / Role.1 cake acl create aro 1 Role.2 ... etc ... After this, you have to use SQL or phpMyAdmin or similar to add aliases for all of these, as the cake command line tool doesn't do it. We use 'Role-{id}' and 'User-{id}' for all of ours. We then create a ROOT ACO - cake acl create aco / 'ROOT' and then create ACOs for all the controllers under this ROOT one: cake acl create aco 'ROOT' 'MyController' ... etc ... So far so normal. We add an additional field in the aros_acos table called _editown which we can use as an additional action in the ACL component's actionMap. CREATE TABLE IF NOT EXISTS `aros_acos` ( `id` int(11) NOT NULL auto_increment, `aro_id` int(11) default NULL, `aco_id` int(11) default NULL, `_create` int(11) NOT NULL default '0', `_read` int(11) NOT NULL default '0', `_update` int(11) NOT NULL default '0', `_delete` int(11) NOT NULL default '0', `_editown` int(11) NOT NULL default '0', PRIMARY KEY (`id`), KEY `acl` (`aro_id`,`aco_id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8; We can then setup the Auth component to use the 'crud' method, which validates the requested controller/action against an AclComponent::check(). In the app_controller we have something along the lines of: private function setupAuth() { if(isset($this->Auth)) { .... $this->Auth->authorize = 'crud'; $this->Auth->actionMap = array( 'index' => 'read', 'add' => 'create', 'edit' => 'update' 'editMine' => 'editown', 'view' => 'read' ... etc ... ); ... etc ... } } Again, this is fairly standard CakePHP stuff. We then have a checkAccess method in the AppController that adds in the group-level stuff to check whether to check a group ARO or a user ARO for access: private function checkAccess() { if(!$user = $this->Auth->user()) { $role_alias = 'Role-1'; $user_alias = null; } else { $role_alias = 'Role-' . $user['User']['role_id']; $user_alias = 'User-' . 
$user['User']['id']; } // do we have an aro for this user? if($user_alias && ($user_aro = $this->User->Aro->findByAlias($user_alias))) { $aro_alias = $user_alias; } else { $aro_alias = $role_alias; } if ('editown' == $this->Auth->actionMap[$this->action]) { if($this->Acl->check($aro_alias, $this->name, 'editown') and $this->isMine()) { $this->Auth->allow(); } else { $this->Auth->authorize = 'controller'; $this->Auth->deny('*'); } } else { // check this user-level aro for access if($this->Acl->check($aro_alias, $this->name, $this->Auth->actionMap[$this->action])) { $this->Auth->allow(); } else { $this->Auth->authorize = 'controller'; $this->Auth->deny('*'); } } } The setupAuth() and checkAccess() methods are called in the AppController's beforeFilter() callback. There's an isMine method in the AppControler too (see below) that just checks that the user_id of the requested item is the same as the currently authenticated user. I've left this out for clarity. That's really all there is to it. You can then allow / deny particular groups access to specific acos - cake acl grant 'Role-2' 'MyController' 'read' cake acl grant 'Role-2' 'MyController' 'editown' cake acl deny 'Role-2' 'MyController' 'update' cake acl deny 'Role-2' 'MyController' 'delete' I'm sure you get the picture. Anyway, this answer's way longer than I intended it to be, and it probably makes next to no sense, but I hope it's some help to you ... -- edit -- As requested, here's an edited (purely for clarity - there's a lot of stuff in our boilerplate code that's meaningless here) isMine() method that we have in our AppController. I've removed a lot of error checking stuff too, but this is the essence of it: function isMine($model=null, $id=null, $usermodel='User', $foreignkey='user_id') { if(empty($model)) { // default model is first item in $this->uses array $model = $this->uses[0]; } if(empty($id)) { if(!empty($this->passedArgs['id'])) { $id = $this->passedArgs['id']; } elseif(!empty($this->passedArgs[0])) { $id = $this->passedArgs[0]; } } if(is_array($id)) { foreach($id as $i) { if(!$this->_isMine($model, $i, $usermodel, $foreignkey)) { return false; } } return true; } return $this->_isMine($model, $id, $usermodel, $foreignkey); } function _isMine($model, $id, $usermodel='User', $foreignkey='user_id') { $user = Configure::read('curr.loggedinuser'); // this is set in the UsersController on successful login if(isset($this->$model)) { $model = $this->$model; } else { $model = ClassRegistry::init($model); } //read model if(!($record = $model->read(null, $id))) { return false; } //get foreign key if($usermodel == $model->alias) { if($record[$model->alias][$model->primaryKey] == $user['User']['id']) { return true; } } elseif($record[$model->alias][$foreignkey] == $user['User']['id']) { return true; } return false; }
{ "language": "en", "url": "https://stackoverflow.com/questions/54230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Is there a way to highlight the target of a bookmark? (www.site.com/page.htm#bookmark)? I want to link to bookmark on a page (mysite.com/mypage.htm#bookmark) AND visually highlight the item that was bookmarked (maybe having a red border). Naturally, there would be multiple items bookmarked. So that if someone clicked on #bookmark2 then that other area would be highlighted). I can see how to do that with .asp or .aspx but I'd like to do it more simply than that. I thought maybe there was a clever way to do it with CSS. WHY I'm interested: - I want to have our programs link to a shopping page that lists all the programs on it. I'm using a bookmark so they're jumping to the particular program area (site.com/shoppingpage#Programx) but just to make it obvious I'd like to actually highlight the page being linked to. A: You can also use the target pseudo-class in CSS: <html> <head> <style type="text/css"> div#test:target { background-color: yellow; } </style> </head> <body> <p><b><a href="#test">Link</a></b></p> <div id="test"> Target </div> </body> </html> Unfortunately the target pseudo class isn't supported by IE or Opera, so if you're looking for a universal solution here this might not be sufficient. A: Use your favorite JS toolkit to add a "highlight" (or whatever) class to the item containing (or contained in) the anchor. Something like: jQuery(location.hash).addClass('highlight'); Of course, you'd need to call that onready or click if you want it triggered by other links on the page, and you'll want to have the .highlight class defined. You could also make your jQuery selector traverse up or down depending on the container you want highlighted. A: In your css you need to define a.highlight {border:1px solid red;} or something similar Then using jQuery, $(document).ready ( function () { //Work as soon as the DOM is ready for parsing var id = location.hash.substr(1); //Get the word after the hash from the url if (id) $('#'+id).addClass('highlight'); // add class highlight to element whose id is the word after the hash }); To highlight the targets on mouse over also add: $("a[href^='#']") .mouseover(function() { var id = $(this).attr('href').substr(1); $('#'+id).addClass('highlight'); }) .mouseout(function() { var id = $(this).attr('href').substr(1); $('#'+id).removeClass('highlight'); }); A: I guess if you could store this information with JavaScript and cookies for the functionality of remembering the bookmarks and even add a splash of Ajax if you wanted to interact with a database. CSS would only be able to do styling. You would have to give the bookmarked anchor a class found in your CSS file. CSS also has the a:visited selector which is used for styling links found in the browser's history.
{ "language": "en", "url": "https://stackoverflow.com/questions/54237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: In Vim is there a way to delete without putting text in the register? Using Vim I often want to replace a block of code with a block that I just yanked. But when I delete the block of code that is to be replaced, that block itself goes into the register which erases the block I just yanked. So I've got in the habit of yanking, then inserting, then deleting what I didn't want, but with large blocks of code this gets messy trying to keep the inserted block and the block to delete separate. So what is the slickest and quickest way to replace text in Vim? * *is there a way to delete text without putting it into the register? *is there a way to say e.g. "replace next word" or "replace up to next paragraph" *or is the best way to somehow use the multi-register feature? A: VIM docs: Numbered register 0 contains the text from the most recent yank command, unless the command specified another register with ["x]. E.g. we yank "foo" and delete "bar" - the registry 0 still contains "foo"! Hence "foo" can be pasted using "0p A: Well, first do this command: :h d Then you will realize that you can delete into a specific register. That way you won't alter what is in your default register. A: My situation: in Normal mode, when I delete characters using the x keypress, those deletions overwrite my latest register -- complicating edits where I want to delete characters using x and paste something I have in my most recent register (e.g. pasting the same text at two or more places). I tried the suggestion in the accepted answer ("_d), but it did not work. However, from the accepted answer/comments at https://vi.stackexchange.com/questions/122/performing-certain-operations-without-clearing-register, I added this to my ~/.vimrc, which works (you may have to restart Vim): nnoremap x "_x That is, I can now do my normal yank (y), delete (d), and paste (p) commands -- and characters I delete using x no longer populate the most recent registers. A: Vim's occasional preference for complexity over practicality burdens the user with applying registers to copy/delete actions -- when more often than not, one just wants paste what was copied and "forget" what was deleted. However, instead of fighting vim's complicated registers, make them more convenient: choose a "default" register to store your latest delete. For example, send deletes to the d register (leaving a-c open for ad-hoc usage; d is a nice mnemonic): nnoremap d "dd "send latest delete to d register nnoremap D "dD "send latest delete to d register nnoremap dd "ddd "send latest delete to d register nnoremap x "_x "send char deletes to black hole, not worth saving nnoremap <leader>p "dp "paste what was deleted nnoremap <leader>P "dP "paste what was deleted This approach prevents deletes from clobbering yanks BUT doesn't forfeit registering the delete -- one can paste (back) what was deleted instead of losing it in a black hole (as in the accepted answer). In the example above, this recalling is done with two leader p mappings. A benefit of this approach, is that it gives you the ability to choose what to paste: (a) what was just yanked, or (b) what was just deleted. A: It's handy to have an easy mapping which lets you replace the current selection with buffer. For example when you put this in your .vimrc " it's a capital 'p' at the end vmap r "_dP then, after copying something into register (i.e. with 'y'), you can just select the text which you want to be replaced, and simply hit 'r' on your keyboard. 
The selection will be substituted with your current register. Explanation: vmap - mapping for visual mode "_d - delete current selection into "black hole register" P - paste A: To delete something without saving it in a register, you can use the "black hole register": "_d Of course you could also use any of the other registers that don't hold anything you are interested in. A: I put the following in my vimrc: noremap y "*y noremap Y "*Y noremap p "*p noremap P "*P vnoremap y "*y vnoremap Y "*Y vnoremap p "*p vnoremap P "*P Now I yank to and put from the clipboard register, and don't have to care what happens with the default register. An added benefit is that I can paste from other apps with minimal hassle. I'm losing some functionality, I know (I am no longer able to manage different copied information for the clipboard register and the default register), but I just can't keep track of more than one register/clipboard anyway. A: Text deleted, while in insert mode, doesn't go into default register. A: The two solutions I use in the right contexts are; * *highlight what you want to replace using Vims VISUAL mode then paste the register. I use this mostly out of habit as I only found the second solution much later, eg yiw " yank the whole word viwp " replace any word with the default register * *YankRing. With this plugin you can use the keybinding <ctrl>+<p> to replace the previous numbered register with the one you just pasted. Basically you go about pasting as you would, but when you realise that you have since overwritten the default register thus losing what you actually wanted to paste you can <C-P> to find and replace from the YankRing history! One of those must have plugins... A: I found a very useful mapping for your purpose: xnoremap p "_dP Deleted text is put in "black hole register", and the yanked text remains. Source: http://vim.wikia.com/wiki/Replace_a_word_with_yanked_text A: All yank and delete operations write to the unnamed register by default. However, the most recent yank and most recent delete are always stored (separately) in the numbered registers. The register 0 holds the most recent yank. The registers 1-9 hold the 9 most recent deletes (with 1 being the most recent). In other words, a delete overwrites the most recent yank in the unnamed register, but it's still there in the 0 register. The blackhole-register trick ("_dd) mentioned in the other answers works because it prevents overwriting the unnamed register, but it's not necessary. You reference a register using double quotes, so pasting the most recently yanked text can be done like this: "0p This is an excellent reference: * *http://blog.sanctum.geek.nz/advanced-vim-registers/ A: For the specific example that you gave, if I understand the question then this might work: *Highlight what you want to put somewhere else *delete (d) *Highlight the code that you want it to replace *paste (p) A: In the windows version (probably in Linux also), you can yank into the system's copy/paste buffer using "*y (i.e. preceding your yank command with double-quotes and asterisk). You can then delete the replaceable lines normally and paste the copied text using "*p. A: I often make a mistake when following the commands to 'y'ank then '"_d'elete into a black hole then 'p'aste. I prefer to 'y'ank, then delete however I like, then '"0p' from the 0 register, which is where the last copied text gets pushed to. 
A: For Dvorak users, one very convenient method is to just delete unneeded text into the "1 register instead of the "_ black hole register, if only because you can press " + 1 with the same shift press and a swift pinky motion since 1 is the key immediately above " in Dvorak (PLUS d is in the other hand, which makes the whole command fast as hell). Then of course, the "1 register could be used for other things because of it's convenience, but unless you have a purpose more common than replacing text I'd say it's a pretty good use of the register. A: For emacs-evil users, the function (evil-paste-pop) can be triggered after pasting, and cycles through previous entries in the kill-ring. Assuming you bound it to <C-p> (default in Spacemacs), you would hit p followed by as many <C-p> as needed. A: For those who use JetBrans IDE (PhpStorm, WebStorm, IntelliJ IDEA) with IdeaVim. You may be experiencing problems with remapping like nnoremap d "_d and using it to delete a line dd. Possible solution for you: nnoremap dd "_dd There are issues on youtrack, please vote for them: https://youtrack.jetbrains.com/issue/VIM-1189 https://youtrack.jetbrains.com/issue/VIM-1363 A: The following vscode setting should allow e.g. dd and dw to become "_dd and "_dw, which work correctly now with our remapper. { "vim.normalModeKeyBindingsNonRecursive": [ { "before": ["d"], "after": [ "\"", "_", "d" ] } ] } A: To emphasize what EBGreen said: If you paste while selecting text, the selected text is replaced with the pasted text. If you want to copy some text and then paste it in multiple locations, use "0p to paste. Numbered register 0 contains the text from the most recent yank command. Also, you can list the contents of all of your registers: :registers That command makes it easier to figure out what register you want when doing something like dbr's answer. You'll also see the /,%,# registers. (See also :help registers) And finally, check out cW and cW to change a word including and not including an trailing space. (Using capital W includes punctuation.) A: A minimal invasive solution for the lazy ones: Register 0 always contains the last yank (as Rafael, alex2k8 and idbrii have already mentioned). Unfortunately selecting register 0 all the time can be quite annoying, so it would be nice if p uses "0 by default. This can be achieved by putting the following lines into your .vimrc: noremap p "0p noremap P "0P for s:i in ['"','*','+','-','.',':','%','/','=','1','2','3','4','5','6','7','8','9','a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z'] execute 'noremap "'.s:i.'p "'.s:i.'p' execute 'noremap "'.s:i.'P "'.s:i.'P' endfor The first line maps each p stroke to "0p. However, this prevents p from accessing any other registers. Therefore all p strokes with an explicitly selected register are mapped to the equivalent commandline expression within the for-loop. The same is done for P. This way the standard behaviour is preserved, except for the implicit p and P strokes, which now use register 0 by default. Hint 1: The cut command is now "0d instead of just d. But since I'm lazy this is way too long for me ;) Therefore I'm using the following mapping: noremap <LEADER>d "0d noremap <LEADER>D "0D The leader key is \ by default, so you can easily cut text by typing \d or \D. Hint 2: The default timeout for multi-key mappings is pretty short. You might want to increase it to have more time when selecting a register. 
See :help timeoutlen for details, I'm using: set timeout timeoutlen=3000 ttimeoutlen=100 A: Yep. It's slightly more convoluted than deleting the "old" text first, but: I start off with.. line1 line2 line3 line4 old1 old2 old3 old4 I shift+v select the line1, line 2, 3 and 4, and delete them with the d command Then I delete the old 1-4 lines the same way. Then, do "2p That'll paste the second-last yanked lines (line 1-4). "3p will do the third-from-last, and so on.. So I end up with line1 line2 line3 line4 Reference: Vim documentation on numbered register A: If you're using Vim then you'll have the visual mode, which is like selecting, but with the separating modes thing that's the basis of vi/vim. What you want to do is use visual mode to select the source, then yank, then use visual mode again to select the scope of the destination, and then paste to text from the default buffer. Example: In a text file with: 1| qwer 2| asdf 3| zxcv 4| poiu with the following sequence: ggVjyGVkp you'll end with: 1| qwer 2| asdf 3| qewr 4| asdf Explained: * *gg: go to first line *V: start visual mode with whole lines *j: go down one line (with the selection started on the previous lines this grows the selection one line down) *y: yank to the default buffer (the two selected lines, and it automatically exits you from visual mode) *G: go to the last line *V: start visual mode (same as before) *k: go up one line (as before, with the visual mode enabled, this grows the selection one line up) *p: paste (with the selection on the two last lines, it will replace those lines with whatever there is in the buffer -- the 2 first lines in this case) This has the little inconvenient that puts the last block on the buffer, so it's somehow not desired for repeated pastings of the same thing, so you'll want to save the source to a named buffer with something like "ay (to a buffer called "a") and paste with something like "ap (but then if you're programming, you probably don't want to paste several times but to create a function and call it, right? RIGHT?). If you are only using vi, then youll have to use invisible marks instead the visual mode, :he mark for more on this, I'm sorry but I'm not very good with this invisible marks thing, I'm pretty contaminated with visual mode. A: For 'replace word', try cw in normal mode. For 'replace paragraph', try cap in normal mode. A: For those of you who are primary interested of not overwriting the unnamed register when you replace a text via virtual selection and paste. You could use the following solution which is easy and works best for me: xnoremap <expr> p 'pgv"'.v:register.'y' It's from the answers (and comments) here: How to paste over without overwriting register It past the selection first, then the selection with the new text is selected again (gv). Now the register which was last used in visual mode (normally the unnamed register but works also side effect free if you use another register for pasting). And then yank the new value to the register again. PS: If you need a simpler variant which works also with intellj idea plugin vim-simulation you can take this one (downside: overwrite unnamed register even if another register was given for pasting): vnoremap p pgvy A: register 0 thru 9 are the latest things you yank or cut etc, and DO NOT include deleted things etc. So if you yank/cut something, it is in register 0 also, even if you say deleted stuff all day after that, register 0 is still the last stuff you yanked/cut. 
yiw #goes to both default register, and register 0 it yanks the word the cursor is in deleted stuff etc will go to default register, but NOT register 0 in standard mode: "0p #put the content of register 0 in insert mode: ctrl-r 0 #put the content of register 0 I also had a friend who always yanked/cut to register x lol. A: You can make a simple macro with: q"_dwq Now to delete the next word without overwriting the register, you can use @d A: The main problem is to use p when in visual mode. Following function will recover the content of unnamed register after you paste something in visual mode. function! MyPaste(ex) let save_reg = @" let reg = v:register let l:count = v:count1 let save_map = maparg('_', 'v', 0, 1) exec 'vnoremap _ '.a:ex exec 'normal gv"'.reg.l:count.'_' call mapset('v', 0, save_map) let @" = save_reg endfunction vmap p :<c-u>call MyPaste('p')<cr> vmap P :<c-u>call MyPaste('P')<cr> Usage is the same as before and the content of register " is not changed. A: I find it easier to yank into the 'p' buffer to begin with. Copy (aka: yank) # highlight text you want to copy, then: "py Paste # delete or highlight the text you want to replace, then: "pp Pros (As opposed to deleting into a specific register): * *"pp is easy on the fingers *Not going to accidentally overwrite your paste buffer on delete. A: noremap mm m and noremap mx "_x is how I deal with it.
{ "language": "en", "url": "https://stackoverflow.com/questions/54255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "525" }
Q: Tracking Refactorings in a Bug Database Let's say you work someplace where every change to source code must be associated with a bug-report or feature-request, and there is no way to get that policy reformed. In such an environment, what is the best way to deal with code refactorings (that is, changes that improve the code but do not fix a bug or add a feature)? * *Write up a bug-report and associate the refactoring with it. *Write up a feature-request and associate the refactoring with it. *Sneak in the refactorings while working on code that is associated with a bug-report/feature-request. *Just don't do any refactoring. *Other Note that all bug reports and feature descriptions will be visible to managers and customers. A: I vote for the "sneak in refactorings" approach, which is, I believe, the way refactoring is meant to be done in the first place. It's probably a bad idea to refactor just for the sake of "cleaning up the code." This means that you're making changes for no real reason. Refactoring is, by definition, modifying the code without the intent of fixing bugs or adding features. If you're following the KISS principle, any new feature is going to need at least some refactoring because you're not really thinking about how to make the most extensible system possible the first time around. A: The way we work it is: There must be a good reason to refactor the code, otherwise why? If the reason is to allow another feature to use the same code, associate the changes with the other feature's request. If it's to make something faster, create a feature request for faster 'xyz' and associate the changes with that - then the customers see you're improving the product. If it's to design out a bug, log the bug. It's worth noting that in my environment, the policy cannot be enforced. But clever managers can get reports of changes, and if they don't have a bug\request reference in the commit text it's followed up. A: If you're working on a block of code, in most cases that's because there's either a bug fix or new feature that requires that block of code to change, and the refactoring is either prior to the change in order to make it easier, or after the change to tidy up the result. In either case, you can associate the refactoring with that bug fix or feature. A: Let's have a look at each option: * *Write up a bug-report and associate the refactoring with it. If you feel that, in your opinion, the original code poses a security risk or potential for crashing or instability, write a small bug report outlining the danger, and then fix it. * *Write up a feature-request and associate the refactoring with it. It might be harder to refactor code based on a feature request, but you could use a valid feature request to do this, which leads me onto the next point... * *Sneak in the refactorings while working on code that is associated with a bug-report/feature-request. If there is a valid bug or feature, state that function x had to be changed slightly to fix the bug or add the feature. * *Just don't do any refactoring. This seems to suggest that self-development through improving an application is not allowed. Developers should be allowed, if not encouraged, to explore new techniques and technologies. * *Other Perhaps you could discuss your improvement at a relevant meeting, giving convincing reasons why the changes should be made. Then at least you will have management backing for the change without having to sneak in the code through another method. 
A: * *Other If you work at a place with that kind of inflexible (and ridiculous) policy, the best solution is to find another job!
{ "language": "en", "url": "https://stackoverflow.com/questions/54264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: What do you use to create a website architecture? Sure, we can use a simple bulleted list or a mindmap. But, there must be a better, more interactive way. What do you use when starting your website architecture? A: From a physical and logical architecture standpoint, nothing beats the whiteboard, drawing up the layers/tiers of the application in boxes. Then create an electronic copy using Visio. After that, iteratively dive into each layer and design it using appropriate tools and techniques. Here are what I commonly use: * *Database: ERD *Business Objects (and Service Contracts): UML class diagrams *UI: prototypes & wireframes *Workflows and asynchronous operations: flowcharts and sequence diagrams A: I like to sketch out a design with pen & paper. Seriously. No computer. Layout the home screen, including a navigation bar. From here, think about what you'd like 2nd and 3rd tier pages to look like. I've found that this process of writing things out on paper really helps me think about what I want out of the site. Try to come up with templates for a few of the screens. Then create an outline of the content you would like to include. A: Paper prototyping baby - post-its on a whiteboard (so you can move them around and keep the relationships fluid without having to modify the page ideas themselves). Then of course, once you get down to the nitty-gritty of interface design, 'zoom in' on each of those post-its and use them to represent individual elements. Good for usability testing, avoiding code duplication, clean structures...just about everything. Plus it's cheap and very, very fast. A: By "architecture", do you mean the initial site map? If not, please post a clarification and I'll edit my response. Our tech team starts development after our creative department has done their stuff. Part of what we get is output from the information architect. He passes off a graphical sitemap, a detailed sitemap as an Excel sheet, and a set of wireframes in a PDF.
{ "language": "en", "url": "https://stackoverflow.com/questions/54277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to write java.util.Properties to XML with sorted keys? I'd like to store a properties file as XML. Is there a way to sort the keys when doing this so that the generated XML file will be in alphabetical order? String propFile = "/path/to/file"; Properties props = new Properties(); /*set some properties here*/ try { FileOutputStream xmlStream = new FileOutputStream(propFile); /*this comes out unsorted*/ props.storeToXML(xmlStream,""); } catch (IOException e) { e.printStackTrace(); } A: The simplest hack would be to override keySet. A bit of a hack, and not guaranteed to work in future implementations: new Properties() { @Override Set<Object> keySet() { return new TreeSet<Object>(super.keySet()); } } (Disclaimer: I have not even tested that it compiles.) Alternatively, you could use something like XSLT to reformat the produced XML. A: You could sort the keys first, then loop through the items in the properties file and write them to the xml file. public static void main(String[] args){ String propFile = "/tmp/test2.xml"; Properties props = new Properties(); props.setProperty("key", "value"); props.setProperty("key1", "value1"); props.setProperty("key2", "value2"); props.setProperty("key3", "value3"); props.setProperty("key4", "value4"); try { BufferedWriter out = new BufferedWriter(new FileWriter(propFile)); List<String> list = new ArrayList<String>(); for(Object o : props.keySet()){ list.add((String)o); } Collections.sort(list); out.write("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"); out.write("<!DOCTYPE properties SYSTEM \"http://java.sun.com/dtd/properties.dtd\">\n"); out.write("<properties>\n"); out.write("<comment/>\n"); for(String s : list){ out.write("<entry key=\"" + s + "\">" + props.getProperty(s) + "</entry>\n"); } out.write("</properties>\n"); out.flush(); out.close(); } catch (IOException e) { e.printStackTrace(); } } A: Here's a quick and dirty way to do it: String propFile = "/path/to/file"; Properties props = new Properties(); /* Set some properties here */ Properties tmp = new Properties() { @Override public Set<Object> keySet() { return Collections.unmodifiableSet(new TreeSet<Object>(super.keySet())); } }; tmp.putAll(props); try { FileOutputStream xmlStream = new FileOutputStream(propFile); /* This comes out SORTED! */ tmp.storeToXML(xmlStream,""); } catch (IOException e) { e.printStackTrace(); } Here are the caveats: * *The tmp Properties (an anonymous subclass) doesn't fulfill the contract of Properties. For example, if you got its keySet and tried to remove an element from it, an exception would be raised. So, don't allow instances of this subclass to escape! In the snippet above, you are never passing it to another object or returning it to a caller who has a legitimate expectation that it fulfills the contract of Properties, so it is safe. * *The implementation of Properties.storeToXML could change, causing it to ignore the keySet method. For example, a future release, or OpenJDK, could use the keys() method of Hashtable instead of keySet. This is one of the reasons why classes should always document their "self-use" (Effective Java Item 15). However, in this case, the worst that would happen is that your output would revert to unsorted. * *Remember that the Properties storage methods ignore any "default" entries. 
A: Here's a way to produce sorted output for both store Properties.store(OutputStream out, String comments) and Properties.storeToXML(OutputStream os, String comment): Properties props = new Properties() { @Override public Set<Object> keySet(){ return Collections.unmodifiableSet(new TreeSet<Object>(super.keySet())); } @Override public synchronized Enumeration<Object> keys() { return Collections.enumeration(new TreeSet<Object>(super.keySet())); } }; props.put("B", "Should come second"); props.put("A", "Should come first"); props.storeToXML(new FileOutputStream(new File("sortedProps.xml")), null); props.store(new FileOutputStream(new File("sortedProps.properties")), null); A: In my testing, the other answers to this question don't work properly on AIX. My particular test machine is running this version: IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 AIX ppc64-64 jvmap6460sr9-20110624_85526 After looking through the implementation of the store method, I found that it relies upon entrySet. This method works well for me. public static void saveSorted(Properties props, FileWriter fw, String comment) throws IOException { Properties tmp = new Properties() { @Override public Set<Object> keySet() { return Collections.unmodifiableSet(new TreeSet<Object>(super.keySet())); } @Override public Set<java.util.Map.Entry<Object,Object>> entrySet() { TreeSet<java.util.Map.Entry<Object,Object>> tmp = new TreeSet<java.util.Map.Entry<Object,Object>>(new Comparator<java.util.Map.Entry<Object,Object>>() { @Override public int compare(java.util.Map.Entry<Object, Object> entry1, java.util.Map.Entry<Object, Object> entry2) { String key1 = entry1.getKey().toString(); String key2 = entry2.getKey().toString(); return key1.compareTo(key2); } }); tmp.addAll(super.entrySet()); return Collections.unmodifiableSet(tmp); } @Override public synchronized Enumeration<Object> keys() { return Collections.enumeration(new TreeSet<Object>(super.keySet())); } @Override public Set<String> stringPropertyNames() { TreeSet<String> set = new TreeSet<String>(); for(Object o : keySet()) { set.add((String)o); } return set; } }; tmp.putAll(props); tmp.store(fw, comment); } A: java.util.Properties is based on Hashtable, which does not store its values in alphabetical order, but in order of the hash of each item, that is why you are seeing the behaviour you are. A: java.util.Properties is a subclass of java.util.Hashtable. ('Hash', being the key here.)You'd have to come up with your own customer implementation based on something that keeps/defines order...like a TreeMap. A: You could try this: Make a new class that does what java.util.XMLUtils does but in the save method change this: Set keys = props.keySet(); Iterator i = keys.iterator(); to Set keys = props.keySet(); List<String> newKeys = new ArrayList<String>(); for(Object key : keys) { newKeys.add(key.toString()); } Collections.sort(newKeys); Iterator i = newKeys.iterator(); Extend properties and override the Properties class storeToXML method to call your new class's save method. A: Here is another solution: public static void save_sorted(Properties props, String filename) throws Throwable { FileOutputStream fos = new FileOutputStream(filename); Properties prop_sorted = new Properties() { @Override public Set<String> stringPropertyNames() { TreeSet<String> set = new TreeSet<String>(); for (Object o : keySet()) { set.add((String) o); } return set; } }; prop_sorted.putAll(props); prop_sorted.storeToXML(fos, "test xml"); } A: Why do you want the XML file to be sorted in the first place? 
Presumably, there is another piece of code that reads the file and puts the data in another Properties object. Do you want to do this so you can manually find and edit entries in the XML file?
{ "language": "en", "url": "https://stackoverflow.com/questions/54295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Is it possible to over OO? My question is simple; is it possible to over object-orient your code? How much is too much? At what point are you giving up readability and maintainability for the sake of OO? I am a huge OO person but sometimes I wonder if I am over-complicating my code.... Thoughts? A: If you think more objects is more object-oriented then yes. When doing object oriented design there are a couple of forces you have to balance. Most of OO design is about reducing and handling complexity. So if you get very complex solutions you're not doing too much OO but you're doing it wrong. A: is it possible to over object-orient your code Yes A: If you find that the time needed to fully implement OO in your project is needlessly causing missed deadlines, then yes. There has to be a trade off between releasing software and full OO fidelity. How to decide depends on the person, the team, the project and the organization running the project. A: Yes, of course there is :-) object oriented techniques are a tool ... if you use the wrong tool for a given job, you will be over complicating things (think spoon when all you need is a knife). To me, I judge "how much" by the size and scope of the project. If it is a small project, sometimes it does add too much complexity. If the project is large, you will still be taking on this complexity, but it will pay for itself in ease of maintainability, extensibility, etc. A: My advice is not to overthink it. That usually results in over or under doing SOMETHING (if not OO). The rule of thumb that I usually use is this: if it makes the problem easier to wrap my head around, I use an object. If another paradigm makes it easier to wrap my head around than it would be if I used an object, I use that. That strategy has yet to fail me. A: Yes, it's definitely possible -- common, either. I once worked with a guy who created a data structure to bind to a dropdown list -- so he could allow users to select a gender. True, that would be useful if the list of possible genders were to change, but they haven't as yet (we don't live in California) A: Yes, just as one can over-normalize a database design. This seems to be one of those purist vs. pragmatic debates that will never end. <:S A: A lot of people try to design their code for maximum flexibility an reuse without considering how likely that will be. Instead, break your classes up based on the program you're writing. If you will have exactly one instance of a particular object, you might consider merging it into the containing object. A: I think your question should read, "Can you over Architecture your application?" And of course the answer is yet. OO is just an approach to design. If you spend your time building unnecessary complexity into a system because "Polymorphism Rocks!". Then yes maybe you're over OOing. The very XP answer is that Regardless of what approach you favor (OO, procedural, etc.) the design should only be as complex as is demonstrably necessary. A: Yes, you can. As an example, if you find yourself creating Interfaces or abstract classes before you have two subtypes for them, then you're over-doing it. I see this kind of thinking often when developers (over)design up front. I use Test-Driven Development and Refactoring techniques to avoid this behavior. A: is it possible to over object-orient your code? No. But it is possible to over complicate your code. For example, you can use design patterns for the sake of using design patterns. But you cannot over object-orient your code. 
Your code is either object-oriented or it is not. Just as your code is either well designed or it is not. A: I guess it is possible, but its hard to answer your in abstract terms (no pun intended). Give an example of over OO. A: Yes. See the concept of the "golden hammer antipattern" A: I think there are times when OO design can be taken to an extreme and even in very large projects make the code less readable and maintainable. For instance, there can be use in a footware base class, and child classes of sneaker, dress shoe, etc. But I have read/reviewed people's code where it seemed as though they were going to the extreme of creating classes beneath that for NikeSneakers and ReebokSneakers. Certainly there are differences between the two, but to have readable, maintainable code, I believe its important, at times, to expand a class to adapt to differences rather than creating new child classes to handle the differences. A: Yes, and it's easy. Code is not better simply because it's object-oriented, any more than it's better simply because it's modular or functional or generic or generative or dataflow-based or aspect-oriented or anything else. Good code is good code because it's well-designed in its programming paradigm. Good design requires care. Being careful takes time. An example for your case: I've seen horrific pieces of Java in which, in the name of being "object oriented", every class implements some interface, even when no other class will ever implement that interface. Sometimes it's a hack, but in others it really is gratuitous. In whatever paradigm or idiom you write code, going too far, partaking of too much of a good thing, can make the code more complicated than the problem. Some people will say, when that point is reached, that the code isn't even really, for example, object-oriented anymore. Object-oriented code is supposed to be better organized for the purpose of being simpler, more straight-forward, or easier to understand and digest in reasonably independent portions. Using the mechanisms of object oriented coding antithetically to this goal does not result in object oriented design . A: I think the clear answer is yes, but depending on the domain you are referring to, it could be REALLY yes, or less so yes. If you are building high level .Net or Java apps, then I think this is the latter, as OO is basically built into the language. On the other hand if you are working on embedded apps, then the dangers and likelihood that your are over OO'ing are high. There is nothing worse than seeing a really high level person come onto a embedded project and over complicate things that they think are ugly, but are the most simple and fast ways to do things. A: I think anything can be "overdone". This is true with nearly every best practice I can think of. I've seen inheritance chains so complex, the code was virtually unmanageable.
{ "language": "en", "url": "https://stackoverflow.com/questions/54299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Any tools to get code churn metrics for a Subversion repository? I'm looking for any tools that can give you code churn metrics (graphs and charts would be even better) for a Subversion repository. One tool I know of is statsvn - a Java tool that creates some HTML reports and some code churn metrics. Statsvn reports the number of lines modified (churned) by user over time, some descriptive stats on LOC per file and folder/subfolder, etc. I would like to know code churn in order to get a better idea of the state of the project. The idea behind this was inspired by the MS research: Use of Relative Code Churn Measures to Predict System Defect Density. In a nutshell, the more that source code is churning (changing, whether adding new lines, deleting, changing, etc.) the higher the probability that defects are being introduced into the system. The MS research paper says that the number of defects produced can be predicted based on a number of relative code churn measures. I wanted to know if there are any others that are maybe open source, extensible, etc. A: I have written a tool called 'svnplot' (which I admit was inspired by the output of StatSVN). It's written in Python and available on Google Code. http://code.google.com/p/svnplot. You can see the sample output at http://thinkingcraftsman.in/projects/svnplot/index.htm The details/output are not as elaborate as 'fisheye'. Basically it converts the Subversion log history into a 'sqlite' database and then queries the sqlite database to generate graphs. You can write your own queries using the created sqlite database. See if it works for you. A: If you are willing to go the commercial route check out FishEye from Atlassian (also see their demo site). FishEye also supports adding plugins (though this does not appear to be very well supported at this time). A: See svn-churn, a simple Python script to determine file churn and fix count of a Subversion repository. A: The only one I've ever heard of and used is statsvn; searching Google doesn't return many results. A: You can probably use svn blame to get the date each line was changed and then use sed to pull out only the year and month and then use sort and uniq -c to generate a useful report. A: The Power Software tool, KEPM, is pretty focused on CHURN these days. JP A: Try programeter, which analyses Subversion and many other dev. tools.
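To make the svn blame / svn log idea above concrete, here is a rough C# sketch (not one of the tools listed; the repository URL and the choice of "changed paths per author per month" as the churn measure are just illustrative assumptions). It shells out to svn log -v --xml and tallies how many paths each author touched per month:

using System;
using System.Diagnostics;
using System.Linq;
using System.Xml.Linq;

class SvnChurnSketch
{
    static void Main(string[] args)
    {
        // Placeholder URL - point this at your own repository.
        string repoUrl = args.Length > 0 ? args[0] : "http://svn.example.com/repo/trunk";

        // Ask Subversion for the full verbose history as XML.
        var psi = new ProcessStartInfo("svn", "log -v --xml " + repoUrl)
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };

        string xml;
        using (var svn = Process.Start(psi))
        {
            xml = svn.StandardOutput.ReadToEnd();
            svn.WaitForExit();
        }

        // Count changed paths per author per month as a crude churn figure.
        var churn = XDocument.Parse(xml)
            .Descendants("logentry")
            .Select(e => new
            {
                Author = (string)e.Element("author") ?? "(unknown)",
                Month = ((DateTime)e.Element("date")).ToString("yyyy-MM"),
                Paths = e.Descendants("path").Count()
            })
            .GroupBy(x => new { x.Month, x.Author })
            .OrderBy(g => g.Key.Month).ThenBy(g => g.Key.Author);

        foreach (var g in churn)
            Console.WriteLine("{0}  {1,-20} {2} paths changed",
                g.Key.Month, g.Key.Author, g.Sum(x => x.Paths));
    }
}

This is only a starting point - a churn metric along the lines of the MS paper would also need added/deleted line counts (e.g. from svn diff), but the same log-parsing skeleton applies.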
{ "language": "en", "url": "https://stackoverflow.com/questions/54318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: How do I concatenate text in a query in sql server? The following SQL: SELECT notes + 'SomeText' FROM NotesTable a gives the error: The data types nvarchar and text are incompatible in the add operator. A: The only way would be to convert your text field into an nvarchar field. Select Cast(notes as nvarchar(4000)) + 'SomeText' From NotesTable a Otherwise, I suggest doing the concatenation in your application. A: You have to explicitly cast the string types to the same type in order to concatenate them. In your case you may solve the issue by simply adding an 'N' in front of 'SomeText' (N'SomeText'). If that doesn't work, try Cast('SomeText' as nvarchar(8)). A: Another option is the CONCAT command: SELECT CONCAT(MyTable.TextColumn, 'Text') FROM MyTable A: You might want to consider NULL values as well. In your example, if the column notes has a null value, then the resulting value will be NULL. If you want the null values to behave as empty strings (so that the answer comes out 'SomeText'), then use the IsNull function: Select IsNull(Cast(notes as nvarchar(4000)),'') + 'SomeText' From NotesTable a A: If you are using SQL Server 2005 or greater, depending on the size of the data in the Notes field, you may want to consider casting to nvarchar(max) instead of casting to a specific length which could result in string truncation. Select Cast(notes as nvarchar(max)) + 'SomeText' From NotesTable a A: If you are using SQL Server 2005 (or greater) you might want to consider switching to NVARCHAR(MAX) in your table definition; TEXT, NTEXT, and IMAGE data types of SQL Server 2000 will be deprecated in future versions of SQL Server. SQL Server 2005 provides backward compatibility to data types, but you should probably be using VARCHAR(MAX), NVARCHAR(MAX), and VARBINARY(MAX) instead.
{ "language": "en", "url": "https://stackoverflow.com/questions/54334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "81" }
Q: How can you find out where the style for a ASP .Net web page element came from? I have a quandary. My web application (C#, .Net 3.0, etc) has Themes, CSS sheets and, of course, inline style definitions. Now that's alot of chefs adding stuff to the soup. All of this results, not surprisingly, in my pages having bizarre styling on occasion. I am sure that all these styles are applied in a hierarchical method (although I am not sure of that order). The issue is that each style is applied as a "transparent" layer which just masks what it is applying. This is, I feel, a good idea as you can specifiy styles for the whole and then one-off them as needed. Unfortunately I can't tell from which layer the style actually came from. I could solve this issue by explicitly expressing the style at all layers but that gets bulky and hard to manage and the page(s) works 80% of the time. I just need to figure out where that squirrelly 20% came from. A: IMHO, Firebug is going to be your best bet. It will tell you which file the style came from and you can click on the filename to be transported instantly to the relevant line in the file. Note: You can hit ctrl+shift+C on any page to select and inspect an element with the mouse. A: Here's a quick sreencast of how to use Firebug to find out from where an element is getting it's style. http://screencast.com/t/oFpuDUoJ0 A: in Firefox use the DOM inspector, firebug, or inspect this. in IE, use the IE dev toolbar (or, maybe better, Firebug Lite) In Google Chrome, use the built-in "inspect element" functionality A: Using the IE Developer Toolbar you can select an element (either by "Select element by click" or clicking on its node in the DOM tree view) and in the Current Style pane, right click on a row and select "Trace Style". The other tools have a similar feature. A: The key to solving a complex CSS issue is to work out what is causing the weird appearance. The easiest way to find is to selectively comment out stylesheets until you find the one where commenting it out fixes the problem. Then enable the stylesheet and selectively comment out rules until you find the one causing the problem. If you need to know what takes precedence over what, the details of the cascade in CSS is detailed here and unlike the implementation of individual rules, this is fairly consistent across browsers. However, it is much better if you avoid inline styles entirely and have a set of well-crafted stylesheets, each of which has a logical function and all of whose rules you understand. for the same reason you don't put your server-side code in a random order in random files - us
{ "language": "en", "url": "https://stackoverflow.com/questions/54337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Using ActiveDirectoryMembershipProvider with two domain controllers We have an ASP.NET application running at a customer site that uses ActiveDirectory for user login via the ActiveDirectoryMembershipProvider. Their primary domain controller that we were pointing to went down this morning, and in getting everything set back up the client was wondering if we could have a redundant connection to two domain controllers; i.e. specifying a primary and a backup AD server. A Google search proved fruitless - does anyone know if this can be done? A: If ActiveDirectory couldn't handle multiple domain controllers then it wouldn't be a very good technology. You just need to make sure in your Membership configuration you are pointing to the 'Domain' rather than the 'Server' and then add two or more controllers to your domain. Generally if you are referring to the domain as "LDAP://server/DC=domain,DC=com" then you should be able to remove the "server" part and refer simply to "LDAP://DC=domain,DC=com" The following code project gives a long list of things you can do in Active Directory from C#: http://www.codeproject.com/KB/system/everythingInAD.aspx#7 A: It can be done, it will just take some work. You will need to create a class that inherits off of the ActiveDirectoryMemberhsipProvider and use it has your provider instead. That way you can maintain most of the functionality. Then setup a way to specify two connectionStringName properties, one for primary and one for secondary. You will also need to create the code to read the information from the config since you are changing it. Then just override the methods where you need to catch when the primary is down and switch to the secondary. This will be the most reusable way of doing it. There's probably other ways of doing it, but it will probably be hacky and not very reusable. Like testing the connection before each request and then setting the connectionstring that way. Based on the MSDN documentation on the class, this will probably be the only way to do it. They don't provide the functionality internal.
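To illustrate the failover idea from the answer above in code, here is a simplified C# sketch. Rather than the full custom-provider approach described (deriving from ActiveDirectoryMembershipProvider and re-reading the config), it simply wraps two provider entries that are assumed to exist in web.config ("PrimaryAdProvider" and "SecondaryAdProvider" are made-up names), tries the primary, and falls back to the secondary when the primary domain controller cannot be reached:

using System;
using System.Web.Security;

public static class FailoverAdLogin
{
    public static bool ValidateUser(string username, string password)
    {
        var primary = (ActiveDirectoryMembershipProvider)Membership.Providers["PrimaryAdProvider"];
        var secondary = (ActiveDirectoryMembershipProvider)Membership.Providers["SecondaryAdProvider"];

        try
        {
            // Normal case: validate against the primary domain controller.
            return primary.ValidateUser(username, password);
        }
        catch (Exception)
        {
            // Broad catch for the sketch only - in practice you would catch the
            // specific exception the provider throws when its DC is unreachable,
            // and let genuine authentication failures (which return false rather
            // than throw) pass through untouched.
            return secondary.ValidateUser(username, password);
        }
    }
}

A production version would wrap the rest of the MembershipProvider surface (or derive from ActiveDirectoryMembershipProvider as suggested above) so the rest of the ASP.NET membership plumbing keeps working against whichever server is up.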
{ "language": "en", "url": "https://stackoverflow.com/questions/54364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Shell one liner to prepend to a file This is probably a complex solution. I am looking for a simple operator like ">>", but for prepending. I am afraid it does not exist. I'll have to do something like mv myfile tmp cat myheader tmp > myfile Anything smarter? A: When you start trying to do things that become difficult in shell-script, I would strongly suggest looking into rewriting the script in a "proper" scripting language (Python/Perl/Ruby/etc) As for prepending a line to a file, it's not possible to do this via piping, as when you do anything like cat blah.txt | grep something > blah.txt, it inadvertently blanks the file. There is a small utility command called sponge you can install (you do cat blah.txt | grep something | sponge blah.txt and it buffers the contents of the file, then writes it to the file). It is similar to a temp file but you dont have to do that explicitly. but I would say that's a "worse" requirement than, say, Perl. There may be a way to do it via awk, or similar, but if you have to use shell-script, I think a temp file is by far the easiest(/only?) way.. A: Like Daniel Velkov suggests, use tee. To me, that's simple smart solution: { echo foo; cat bar; } | tee bar > /dev/null A: EDIT: This is broken. See Weird behavior when prepending to a file with cat and tee The workaround to the overwrite problem is using tee: cat header main | tee main > /dev/null A: sed -i -e "1s/^/new first line\n/" old_file.txt A: echo '0a your text here . w' | ed some_file ed is the Standard Editor! http://www.gnu.org/fun/jokes/ed.msg.html A: The hack below was a quick off-the-cuff answer which worked and received lots of upvotes. Then, as the question became more popular and more time passed, people started reporting that it sorta worked but weird things could happen, or it just didn't work at all. Such fun. I recommend the 'sponge' solution posted by user222 as Sponge is part of 'moreutils' and probably on your system by default. (echo 'foo' && cat yourfile) | sponge yourfile The solution below exploits the exact implementation of file descriptors on your system and, because implementation varies significantly between nixes, it's success is entirely system dependent, definitively non-portable, and should not be relied upon for anything even vaguely important. Sponge uses the /tmp filesystem but condenses the task to a single command. Now, with all that out of the way the original answer was: Creating another file descriptor for the file (exec 3<> yourfile) thence writing to that (>&3) seems to overcome the read/write on same file dilemma. Works for me on 600K files with awk. However trying the same trick using 'cat' fails. Passing the prependage as a variable to awk (-v TEXT="$text") overcomes the literal quotes problem which prevents doing this trick with 'sed'. #!/bin/bash text="Hello world What's up?" exec 3<> yourfile && awk -v TEXT="$text" 'BEGIN {print TEXT}{print}' yourfile >&3 A: The one which I use. This one allows you to specify order, extra chars, etc in the way you like it: echo -e "TEXTFIRSt\n$(< header)\n$(< my.txt)" > my.txt P.S: only it's not working if files contains text with backslash, cause it gets interpreted as escape characters A: Mostly for fun/shell golf, but ex -c '0r myheader|x' myfile will do the trick, and there are no pipelines or redirections. Of course, vi/ex isn't really for noninteractive use, so vi will flash up briefly. A: With $( command ) you can write the output of a command into a variable. 
So I did it in three commands in one line and no temp file. originalContent=$(cat targetfile) && echo "text to prepend" > targetfile && echo "$originalContent" >> targetfile A: John Mee: your method is not guaranteed to work, and will probably fail if you prepend more than 4096 byte of stuff (at least that's what happens with gnu awk, but I suppose other implementations will have similar constraints). Not only will it fail in that case, but it will enter an endless loop where it will read its own output, thereby making the file grow until all the available space is filled. Try it for yourself: exec 3<>myfile && awk 'BEGIN{for(i=1;i<=1100;i++)print i}{print}' myfile >&3 (warning: kill it after a while or it will fill the filesystem) Moreover, it's very dangerous to edit files that way, and it's very bad advice, as if something happens while the file is being edited (crash, disk full) you're almost guaranteed to be left with the file in an inconsistent state. A: A variant on cb0's solution for "no temp file" to prepend fixed text: echo "text to prepend" | cat - file_to_be_modified | ( cat > file_to_be_modified ) Again this relies on sub-shell execution - the (..) - to avoid the cat refusing to have the same file for input and output. Note: Liked this solution. However, in my Mac the original file is lost (thought it shouldn't but it does). That could be fixed by writing your solution as: echo "text to prepend" | cat - file_to_be_modified | cat > tmp_file; mv tmp_file file_to_be_modified A: WARNING: this needs a bit more work to meet the OP's needs. There should be a way to make the sed approach by @shixilun work despite his misgivings. There must be a bash command to escape whitespace when reading a file into a sed substitute string (e.g. replace newline characters with '\n'. Shell commands vis and cat can deal with nonprintable characters, but not whitespace, so this won't solve the OP's problem: sed -i -e "1s/^/$(cat file_with_header.txt)/" file_to_be_prepended.txt fails due to the raw newlines in the substitute script, which need to be prepended with a line continuation character () and perhaps followed by an &, to keep the shell and sed happy, like this SO answer sed has a size limit of 40K for non-global search-replace commands (no trailing /g after the pattern) so would likely avoid the scary buffer overrun problems of awk that anonymous warned of. A: Why not simply use the ed command (as already suggested by fluffle here)? ed reads the whole file into memory and automatically performs an in-place file edit! So, if your file is not that huge ... # cf. "Editing files with the ed text editor from scripts.", # http://wiki.bash-hackers.org/doku.php?id=howto:edit-ed prepend() { printf '%s\n' H 1i "${1}" . wq | ed -s "${2}" } echo 'Hello, world!' > myfile prepend 'line to prepend' myfile Yet another workaround would be using open file handles as suggested by Jürgen Hötzel in Redirect output from sed 's/c/d/' myFile to myFile echo cat > manipulate.txt exec 3<manipulate.txt # Prevent open file from being truncated: rm manipulate.txt sed 's/cat/dog/' <&3 > manipulate.txt All this could be put on a single line, of course. A: Here's what I discovered: echo -e "header \n$(cat file)" >file A: sed -i -e '1rmyheader' -e '1{h;d}' -e '2{x;G}' myfile A: You can use perl command line: perl -i -0777 -pe 's/^/my_header/' tmp Where -i will create an inline replacement of the file and -0777 will slurp the whole file and make ^ match only the beginning. 
-pe will print all the lines Or if my_header is a file: perl -i -0777 -pe 's/^/`cat my_header`/e' tmp Where the /e will allow an eval of code in the substitution. A: Not possible without a temp file, but here goes a oneliner { echo foo; cat oldfile; } > newfile && mv newfile oldfile You can use other tools such as ed or perl to do it without temp files. A: If you need this on computers you control, install the package "moreutils" and use "sponge". Then you can do: cat header myfile | sponge myfile A: It may be worth noting that it often is a good idea to safely generate the temporary file using a utility like mktemp, at least if the script will ever be executed with root privileges. You could for example do the following (again in bash): (tmpfile=`mktemp` && { echo "prepended text" | cat - yourfile > $tmpfile && mv $tmpfile yourfile; } ) A: Using a bash heredoc you can avoid the need for a tmp file: cat <<-EOF > myfile $(echo this is prepended) $(cat myfile) EOF This works because $(cat myfile) is evaluated when the bash script is evaluated, before the cat with redirect is executed. A: This still uses a temp file, but at least it is on one line: echo "text" | cat - yourfile > /tmp/out && mv /tmp/out yourfile Credit: BASH: Prepend A Text / Lines To a File A: assuming that the file you want to edit is my.txt $cat my.txt this is the regular file And the file you want to prepend is header $ cat header this is the header Be sure to have a final blank line in the header file. Now you can prepend it with $cat header <(cat my.txt) > my.txt You end up with $ cat my.txt this is the header this is the regular file As far as I know this only works in 'bash'. A: If you're scripting in BASH, actually, you can just issue: cat - yourfile /tmp/out && mv /tmp/out yourfile That's actually in the Complex Example you yourself posted in your own question. A: If you have a large file (few hundred kilobytes in my case) and access to python, this is much quicker than cat pipe solutions: python -c 'f = "filename"; t = open(f).read(); open(f, "w").write("text to prepend " + t)' A: A solution with printf: new_line='the line you want to add' target_file='/file you/want to/write to' printf "%s\n$(cat ${target_file})" "${new_line}" > "${target_file}" You could also do: printf "${new_line}\n$(cat ${target_file})" > "${target_file}" But in that case you have to be sure there aren’t any % anywhere, including the contents of target file, as that can be interpreted and screw up your results. A: The simplest solution I found is: cat myHeader myFile | tee myFile or echo "<line to add>" | cat - myFile | tee myFile Notes: * *Use echo -n if you want to append just a piece of text (not a full line). *Add &> /dev/null to the end if you don't want to see the output (the generated file). *This can be used to append a shebang to the file. Example: # make it executable (use u+x to allow only current user) chmod +x cropImage.ts # append the shebang echo '#''!'/usr/bin/env ts-node | cat - cropImage.ts | tee cropImage.ts &> /dev/null # execute it ./cropImage.ts myImage.png A: current=`cat my_file` && echo 'my_string' > my_file && echo $current >> my_file where "my_file" is the file to prepend "my_string" to. A: I'm liking @fluffle's ed approach the best. After all, any tool's command line switches vs scripted editor commands are essentially the same thing here; not seeing a scripted editor solution "cleanliness" being any lesser or whatnot. 
Here's my one-liner appended to .git/hooks/prepare-commit-msg to prepend an in-repo .gitmessage file to commit messages: echo -e "1r $PWD/.gitmessage\n.\nw" | ed -s "$1" Example .gitmessage: # Commit message formatting samples: # runlevels: boot +consolekit -zfs-fuse # I'm doing 1r instead of 0r, because that will leave the empty ready-to-write line on top of the file from the original template. Don't put an empty line on top of your .gitmessage then, you will end up with two empty lines. -s suppresses diagnostic info output of ed. In connection with going through this, I discovered that for vim-buffs it is also good to have: [core] editor = vim -c ':normal gg' A: variables, ftw? NEWFILE=$(echo deb http://mirror.csesoc.unsw.edu.au/ubuntu/ $(lsb_release -cs) main universe restricted multiverse && cat /etc/apt/sources.list) echo "$NEWFILE" | sudo tee /etc/apt/sources.list A: I think this is the cleanest variation of ed: cat myheader | { echo '0a'; cat ; echo -e ".\nw";} | ed myfile as a function: function prepend() { { echo '0a'; cat ; echo -e ".\nw";} | ed $1; } cat myheader | prepend myfile A: IMHO there is no shell solution (and will never be one) that would work consistently and reliably whatever the sizes of the two files myheader and myfile. The reason is that if you want to do that without recurring to a temporary file (and without letting the shell recur silently to a temporary file, e.g. through constructs like exec 3<>myfile, piping to tee, etc. The "real" solution you are looking for needs to fiddle with the filesystem, and so it's not available in userspace and would be platform-dependent: you're asking to modify the filesystem pointer in use by myfile to the current value of the filesystem pointer for myheader and replace in the filesystem the EOF of myheader with a chained link to the current filesystem address pointed by myfile. This is not trivial and obviously can not be done by a non-superuser, and probably not by the superuser either... Play with inodes, etc. You can more or less fake this using loop devices, though. See for instance this SO thread. A: Quick and dirty, buffer everything in memory with python: $ echo two > file $ echo one | python -c "import sys; f=open(sys.argv[1]).read(); open(sys.argv[1],'w').write(sys.stdin.read()+f)" file $ cat file one two $ # or creating a shortcut... $ alias prepend='python -c "import sys; f=open(sys.argv[1]).read(); open(sys.argv[1],\"w\").write(sys.stdin.read()+f)"' $ echo zero | prepend file $ cat file zero one two A: for dash / ash: echo "hello\n$(cat myfile)" > myfile example: $ echo "line" > myfile $ cat myfile line $ echo "line1\n$(cat myfile)" > myfile $ cat myfile line1 line A: Bah! No one cared to mention about tac. endor@grid ~ $ tac --help Usage: tac [OPTION]... [FILE]... Write each FILE to standard output, last line first. With no FILE, or when FILE is -, read standard input. Mandatory arguments to long options are mandatory for short options too. -b, --before attach the separator before instead of after -r, --regex interpret the separator as a regular expression -s, --separator=STRING use STRING as the separator instead of newline --help display this help and exit --version output version information and exit Report tac bugs to [email protected] GNU coreutils home page: <http://www.gnu.org/software/coreutils/> General help using GNU software: <http://www.gnu.org/gethelp/> Report tac translation bugs to <http://translationproject.org/team/>
{ "language": "en", "url": "https://stackoverflow.com/questions/54365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "140" }
Q: Problem rolling out ADO.Net Data Service application to IIS I am adding a ADO.Net Data Service lookup feature to an existing web page. Everything works great when running from visual studio, but when I roll it out to IIS, I get the following error: Request ErrorThe server encountered an error processing the request. See server logs for more details. I get this even when trying to display the default page, i.e.: http://server/FFLookup.svc I have 3.5 SP1 installed on the server. What am I missing, and which "Server Logs" is it refering to? I can't find any further error messages. There is nothing in the Event Viewer logs (System or Application), and nothing in the IIS logs other than the GET: 2008-09-10 15:20:19 10.7.131.71 GET /FFLookup.svc - 8082 - 10.7.131.86 Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+en-US)+AppleWebKit/525.13+(KHTML,+like+Gecko)+Chrome/0.2.149.29+Safari/525.13 401 2 2148074254 There is no stack trace returned. The only response I get is the "Request Error" as noted above. Thanks Patrick A: In order to verbosely display the errors resulting from your data service you can place the following tag above your dataservice definition: [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)] This will then display the error in your browser window as well as a stack trace. In addition to this dataservices throws all exceptions to the HandleException method so if you implement this method on your dataservice class you can put a break point on it and see the exception: protected override void HandleException(HandleExceptionArgs e) { try { e.UseVerboseErrors = true; } catch (Exception ex) { Console.WriteLine(ex.Message); } } A: Well I found the "Server Logs" mentioned in the error above. You need to turn on tracing in the web.config file by adding the following tags: <system.diagnostics> <sources> <source name="System.ServiceModel.MessageLogging" switchValue="Warning, ActivityTracing" > <listeners> <add name="ServiceModelTraceListener"/> </listeners> </source> <source name="System.ServiceModel" switchValue="Verbose,ActivityTracing" > <listeners> <add name="ServiceModelTraceListener"/> </listeners> </source> <source name="System.Runtime.Serialization" switchValue="Verbose,ActivityTracing"> <listeners> <add name="ServiceModelTraceListener"/> </listeners> </source> </sources> <sharedListeners> <add initializeData="App_tracelog.svclog" type="System.Diagnostics.XmlWriterTraceListener, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" name="ServiceModelTraceListener" traceOutputOptions="Timestamp"/> </sharedListeners> </system.diagnostics> This will create a file called app_tracelog.svclog in your website directory. You then use the SvcTraceViewer.exe utility to view this file. The viewer does a good job of highlighting the errors (along with lots of other information about the communications). Beware: The log file created with the above parameters grows very quickly. Only turn it on during debuging! In this particular case, the problem ended up being the incorrect version of OraDirect.Net, our Oracle Data Provider. The version we were using did not support 3.5 SP1. A: For me the error was caused by two methods having the same name (unintended overloading). Overloading is not supported but type 'abc' has an overloaded method 'Void SubmitCart(System.String, Int32)'. I found out by running the service in debug mode.
{ "language": "en", "url": "https://stackoverflow.com/questions/54380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Exposing .net methods as Excel functions? I have a set of calculation methods sitting in a .Net DLL. I would like to make those methods available to Excel (2003+) users so they can use them in their spreadsheets. For example, my .net method: public double CalculateSomethingReallyComplex(double a, double b) {...} I would like to enable them to call this method just by typing a formula in a random cell: =CalculateSomethingReallyComplex(A1, B1) What would be the best way to accomplish this? A: You should also have a look at ExcelDna (http://www.codeplex.com/exceldna). ExcelDna is an open-source project (also free for commercial use) that allows you to create native .xll add-ins using .Net. Both user-defined functions (UDFs) and macros can be created. Your add-in code can be in text-based script files containing VB, C# or F# code, or in managed .dlls. Since the native Excel SDK interfaces are used, rather than COM-based automation, add-ins based on ExcelDna can be easily deployed and require no registration. ExcelDna supports Excel versions from Excel '97 to Excel 2007, and includes support for the Excel 2007 data types (large sheet and Unicode strings), as well as multi-threaded recalculation under Excel 2007. A: There are two methods - you can use Visual Studio Tools for Office (VSTO): http://blogs.msdn.com/pstubbs/archive/2004/12/31/344964.aspx or you can use COM: http://blogs.msdn.com/eric_carter/archive/2004/12/01/273127.aspx I'm not sure if the VSTO method would work in older versions of Excel, but the COM method should work fine.
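For a rough idea of what the ExcelDna route looks like, here is a minimal C# sketch (the attribute usage follows ExcelDna's ExcelDna.Integration assembly; the calculation body is obviously a placeholder). The public static method is exposed to Excel as a worksheet function, so a user can type =CalculateSomethingReallyComplex(A1, B1) directly into a cell:

using ExcelDna.Integration;

public static class CalculationFunctions
{
    [ExcelFunction(Description = "Some really complex calculation")]
    public static double CalculateSomethingReallyComplex(double a, double b)
    {
        // Delegate to the existing calculation code in your .NET DLL here.
        return a + b; // placeholder
    }
}

The compiled assembly is then referenced from the add-in's .dna file, and users load the resulting .xll in Excel - no COM registration needed, which is the main deployment advantage mentioned above.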
{ "language": "en", "url": "https://stackoverflow.com/questions/54387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What is the best strategy for code chunks and macros in vim? As I develop more with vim, I find myself wanting to copy in blocks of useful code, similar to "templates" in Eclipse. I was thinking of making a separate file for each code chunk and just reading them in with :r code-fornext but that just seems kind of primitive. Googling around I find vim macros mentioned and something about "maps" but nothing that seems straightforward. What I am looking for are e.g. something like Eclipse's "Templates" so I pop in a code chunk with the cursor sitting in the middle of it. Or JEdit's "Macros" which I can record doing complicated deletes and renaming on one line, then I can play it again on 10 other lines so it does the same to them. Does vim have anything like these two functionalities? A: To record macros in Vim, in the command mode, hit the q key and another key you want to assign the macro to. For quick throw away macros I usually just hit qq and assign the macro to the q key. Once you are in recording mode, run through your key strokes. When you are done make sure you are back in command mode and hit q again to stop recording. Then to replay the macro manually, you can type @q. To replay the previously run macro you can type @@ or to run it 10 times you could type 10@q or 20@q, etc.. In summary: +----------------------------------+-------------------------------------+ | start recording a macro | qX (X = key to assign macro to) | +----------------------------------+-------------------------------------+ | stop recording a macro | q | +----------------------------------+-------------------------------------+ | playback macro | @X (X = key macro was assigned to) | +----------------------------------+-------------------------------------+ | replay previously played macro | @@ | +----------------------------------+-------------------------------------+ In regards to code chunks, I have found and started using a Vim plug-in called snipMate, which mimics TextMate's snippets feature. You can get the plug-in here: http://www.vim.org/scripts/script.php?script_id=2540 And a short article on using snipMate (along with a short screencast showing it in use): http://www.catonmat.net/blog/vim-plugins-snipmate-vim/ Hope you find this helpful! A: On vim.wikia, you will find a category of tips dedicated to snippets and abbreviations expansion. You will also see a list of plugins that ease the definition of complex snippets/templates-files. HTH,
{ "language": "en", "url": "https://stackoverflow.com/questions/54401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I (or can I) SELECT DISTINCT on multiple columns? I need to retrieve all rows from a table where 2 columns combined are all different. So I want all the sales that do not have any other sales that happened on the same day for the same price. The sales that are unique based on day and price will get updated to an active status. So I'm thinking: UPDATE sales SET status = 'ACTIVE' WHERE id IN (SELECT DISTINCT (saleprice, saledate), id, count(id) FROM sales HAVING count = 1) But my brain hurts going any farther than that. A: SELECT DISTINCT a,b,c FROM t is roughly equivalent to: SELECT a,b,c FROM t GROUP BY a,b,c It's a good idea to get used to the GROUP BY syntax, as it's more powerful. For your query, I'd do it like this: UPDATE sales SET status='ACTIVE' WHERE id IN ( SELECT id FROM sales S INNER JOIN ( SELECT saleprice, saledate FROM sales GROUP BY saleprice, saledate HAVING COUNT(*) = 1 ) T ON S.saleprice=T.saleprice AND s.saledate=T.saledate ) A: If your DBMS doesn't support distinct with multiple columns like this: select distinct(col1, col2) from table Multi select in general can be executed safely as follows: select distinct * from (select col1, col2 from table ) as x As this can work on most of the DBMS and this is expected to be faster than group by solution as you are avoiding the grouping functionality. A: If you put together the answers so far, clean up and improve, you would arrive at this superior query: UPDATE sales SET status = 'ACTIVE' WHERE (saleprice, saledate) IN ( SELECT saleprice, saledate FROM sales GROUP BY saleprice, saledate HAVING count(*) = 1 ); Which is much faster than either of them. Nukes the performance of the currently accepted answer by factor 10 - 15 (in my tests on PostgreSQL 8.4 and 9.1). But this is still far from optimal. Use a NOT EXISTS (anti-)semi-join for even better performance. EXISTS is standard SQL, has been around forever (at least since PostgreSQL 7.2, long before this question was asked) and fits the presented requirements perfectly: UPDATE sales s SET status = 'ACTIVE' WHERE NOT EXISTS ( SELECT FROM sales s1 -- SELECT list can be empty for EXISTS WHERE s.saleprice = s1.saleprice AND s.saledate = s1.saledate AND s.id <> s1.id -- except for row itself ) AND s.status IS DISTINCT FROM 'ACTIVE'; -- avoid empty updates. see below db<>fiddle here Old sqlfiddle Unique key to identify row If you don't have a primary or unique key for the table (id in the example), you can substitute with the system column ctid for the purpose of this query (but not for some other purposes): AND s1.ctid <> s.ctid Every table should have a primary key. Add one if you didn't have one, yet. I suggest a serial or an IDENTITY column in Postgres 10+. Related: * *In-order sequence generation *Auto increment table column How is this faster? The subquery in the EXISTS anti-semi-join can stop evaluating as soon as the first dupe is found (no point in looking further). For a base table with few duplicates this is only mildly more efficient. With lots of duplicates this becomes way more efficient. Exclude empty updates For rows that already have status = 'ACTIVE' this update would not change anything, but still insert a new row version at full cost (minor exceptions apply). Normally, you do not want this. Add another WHERE condition like demonstrated above to avoid this and make it even faster: If status is defined NOT NULL, you can simplify to: AND status <> 'ACTIVE'; The data type of the column must support the <> operator. Some types like json don't. 
See: * *How to query a json column for empty objects? Subtle difference in NULL handling This query (unlike the currently accepted answer by Joel) does not treat NULL values as equal. The following two rows for (saleprice, saledate) would qualify as "distinct" (though looking identical to the human eye): (123, NULL) (123, NULL) Also passes in a unique index and almost anywhere else, since NULL values do not compare equal according to the SQL standard. See: * *Create unique constraint with null columns OTOH, GROUP BY, DISTINCT or DISTINCT ON () treat NULL values as equal. Use an appropriate query style depending on what you want to achieve. You can still use this faster query with IS NOT DISTINCT FROM instead of = for any or all comparisons to make NULL compare equal. More: * *How to delete duplicate rows without unique identifier If all columns being compared are defined NOT NULL, there is no room for disagreement. A: The problem with your query is that when using a GROUP BY clause (which you essentially do by using distinct) you can only use columns that you group by or aggregate functions. You cannot use the column id because there are potentially different values. In your case there is always only one value because of the HAVING clause, but most RDBMS are not smart enough to recognize that. This should work however (and doesn't need a join): UPDATE sales SET status='ACTIVE' WHERE id IN ( SELECT MIN(id) FROM sales GROUP BY saleprice, saledate HAVING COUNT(id) = 1 ) You could also use MAX or AVG instead of MIN, it is only important to use a function that returns the value of the column if there is only one matching row. A: I want to select the distinct values from one column 'GrondOfLucht' but they should be sorted in the order as given in the column 'sortering'. I cannot get the distinct values of just one column using Select distinct GrondOfLucht,sortering from CorWijzeVanAanleg order by sortering It will also give the column 'sortering' and because 'GrondOfLucht' AND 'sortering' is not unique, the result will be ALL rows. use the GROUP to select the records of 'GrondOfLucht' in the order given by 'sortering SELECT GrondOfLucht FROM dbo.CorWijzeVanAanleg GROUP BY GrondOfLucht, sortering ORDER BY MIN(sortering)
{ "language": "en", "url": "https://stackoverflow.com/questions/54418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "527" }
Q: How to host 2 WCF services in 1 Windows Service? I have a WCF application that has two Services that I am trying to host in a single Windows Service using net.tcp. I can run either of the services just fine, but as soon as I try to put them both in the Windows Service only the first one loads up. I have determined that the second services ctor is being called but the OnStart never fires. This tells me that WCF is finding something wrong with loading up that second service. Using net.tcp I know I need to turn on port sharing and start the port sharing service on the server. This all seems to be working properly. I have tried putting the services on different tcp ports and still no success. My service installer class looks like this: [RunInstaller(true)] public class ProjectInstaller : Installer { private ServiceProcessInstaller _process; private ServiceInstaller _serviceAdmin; private ServiceInstaller _servicePrint; public ProjectInstaller() { _process = new ServiceProcessInstaller(); _process.Account = ServiceAccount.LocalSystem; _servicePrint = new ServiceInstaller(); _servicePrint.ServiceName = "PrintingService"; _servicePrint.StartType = ServiceStartMode.Automatic; _serviceAdmin = new ServiceInstaller(); _serviceAdmin.ServiceName = "PrintingAdminService"; _serviceAdmin.StartType = ServiceStartMode.Automatic; Installers.AddRange(new Installer[] { _process, _servicePrint, _serviceAdmin }); } } and both services looking very similar class PrintService : ServiceBase { public ServiceHost _host = null; public PrintService() { ServiceName = "PCTSPrintingService"; CanStop = true; AutoLog = true; } protected override void OnStart(string[] args) { if (_host != null) _host.Close(); _host = new ServiceHost(typeof(Printing.ServiceImplementation.PrintingService)); _host.Faulted += host_Faulted; _host.Open(); } } A: Type serviceAServiceType = typeof(AfwConfigure); Type serviceAContractType = typeof(IAfwConfigure); Type serviceBServiceType = typeof(ConfigurationConsole); Type serviceBContractType = typeof(IConfigurationConsole); Type serviceCServiceType = typeof(ConfigurationAgent); Type serviceCContractType = typeof(IConfigurationAgent); ServiceHost serviceAHost = new ServiceHost(serviceAServiceType); ServiceHost serviceBHost = new ServiceHost(serviceBServiceType); ServiceHost serviceCHost = new ServiceHost(serviceCServiceType); Debug.WriteLine("Enter1"); serviceAHost.Open(); Debug.WriteLine("Enter2"); serviceBHost.Open(); Debug.WriteLine("Enter3"); serviceCHost.Open(); Debug.WriteLine("Opened!!!!!!!!!"); A: Base your service on this MSDN article and create two service hosts. But instead of actually calling each service host directly, you can break it out to as many classes as you want which defines each service you want to run: internal class MyWCFService1 { internal static System.ServiceModel.ServiceHost serviceHost = null; internal static void StartService() { if (serviceHost != null) { serviceHost.Close(); } // Instantiate new ServiceHost. serviceHost = new System.ServiceModel.ServiceHost(typeof(MyService1)); // Open myServiceHost. serviceHost.Open(); } internal static void StopService() { if (serviceHost != null) { serviceHost.Close(); serviceHost = null; } } }; In the body of the windows service host, call the different classes: // Start the Windows service. protected override void OnStart( string[] args ) { // Call all the set up WCF services... 
MyWCFService1.StartService(); //MyWCFService2.StartService(); //MyWCFService3.StartService(); } Then you can add as many WCF services as you like to one Windows service host. REMEMBER to call the stop methods as well.... A: If you want one Windows service to start two WCF services, you'll need one ServiceInstaller that has two ServiceHost instances, both of which are started in the (single) OnStart method. You might want to follow the pattern for ServiceInstaller that's in the template code when you choose to create a new Windows Service in Visual Studio - in general this is a good place to start. A: You probably just need 2 service hosts: _host1 and _host2.
{ "language": "en", "url": "https://stackoverflow.com/questions/54419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What is the design pattern for processing command line arguments If you are writing a program that is executable from the command line, you often want to offer the user several options or flags, along with possibly more than one argument. I have stumbled my way through this many times, but is there some sort of design pattern for looping through args and calling the appropriate handler functions? Consider: myprogram -f filename -d directory -r regex How do you organize the handler functions after you retrieve the arguments using whatever built-ins for your language? (language-specific answers welcomed, if that helps you articulate an answer) A: You didn't mention the language, but for Java we've loved Apache Commons CLI. For C/C++, getopt. A: Well, its an old post but i would still like to contribute. The question was intended on choice of design patterns however i could see a lot of discussion on which library to use. I have checked out microsoft link as per lindsay which talks about template design pattern to use. However, i am not convinced with the post. Template pattern's intent is to define a template which will be implemented by various other classes to have uniform behavior. I don't think parsing command line fits into it. I would rather go with "Command" design pattern. This pattern is best fit for menu driven options. http://www.blackwasp.co.uk/Command.aspx so in your case, -f, -d and -r all becomes commands which has common or separate receiver defined. That way more receivers can be defined in future. The next step will be to chain these responsibilities of commands, in case there a processing chain required. For which i would choose. http://www.blackwasp.co.uk/ChainOfResponsibility.aspx I guess the combination of these two are best to organize the code for command line processing or any menu driven approach. A: A few comments on this... First, while there aren't any patterns per se, writing a parser is essentially a mechanical exercise, since given a grammar, a parser can be easily generated. Tools like Bison, and ANTLR come to mind. That said, parser generators are usually overkill for the command line. So the usual pattern is to write one yourself (as others have demonstrated) a few times until you get sick of dealing with the tedious detail and find a library to do it for you. I wrote one for C++ that saves a bunch of effort that getopt imparts and makes nice use of templates: TCLAP A: The boost::program_options library is nice if you're in C++ and have the luxury of using Boost. A: Assuming you have a "config" object that you aim to setup with the flags and a suitable command line parser that takes care of parsing the command line and supply a constant stream of the options, here goes a block of pseudocode while (current_argument = cli_parser_next()) { switch(current_argument) { case "f": //Parser strips the dashes case "force": config->force = true; break; case "d": case "delete": config->delete = true; break; //So on and so forth default: printUsage(); exit; } } A: I prefer options like "-t text" and "-i 44"; I don't like "-fname" or "--very-long-argument=some_value". And "-?", "-h", and "/h" all produce a help screen. Here's how my code looks: int main (int argc, char *argv[]) { int i; char *Arg; int ParamX, ParamY; char *Text, *Primary; // Initialize... ParamX = 1; ParamY = 0; Text = NULL; Primary = NULL; // For each argument... 
for (i = 0; i < argc; i++) { // Get the next argument and see what it is Arg = argv[i]; switch (Arg[0]) { case '-': case '/': // It's an argument; which one? switch (Arg[1]) { case '?': case 'h': case 'H': // A cry for help printf ("Usage: whatever...\n\n"); return (0); break; case 't': case 'T': // Param T requires a value; is it there? i++; if (i >= argc) { printf ("Error: missing value after '%s'.\n\n", Arg); return (1); } // Just remember this Text = Arg; break; case 'x': case 'X': // Param X requires a value; is it there? i++; if (i >= argc) { printf ("Error: missing value after '%s'.\n\n", Arg); return (1); } // The value is there; get it and convert it to an int (1..10) Arg = argv[i]; ParamX = atoi (Arg); if ((ParamX == 0) || (ParamX > 10)) { printf ("Error: invalid value for '%s'; must be between 1 and 10.\n\n", Arg); return (1); } break; case 'y': case 'Y': // Param Y doesn't expect a value after it ParamY = 1; break; default: // Unexpected argument printf ("Error: unexpected parameter '%s'; type 'command -?' for help.\n\n", Arg); return (1); break; } break; default: // It's not a switch that begins with '-' or '/', so it's the primary option Primary = Arg; break; } } // Done return (0); } A: I'm riffing on the ANTLR answer by mes5k. This link to Codeproject is for an article that discusses ANLTR and using the visit pattern to implement the actions you want you app to take. It's well written and worth reviewing. A: I think the following answer is more along the lines of what you are looking for: You should look at applying the Template Pattern (Template Method in "Design Patterns" [Gamma, el al]) In short it's overall processing looks like this: If the arguments to the program are valid then Do necessary pre-processing For every line in the input Do necessary input processing Do necessary post-processing Otherwise Show the user a friendly usage message In short, implement a ConsoleEngineBase class that has methods for: PreProcess() ProcessLine() PostProcess() Usage() Main() Then create a chassis, that instantiates a ConsoleEngine() instance and sends the Main() message to kick it off. To see a good example of how to apply this to a console or command line program check out the following link: http://msdn.microsoft.com/en-us/magazine/cc164014.aspx The example is in C#, but the ideas are easily implemented in any other environment. You would look at the GetOpt() as just the part that fit's into the argument handling (pre-processing). Hope this helps. A: I don't know of any documented "patterns" for processing. I believe one of the oldest libraries/APIs for handling arguments is getopt. Googling "getopt" shows lots of man pages and links to implementations. Generally, I have a preferences or settings service in my application that the argument processor knows how to communicate with. Arguments are then translated into something in this service that the application than then query. This could be as simple as a dictionary of settings (like a string setting named "filename"). A: I would recommend using a command line processor library. Some Russian guy created a decent one, but there are tons of them out there. Will save you some time so you can concentrate on the purpose of your app rather than parsing command line switches! A: Getopt is the only way to go. http://sourceforge.net/projects/csharpoptparse A: How about the interpreter pattern? 
http://www.dofactory.com/net/interpreter-design-pattern A: You don't mention a language for this but if you are looking for a really nice Objective-C wrapper around getopt then Dave Dribin's DDCLI framework is really nice. http://www.dribin.org/dave/blog/archives/2008/04/29/ddcli A: I use the Getopts::std and Getopts::long in Perl and also the Getopt function in C. This standardises the parsing and format of parameters. Other languages have different mechanisms for handling these. Hope this helps A: The standard design usually follows what getopt does; there are getopt libraries for many languages: .NET, Python, C, Perl, PHP, etc. The basic design is to have a command line parser which returns the arguments one at a time, to be checked in a loop. This article discusses it in some more detail.
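As a concrete (if lightweight) C# illustration of the handler-per-option idea discussed above - closer to a table of delegates than a full Command-pattern class hierarchy, and using made-up flag names matching the question's example:

using System;
using System.Collections.Generic;

class Options
{
    public string FileName;
    public string Directory;
    public string Regex;
}

class Program
{
    static int Main(string[] args)
    {
        var options = new Options();

        // One handler per recognised switch; adding a new option is one more entry.
        var handlers = new Dictionary<string, Action<string>>
        {
            { "-f", v => options.FileName = v },
            { "-d", v => options.Directory = v },
            { "-r", v => options.Regex = v }
        };

        for (int i = 0; i < args.Length; i++)
        {
            Action<string> handler;
            if (handlers.TryGetValue(args[i], out handler) && i + 1 < args.Length)
            {
                handler(args[++i]); // consume the switch's value
            }
            else
            {
                Console.Error.WriteLine("Unexpected argument: " + args[i]);
                return 1;
            }
        }

        Console.WriteLine("file={0} dir={1} regex={2}",
            options.FileName, options.Directory, options.Regex);
        return 0;
    }
}

The same shape extends naturally: each dictionary entry can become a proper command object (with Validate/Execute methods) if the options need more behaviour than setting a field.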
{ "language": "en", "url": "https://stackoverflow.com/questions/54421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: What's cleanest, shortest Javascript to submit a URL the user is at to another process via URL? Like the Delicious submission bookmark-let, I'd like to have some standard JavaScript I can use to submit any visited URL to a 3rd party site when that's possible by URL. Suggestions? For example, I've been using javascript:void(location.href="http://www.yacktrack.com/home?query="+encodeURI(location.href)) so far but wonder if there's something more sophisticated I could use or better practice. A: Do you want something exactly like the Delicious bookmarklet (as in, something the user actively clicks on to submit the URL)? If so, you could probably just copy their code and replace the target URL: javascript:(function(){ location.href='http://example.com/your-script.php?url='+ encodeURIComponent(window.location.href)+ '&title='+encodeURIComponent(document.title) })() You may need to change the query string names, etc., to match what your script expects. If you want to track a user through your website automatically, this probably won't be possible. You'd need to request the URL with AJAX, but the web browser won't allow Javascript to make a request outside of the originating domain. Maybe it's possible with iframe trickery. Edit: John beat me to it. A: document.location = "http://url_submitting_to.com?query_string_param=" + window.location; A: Another option would be to something like this: <form action="http://www.yacktrack.com/home" method="get" name="f"> <input type="hidden" name="query" /> </form> then your javascript would be: f.query.value=location.href; f.submit(); or you could combine the [save link] with the submit like this: <form action="http://www.yacktrack.com/home" method="get" name="f" onsubmit="f.query.value=location.href;"> <input type="hidden" name="query" /> <input type="submit" name="Save Link" /> </form> and if you're running server-side code, you can plug in the location so you can be JavaScript-free: <form action="http://www.yacktrack.com/home" method="get" name="f"> <input type="hidden" name="query" value="<%=Response.Url%>" /> <input type="submit" name="Save Link" /> </form>
{ "language": "en", "url": "https://stackoverflow.com/questions/54426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Adding Items Using DataBinding from TreeView to ListBox WPF I want to add the selected item from the TreeView to the ListBox control using DataBinding (If it can work with DataBinding). <TreeView HorizontalAlignment="Left" Margin="30,32,0,83" Name="treeView1" Width="133" > </TreeView> <ListBox VerticalAlignment="Top" Margin="208,36,93,0" Name="listBox1" Height="196" > </ListBox> TreeView is populated from the code behind page with some dummy data. A: You can bind to an element using ElementName, so if you wanted to bind the selected tree item to the ItemsSource of a ListBox: ItemsSource="{Binding SelectedItem, ElementName=treeView1}" A: I'm pretty sure it is possible, since WPF is really flexible with data binding, but I haven't done that specific scenario yet. I've been following a WPF Databinding FAQ from the MSDN blogs as of late and it provides a lot of insights that might help.
{ "language": "en", "url": "https://stackoverflow.com/questions/54440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Aggressive JavaScript caching I've run into a problem where I make changes to a few JavaScript files that are referenced in an HTML file, but the browser doesn't see the changes. It holds onto the copy cached in the browser, even though the web server has a newer version. Not until I force the browser to clear the cache do I see the changes. Is this a web-server configuration? Do I need to set my JavaScript files to never cache? I've seen some interesting techniques in the Google Web Toolkit where they actually create a new JavaScript file name any time an update is made. I believe this is to prevent proxies and browsers from keeping old versions of the JavaScript files with the same names. Is there a list of best practices somewhere? A: It holds onto the copy cached in the browser, even though the web server has a newer version. This is probably because the HTTP Expires / Cache-Control headers are set. http://developer.yahoo.com/performance/rules.html#expires I wrote about this here: http://www.codinghorror.com/blog/archives/000932.html This isn't bad advice, per se, but it can cause huge problems if you get it wrong. In Microsoft's IIS, for example, the Expires header is always turned off by default, probably for that very reason. By setting an Expires header on HTTP resources, you're telling the client to never check for new versions of that resource -- at least not until the expiration date on the Expires header. When I say never, I mean it -- the browser won't even ask for a new version; it'll just assume its cached version is good to go until the client clears the cache, or the cache reaches the expiration date. Yahoo notes that they change the filename of these resources when they need them refreshed. All you're really saving here is the cost of the client pinging the server for a new version and getting a 304 not modified header back in the common case that the resource hasn't changed. That's not much overhead.. unless you're Yahoo. Sure, if you have a set of images or scripts that almost never change, definitely exploit client caching and turn on the Cache-Control header. Caching is critical to browser performance; every web developer should have a deep understanding of how HTTP caching works. But only use it in a surgical, limited way for those specific folders or files that can benefit. For anything else, the risk outweighs the benefit. It's certainly not something you want turned on as a blanket default for your entire website.. unless you like changing filenames every time the content changes. A: @Jason and Darren IE6 treats anything with a query string as uncacheable. You should find another way to get the version number into the url, such as a fake directory: <script src="/js/version/MyScript.js"/> and just remove that first directory level after js on the server side before fulfilling the request. EDIT: Sorry all; it is Squid, not IE6, that won't cache with a query string. More info here. A: I've written a blog post about how we overcame this problem here: Avoiding JavaScript and CSS Stylesheet Caching Problems in ASP.NET Basically, during development you can add a random number to a query string after the filename of your CSS file. When you do a release build, the code switches to using your assembly's revision number instead. This means that in your production environment, your clients can cache the stylesheet, but whenever you release a new version of the site they'll be forced to re-load the file. 
A: We append a product build number to the end of all Javascript (and CSS etc.) like so: <script src="MyScript.js?4.0.8243"> Browsers ignore everything after the question mark but upgrades cause a new URL which means cache-reload. This has the additional benefit that you can set HTTP headers that mean "never cache!" A: I am also of the method of just renaming things. It never fails, and is fairly easy to do. A: is your webserver sending the right headers to tell the browser it has a new version? I've also added the date to the querystring before. ie myscripts.js?date=4/14/2008 12:45:03 (only the date would be encoded) A: With every release, we simply prepend a monotonically increasing integer to the root path of all our static assets, which forces the client to reload (we've seen the query string method break in IE6 before). For example: * *Release 1: http://www.foo.com/1/js/foo.js *Release 2: http://www.foo.com/2/js/foo.js It requires rejiggering links with each release, but we've built functionality to automatically change the links into our deployment tools. Once you do this, you can use Expires/Cache-Control headers that let the client cache JS resources "forever", since the path changes with each release, which i think is what @JasonCohen was getting at. A: Some very useful techniques in here even if you are not planning to use powershell to automate deployment. A: For what it is worth, I saw deviantART site, quite a big one, serving their JS files as 54504.js. I just checked and see they now serve them as v6core.css?-5855446573 v6core_jc.js?4150339741 etc. If the problem of query string comes from the server, I suppose you can control that more or less. A: I've resorted to dunking the class I'm working on into the main script which does not seem to suffer from the same aggressive cacheing problems as those called from main script. (For some reason? - Using Firefox). Can't see anyway to send a cache header without making things more complex like javascripts served as an asp/php page with a cache header. I'm sure it makes browsers work faster and more efficiently but it's a real PITA for development. Anyone who wants and knows how to do an RFC for HTML6, it would be good if we could go: <script src='script.js' max-age=0 nocache />
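To make the build-number idea above concrete, here is a minimal C# sketch of a helper that stamps a script URL with the executing assembly's version; the class and method names are made up for illustration, and a page would call it when emitting the script tag:

using System;
using System.Reflection;

public static class ScriptUrlHelper
{
    // Returns e.g. "MyScript.js?4.0.8243.0" so every new build produces a
    // new URL and forces browsers/proxies to fetch a fresh copy.
    public static string WithBuildNumber(string path)
    {
        Version v = Assembly.GetExecutingAssembly().GetName().Version;
        return path + "?" + v.ToString();
    }
}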
{ "language": "en", "url": "https://stackoverflow.com/questions/54475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: How do I list user defined types in a SQL Server database? I need to enumerate all the user defined types created in a SQL Server database with CREATE TYPE, and/or find out whether they have already been defined. With tables or stored procedures I'd do something like this: if exists (select * from dbo.sysobjects where name='foobar' and xtype='U') drop table foobar However I can't find the equivalent (or a suitable alternative) for user defined types! I definitely can't see them anywhere in sysobjects. Can anyone enlighten me? A: Types and UDTs don't appear in sys.objects. You should be able to get what you're looking for with the following: select * from sys.types where is_user_defined = 1 A: Original comment: To expand on jwolly2's answer, here's how you get a list of definitions including the standard data type: Edit in Comment: I have just added an update to the Query aliasing/formatting to make the query more readable and updated the join key used to eliminate the need to filter out duplicate "system_type_id" values when "is_user_defined" = 0. The idea here is that we can find information about types in the sys.types table. * *When "is_user_defined" = 0, it is a built in type *When "system_type_id" matches "user_type_id" on the same record, it is a system type. *When "is_user_defined" = 1 the related system type will have the same "user_type_id" as the "system_type_id" on the user defined type *The "max_length" field refers to the max length in bytes (as opposed to characters - NVARCHAR(10) would be 20 / VARCHAR(10) would be 10) Type Info Query: SELECT UserType.[name] AS UserType , SystemType.[name] AS SystemType , UserType.[precision] , UserType.scale , UserType.max_length AS bytes --This value indicates max number of bytes as opposed to max length in characters -- NVARCHAR(10) would be 20 / VARCHAR(10) would be 10 , UserType.is_nullable FROM sys.types UserType JOIN sys.types SystemType ON SystemType.user_type_id = UserType.system_type_id AND SystemType.is_user_defined = 0 WHERE UserType.is_user_defined = 1 ORDER BY UserType.[name]; A: Although the post is old, I found it useful to use a query similar to this. You may not find some of the formatting useful, but I wanted the fully qualified type name and I wanted to see the columns listed in order. You can just remove all of the SUBSTRING stuff to just get the column name by itself. SELECT USER_NAME(TYPE.schema_id) + '.' + TYPE.name AS "Type Name", COL.column_id, SUBSTRING(CAST(COL.column_id + 100 AS char(3)), 2, 2) + ': ' + COL.name AS "Column", ST.name AS "Data Type", CASE COL.Is_Nullable WHEN 1 THEN '' ELSE 'NOT NULL' END AS "Nullable", COL.max_length AS "Length", COL.[precision] AS "Precision", COL.scale AS "Scale", ST.collation AS "Collation" FROM sys.table_types TYPE JOIN sys.columns COL ON TYPE.type_table_object_id = COL.object_id JOIN sys.systypes AS ST ON ST.xtype = COL.system_type_id where TYPE.is_user_defined = 1 ORDER BY "Type Name", COL.column_id
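To mirror the IF EXISTS pattern from the question for types rather than tables, here is a minimal T-SQL sketch, assuming a user-defined type named foobar in the dbo schema:

-- Drop the user-defined type only if it already exists,
-- analogous to the sysobjects check used for tables in the question.
IF EXISTS (SELECT * FROM sys.types WHERE is_user_defined = 1 AND name = 'foobar')
    DROP TYPE dbo.foobar;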
{ "language": "en", "url": "https://stackoverflow.com/questions/54482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: Conditional Number Formatting In Java How can I format Floats in Java so that the float component is displayed only if it's not zero? For example: 123.45 -> 123.45 99.0 -> 99 23.2 -> 23.2 45.0 -> 45 Edit: I forgot to mention - I'm still on Java 1.4 - sorry! A: If you use DecimalFormat and specify # in the pattern it only displays the value if it is not zero. See my question How do I format a number in java? Sample Code DecimalFormat format = new DecimalFormat("###.##"); double[] doubles = {123.45, 99.0, 23.2, 45.0}; for(int i=0;i<doubles.length;i++){ System.out.println(format.format(doubles[i])); } A: Check out the DecimalFormat class, e.g. new DecimalFormat("0.##").format(99.0) will return "99". A: new Formatter().format( "%f", myFloat )
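As a quick check of the DecimalFormat pattern against the examples in the question, here is a small self-contained sketch; the printed output assumes a locale that uses '.' as the decimal separator:

import java.text.DecimalFormat;

public class ConditionalFormatDemo {
    public static void main(String[] args) {
        // "0.##" prints up to two fraction digits and omits them when zero;
        // DecimalFormat is available on Java 1.4 as well.
        DecimalFormat f = new DecimalFormat("0.##");
        System.out.println(f.format(123.45)); // 123.45
        System.out.println(f.format(99.0));   // 99
        System.out.println(f.format(23.2));   // 23.2
        System.out.println(f.format(45.0));   // 45
    }
}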
{ "language": "en", "url": "https://stackoverflow.com/questions/54487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Storing Images in PostgreSQL Alright, so I'm working on an application which will use a Linux back-end running PostgreSQL to serve up images to a Windows box with the front end written in C#.NET, though the front-end should hardly matter. My question is: * *What is the best way to deal with storing images in Postgres? The images are around 4-6 megapixels each, and we're storing upwards of 3000. It might also be good to note: this is not a web application, there will at most be about two front-ends accessing the database at once. A: 2022 Answer The most common pattern now is to only store a reference to the image in your database, and store the image itself in a filesystem (i.e. S3 bucket). The benefit is that your database backups are smaller, there's no longer a single point of failure, load can now be distributed away from the database, and cloud storage buckets are often cheaper than database storage. The negative is that you have to manage images in two locations - delete one image and your app needs to keep track and delete it from the other. A: Updating to 2012, when we see that image sizes, and number of images, are growing and growing, in all applications... We need some distinction between "original image" and "processed image", like thumbnail. As Jcoby's answer says, there are two options, then, I recommend: * *use blob (Binary Large OBject): for original image store, at your table. See Ivan's answer (no problem with backing up blobs!), PostgreSQL additional supplied modules, How-tos etc. *use a separate database with DBlink: for original image store, at another (unified/specialized) database. In this case, I prefer bytea, but blob is near the same. Separating database is the best way for a "unified image webservice". *use bytea (BYTE Array): for caching thumbnail images. Cache the little images to send it fast to the web-browser (to avoiding rendering problems) and reduce server processing. Cache also essential metadata, like width and height. Database caching is the easiest way, but check your needs and server configs (ex. Apache modules): store thumbnails at file system may be better, compare performances. Remember that it is a (unified) web-service, then can be stored at a separate database (with no backups), serving many tables. See also PostgreSQL binary data types manual, tests with bytea column, etc. NOTE1: today the use of "dual solutions" (database+filesystem) is deprecated (!). There are many advantages to using "only database" instead dual. PostgreSQL have comparable performance and good tools for export/import/input/output. NOTE2: remember that PostgreSQL have only bytea, not have a default Oracle's BLOB: "The SQL standard defines (...) BLOB. The input format is different from bytea, but the provided functions and operators are mostly the same",Manual. EDIT 2014: I have not changed the original text above today (my answer was Apr 22 '12, now with 14 votes), I am opening the answer for your changes (see "Wiki mode", you can edit!), for proofreading and for updates. The question is stable (@Ivans's '08 answer with 19 votes), please, help to improve this text. A: Re jcoby's answer: bytea being a "normal" column also means the value being read completely into memory when you fetch it. Blobs, in contrast, you can stream into stdout. That helps in reducing the server memory footprint. Especially, when you store 4-6 MPix images. No problem with backing up blobs. pg_dump provides "-b" option to include the large objects into the backup. 
So, I prefer using pg_lo_*, you may guess. Re Kris Erickson's answer: I'd say the opposite :). When images are not the only data you store, don't store them on the file system unless you absolutely have to. It's such a benefit to be always sure about your data consistency, and to have the data "in one piece" (the DB). BTW, PostgreSQL is great in preserving consistency. However, true, reality is often too performance-demanding ;-), and it pushes you to serve the binary files from the file system. But even then I tend to use the DB as the "master" storage for binaries, with all the other relations consistently linked, while providing some file system-based caching mechanism for performance optimization. A: In the database, there are two options: * *bytea. Stores the data in a column, exported as part of a backup. Uses standard database functions to save and retrieve. Recommended for your needs. *blobs. Stores the data externally, not normally exported as part of a backup. Requires special database functions to save and retrieve. I've used bytea columns with great success in the past storing 10+gb of images with thousands of rows. PG's TOAST functionality pretty much negates any advantage that blobs have. You'll need to include metadata columns in either case for filename, content-type, dimensions, etc. A: Quick update to mid 2015: You can use the Postgres Foreign Data interface, to store the files in more suitable database. For example put the files in a GridFS which is part of MongoDB. Then use https://github.com/EnterpriseDB/mongo_fdw to access it in Postgres. That has the advantages, that you can access/read/write/backup it in Postrgres and MongoDB, depending on what gives you more flexiblity. There are also foreign data wrappers for file systems: https://wiki.postgresql.org/wiki/Foreign_data_wrappers#File_Wrappers As an example you can use this one: https://multicorn.readthedocs.org/en/latest/foreign-data-wrappers/fsfdw.html (see here for brief usage example) That gives you the advantage of the consistency (all linked files are definitely there) and all the other ACIDs, while there are still on the actual file system, which means you can use any file system you want and the webserver can serve them directly (OS caching applies too). A: Update from 10 years later In 2008 the hard drives you would run a database on would have much different characteristics and much higher cost than the disks you would store files on. These days there are much better solutions for storing files that didn't exist 10 years ago and I would revoke this advice and advise readers to look at some of the other answers in this thread. Original Don't store in images in the database unless you absolutely have to. I understand that this is not a web application, but if there isn't a shared file location that you can point to save the location of the file in the database. //linuxserver/images/imagexxx.jpg then perhaps you can quickly set up a webserver and store the web urls in the database (as well as the local path). While databases can handle LOB's and 3000 images (4-6 Megapixels, assuming 500K an image) 1.5 Gigs isn't a lot of space file systems are much better designed for storing large files than a database is. A: If your images are small, consider storing them as base64 in a plain text field. The reason is that while base64 has an overhead of 33%, with compression that mostly goes away. (See What is the space overhead of Base64 encoding?) 
Your database will be bigger, but the packets your webserver sends to the client won't be. In html, you can inline base64 in an <img src=""> tag, which can possibly simplify your app because you won't have to serve up the images as binary in a separate browser fetch. Handling images as text also simplifies things when you have to send/receive json, which doesn't handle binary very well. Yes, I understand you could store the binary in the database and convert it to/from text on the way in and out of the database, but sometimes ORMs make that a hassle. It can be simpler just to treat it as straight text just like all your other fields. This is definitely the right way to handle thumbnails. (OP's images are not small, so this is not really an answer to his question.)
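For the bytea route discussed above, a minimal table sketch; the column names are illustrative, and the extra columns are the metadata the answers suggest keeping next to the image:

-- One row per image: the binary data plus the recommended metadata.
CREATE TABLE images (
    id            serial  PRIMARY KEY,
    filename      text    NOT NULL,
    content_type  text    NOT NULL,
    width         integer,
    height        integer,
    data          bytea   NOT NULL
);

-- Inserts are done with a parameterized query from the client, e.g.:
-- INSERT INTO images (filename, content_type, width, height, data)
-- VALUES ($1, $2, $3, $4, $5);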
{ "language": "en", "url": "https://stackoverflow.com/questions/54500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "143" }
Q: Problem with .net app under linux, doesn't work from shell script I'm working on a .net post-commit hook to feed data into OnTime via their Soap SDK. My hook works on Windows fine, but on our production RHEL4 subversion server, it won't work when called from a shell script. #!/bin/sh /usr/bin/mono $1/hooks/post-commit.exe "$@" When I execute it with parameters from the command line, it works properly. When executed via the shell script, I get the following error: (looks like there is some problem with the process execution of SVN that I use to get the log data for the revision): Unhandled Exception: System.InvalidOperationException: The process must exit before getting the requested information. at System.Diagnostics.Process.get_ExitCode () [0x0003f] in /tmp/monobuild/build/BUILD/mono-1.9.1/mcs/class/System/System.Diagnostics/Process.cs:149 at (wrapper remoting-invoke-with-check) System.Diagnostics.Process:get_ExitCode () at SVNLib.SVN.Execute (System.String sCMD, System.String sParams, System.String sComment, System.String sUserPwd, SVNLib.SVNCallback callback) [0x00000] at SVNLib.SVN.Log (System.String sUrl, Int32 nRevLow, Int32 nRevHigh, SVNLib.SVNCallback callback) [0x00000] at SVNLib.SVN.LogAsString (System.String sUrl, Int32 nRevLow, Int32 nRevHigh) [0x00000] at SVNCommit2OnTime.Program.Main (System.String[] args) [0x00000] I've tried using mkbundle and mkbundle2 to make a stand alone that could be named post-commit, but I get a different error message: Unhandled Exception: System.ArgumentNullException: Argument cannot be null. Parameter name: Value cannot be null. at System.Guid.CheckNull (System.Object o) [0x00000] at System.Guid..ctor (System.String g) [0x00000] at SVNCommit2OnTime.Program.Main (System.String[] args) [0x00000] Any ideas why it might be failing from a shell script or what might be wrong with the bundled version? Edit: @Herms, I've already tried it with an echo, and it looks right. As for the $1/hooks/post-commit.exe, I've tried the script with and without a full path to the .net assembly with the same results. Edit: @Leon, I've tried both $1 $2 and "$@" with the same results. It is a subversion post commit hook, and it takes two parameters, so those need to be passed along to the .net assembly. The "$@" was what was recommended at the mono site for calling a .net assembly from a shell script. The shell script is executing the .net assembly and with the correct parameters, but it is throwing an exception that does not get thrown when run directly from the command line. Edit: @Vinko, I don't see any differences in the environment other than things like BASH_LINENO and BASH_SOURCE Edit: @Luke, I tired it, but that makes no difference either. I first noticed the problem when testing from TortoiseSVN on my machine (when it runs as a sub-process of the subversion daemon), but also found that I get the same results when executing the script from the hooks directory (i.e. ./post-commit REPOS REV, where post-commit is the above sh script. Doing mono post-commit.exe REPOS REV works fine. The main problem is that to execute, I need to have something of the name post-commit so that it will be called. But it does not work from a shell script, and as noted above, the mkbundle is not working with a different problem. A: It is normal for some processes to hang around for a while after they close their stdout (ie. you get an end-of-file reading from them). You need to call proc.WaitForExit() after reading all the data but before checking ExitCode. 
A: Just a random thought that might help with debugging. Try changing your shell script to: #!/bin/sh echo /usr/bin/mono $1/hooks/post-commit.exe "$@" Check and see if the line it prints matches the command you're expecting it to run. It's possible your command line argument handling in the shell script isn't doing what you want it to do. I don't know what your input to the script is expected to be, but the $1 before the path looks a bit out of place to me. A: Are you sure you want to do /usr/bin/mono $1/hooks/post-commit.exe "$@" $@ expands to ALL arguments. "$@" expands to all arguments join by spaces. I suspect you shell script is incorrect. You didn't state exactly what you wanted the script to do, so that does limit our possibilities to make suggestions. A: Compare the environment variables in your shell and from within the script. A: Try putting "cd $1/hooks/" before the line that runs mono. You may have some assemblies in that folder that are found when you run mono from that folder in the shell but are not being found when you run your script. A: After having verified that my code did work from the command line, I found that it was no longer working! I went looking into my .net code to see if anything made sense. Here is what I had: static public int Execute(string sCMD, string sParams, string sComment, string sUserPwd, SVNCallback callback) { System.Diagnostics.Process proc = new System.Diagnostics.Process(); proc.EnableRaisingEvents = false; proc.StartInfo.RedirectStandardOutput = true; proc.StartInfo.CreateNoWindow = true; proc.StartInfo.UseShellExecute = false; proc.StartInfo.Verb = "open"; proc.StartInfo.FileName = "svn"; proc.StartInfo.Arguments = Cmd(sCMD, sParams, sComment, UserPass()); proc.Start(); int nLine = 0; string sLine = ""; while ((sLine = proc.StandardOutput.ReadLine()) != null) { ++nLine; if (callback != null) { callback.Invoke(nLine, sLine); } } int errorCode = proc.ExitCode; proc.Close(); return errorCode; } I changed this: while (!proc.HasExited) { sLine = proc.StandardOutput.ReadLine(); if (sLine != null) { ++nLine; if (callback != null) { callback.Invoke(nLine, sLine); } } } int errorCode = proc.ExitCode; It looks like the Process is hanging around a bit longer than I'm getting output, and thus the proc.ExitCode is throwing an error.
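Pulling the thread's advice together (read all output, then wait for the process to exit before reading ExitCode), here is a minimal C# sketch; the svn arguments are placeholders:

using System.Diagnostics;

public class SvnRunner
{
    public static int Run()
    {
        Process proc = new Process();
        proc.StartInfo.FileName = "svn";
        proc.StartInfo.Arguments = "log -r 1007 file:///var/svn/repo"; // placeholder arguments
        proc.StartInfo.UseShellExecute = false;
        proc.StartInfo.RedirectStandardOutput = true;
        proc.Start();

        string line;
        while ((line = proc.StandardOutput.ReadLine()) != null)
        {
            // process each line of svn output here
        }

        proc.WaitForExit();           // make sure the process has exited...
        int exitCode = proc.ExitCode; // ...before reading ExitCode
        proc.Close();
        return exitCode;
    }
}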
{ "language": "en", "url": "https://stackoverflow.com/questions/54503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What is the best practice for writing Registry call/File System call/Process creation filters for WinXP, Vista? We need to monitor all processes' Registry calls, File System calls and Process creations in the system (for the antivirus HIPS module). From time to time it will also be necessary to delay or decline some of those calls. A: The supported method of doing this is RegNotifyChangeKeyValue. Most virus checkers likely perform some sort of API hooking instead of using this function. There's lots of information out there about API hooking, like http://www.codeproject.com/KB/system/hooksys.aspx, http://www.codeguru.com/cpp/w-p/system/misc/article.php/c5667

{ "language": "en", "url": "https://stackoverflow.com/questions/54504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How should I store short text strings into a SQL Server database? varchar(255), varchar(256), nvarchar(255), nvarchar(256), nvarchar(max), etc? 256 seems like a nice, round, space-efficient number. But I've seen 255 used a lot. Why? What's the difference between varchar and nvarchar? A: There are a couple of other points to consider when defining char/varchar and the N variations. First, there is some overhead to storing variable length strings in the database. A good general rule of thumb is to use CHAR for strings less than 10 chars long, since N/VARCHAR stores both the string and the length and the difference between storing short strings in N/CHAR vs. N/VARCHAR under 10 isn't worth the overhead of the string length. Second, a table in SQL server is stored on 8KB pages, so the max size of the row of data is 8060 bytes (the other 192 are used for overhead by SQL). That's why SQL allows a max defined column of VARCHAR(8000) and NVARCHAR(4000). Now, you can use VARCHAR(MAX) and the unicode version. But there can be extra overhead associated with that. If I'm not mistaken, SQL server will try to store the data on the same page as the rest of the row but, if you attempt to put too much data into a VARCHAR(Max) column, it will treat it as binary and store it on another page. Another big difference between CHAR and VARCHAR has to do with page splits. Given that SQL Server stores data in 8KB pages, you could have any number of rows of data stored on a page. If you UPDATE a VARCHAR column with a value that is large enough that the row will no longer fit on the page, the server will split that page, moving off some number of records. If the database has no available pages and the database is set to auto grow, the server will first grow the database to allocate blank pages to it, then allocate blank pages to the table and finally split the single page into two. A: If you will be supporting languages other than English, you will want to use nvarchar. HTML should be okay as long as it contains standard ASCII characters. I've used nvarchar mainly in databases that were multi-lingual support. A: Because there are 8-bits in 1 byte and so in 1 byte you can store up to 256 distinct values which is 0 1 2 3 4 5 ... 255 Note the first number is 0 so that's a total of 256 numbers. So if you use nvarchar(255) It'll use 1 byte to store the length of the string but if you tip over by 1 and use nvarchar(256) then you're wasting 1 more byte just for that extra 1 item off from 255 (since you need 2 bytes to store the number 256). That might not be the actual implementation of SQL server but I believe that is the typical reasoning for limiting things at 255 over 256 items. and nvarchar is for Unicode, which use 2+ bytes per character and varchar is for normal ASCII text which only use 1 byte A: IIRC, 255 is the max size of a varchar in MySQL before you had to switch to the text datatype, or was at some point (actually, I think it's higher now). So keeping it to 255 might buy you some compatibility there. You'll want to look this up before acting on it, though. varchar vs nvarchar is kinda like ascii vs unicode. varchar is limited to one byte per character, nvarchar can use two. That's why you can have a varchar(8000) but only an nvarchar(4000) A: Both varchar and nvarchar auto-size to the content, but the number you define when declaring the column type is a maximum. 
Values in "nvarchar" take up twice the disk/memory space as "varchar" because unicode is two-byte, but when you declare the column type you are declaring the number of characters, not bytes. So when you define a column type, you should determine the maximum number of characters that the column will ever need to hold and have that as the varchar (or nvarchar) size. A good rule of thumb is to estimate the maximum sting length the column needs to hold, then add support for about 10% more characters to it to avoid problems with unexpectedly long data in the future. A: varchar(255) was also the maximum length in SQL Server 7.0 and earlier. A: In MS SQL Server (7.0 and up), varchar data is represented internally with up to three values: * *The actual string of characters, which will be from 0 to something over 8000 bytes (it’s based on page size, the other columns stored for the row, and a few other factors) *Two bytes used to indicate how long the data string is (which produces a value from 0 to 8000+) *If the column is nullable, one bit in the row’s null bitmask (so the null status of up to eight nullable columns can be represented in one byte) The important part is that two-byte data length indicator. If it was one byte, you could only properly record strings of length 0 to 255; with two bytes, you can record strings of length 0 to something over 64000+ (specifically, 2^16 -1). However, the SQL Server page length is 8k, which is where that 8000+ character limit comes from. (There's data overflow stuff in SQL 2005, but if your strings are going to be that long you should just go with varchar(max).) So, no matter how long you declare your varchar datatype column to be (15, 127, 511), what you will actually be storing for each and every row is: * *2 bytes to indicate how long the string is *The actual string, i.e. the number of characters in that string Which gets me to my point: a number of older systems used only 1 byte to store the string length, and that limited you to a maximum length of 255 characters, which isn’t all that long. With 2 bytes, you have no such arbitrary limit... and so I recommend picking a number that makes sense to the (presumed non-technically oriented) user. , I like 50, 100, 250, 500, even 1000. Given that base of 8000+ bytes of storage, 255 or 256 is just as efficient as 200 or 250, and less efficient when it comes time to explain things to the end users. This applies to single byte data (i.e. ansii, SQL_Latin1*_*General_CP1, et. al.). If you have to store data for multiple code pages or languages using different alphabets, you’ll need to work with the nvarchar data type (which I think works the same, two bytes for number of charactesr, but each actual character of data requires two bytes of storage). If you have strings likely to go over 8000, or over 4000 in nvarchar, you will need to use the [n]varchar(max) datatypes. And if you want to know why it is so very important to take up space with extra bytes just to track how long the data is, check out http://www.joelonsoftware.com/articles/fog0000000319.html Philip A: VARCHAR(255). It won't use all 255 characters of storage, just the storage you need. It's 255 and not 256 because then you have space for 255 plus the null-terminator (or size byte). The "N" is for Unicode. Use if you expect non-ASCII characters.
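A quick way to see the character-versus-byte distinction described above is to compare LEN and DATALENGTH in T-SQL:

-- LEN counts characters, DATALENGTH counts bytes.
DECLARE @a varchar(10);
DECLARE @b nvarchar(10);
SET @a = 'hello';
SET @b = N'hello';

SELECT LEN(@a)        AS a_chars,  -- 5
       DATALENGTH(@a) AS a_bytes,  -- 5
       LEN(@b)        AS b_chars,  -- 5
       DATALENGTH(@b) AS b_bytes;  -- 10 (two bytes per character)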
{ "language": "en", "url": "https://stackoverflow.com/questions/54512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Printing data into a preprinted form in C# .Net 3.5 SP1 I need to print out data into a pre-printed A6 form (1/4 the size of a landscape A4). I do not need to print paragraphs of text, just short lines scattered about on the page. All the stuff on MSDN is about printing paragraphs of text. Thanks for any help you can give, Roberto A: You'll have to create a PrintDocument object, handle at least the PrintPage event and apply the appropriate changes to the PrinterSettings property. In your PrintPage event handler, do whatever you need to do with the PrintPageEventArgs.Graphics object, like drawing lines, drawing images, etc. A: When finding the x,y coordinates to use for lining up your new text with the pre-printed gaps, note that the default settings for the graphics object's Draw____() functions are 100 pixels per inch. That might be subject to change based on your printer, but in my (very limited) experience that's always been the case.
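To make the PrintDocument advice concrete, here is a minimal C# sketch; the strings and coordinates are made-up examples (in the default page unit, 100 units is roughly one inch):

using System.Drawing;
using System.Drawing.Printing;

public class FormPrinter
{
    public void Print()
    {
        PrintDocument doc = new PrintDocument();
        doc.PrintPage += new PrintPageEventHandler(OnPrintPage);
        doc.Print();
    }

    private void OnPrintPage(object sender, PrintPageEventArgs e)
    {
        using (Font font = new Font("Arial", 10))
        {
            // Each DrawString call drops one short line of text into a
            // pre-printed box on the A6 form (positions are hypothetical).
            e.Graphics.DrawString("John Smith", font, Brushes.Black, 120, 80);
            e.Graphics.DrawString("2008-09-10", font, Brushes.Black, 300, 80);
            e.Graphics.DrawString("Invoice 42", font, Brushes.Black, 120, 140);
        }
        e.HasMorePages = false;
    }
}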
{ "language": "en", "url": "https://stackoverflow.com/questions/54522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: win32 GUI app that writes usage text to stdout when invoked as "app.exe --help" How do I create a windows application that does the following: * *it's a regular GUI app when invoked with no command line arguments *specifying the optional "--help" command line argument causes the app to write usage text to stdout then terminate *it must be a single executable. No cheating by making a console app exec a 2nd executable. *assume the main application code is written in C/C++ *bonus points if no GUI window is created when "--help" is specified. (i.e., no flicker from a short-lived window) In my experience the standard visual studio template for console app has no GUI capability, and the normal win32 template does not send its stdout to the parent cmd shell. A: I know my answer is coming in late, but I think the preferred technique for the situation here is the ".com" and ".exe" method. This may be considered "cheating" by your definition of two executables, but it requires very little change on the programmers part and can be done one and forgot about. Also this solution does not have the disadvantages of Hugh's solution where you have a console windows displayed for a split second. In windows from the command line, if you run a program and don't specify an extension, the order of precedence in locating the executable will prefer a .com over a .exe. Then you can use tricks to have that ".com" be a proxy for the stdin/stdout/stderr and launch the same-named .exe file. This give the behavior of allowing the program to preform in a command line mode when called form a console (potentially only when certain command line args are detected) while still being able to launch as a GUI application free of a console. There are various articles describing this like "How to make an application as both GUI and Console application?" (see references in link below). I hosted a project called dualsubsystem on google code that updates an old codeguru solution of this technique and provides the source code and working example binaries. I hope that is helpful! A: Microsoft designed console and GUI apps to be mutually exclusive. This bit of short-sightedness means that there is no perfect solution. The most popular approach is to have two executables (eg. cscript / wscript, java / javaw, devenv.com / devenv.exe etc) however you've indicated that you consider this "cheating". You've got two options - to make a "console executable" or a "gui executable", and then use code to try to provide the other behaviour. * *GUI executable: cmd.exe will assume that your program does no console I/O so won't wait for it to terminate before continuing, which in interactive mode (ie not a batch) means displaying the next ("C:\>") prompt and reading from the keyboard. So even if you use AttachConsole your output will be mixed with cmd's output, and the situation gets worse if you try to do input. This is basically a non-starter. * *Console executable: Contrary to belief, there is nothing to stop a console executable from displaying a GUI, but there are two problems. The first is that if you run it from the command line with no arguments (so you want the GUI), cmd will still wait for it to terminate before continuing, so that particular console will be unusable for the duration. This can be overcome by launching a second process of the same executable (do you consider this cheating?), passing the DETACHED_PROCESS flag to CreateProcess() and immediately exiting. 
The new process can then detect that it has no console and display the GUI. Here's C code to illustrate this approach: #include <stdio.h> #include <windows.h> int main(int argc, char *argv[]) { if (GetStdHandle(STD_OUTPUT_HANDLE) == 0) // no console, we must be the child process { MessageBox(0, "Hello GUI world!", "", 0); } else if (argc > 1) // we have command line args { printf("Hello console world!\n"); } else // no command line args but a console - launch child process { DWORD dwCreationFlags = CREATE_DEFAULT_ERROR_MODE | DETACHED_PROCESS; STARTUPINFO startinfo; PROCESS_INFORMATION procinfo; ZeroMemory(&startinfo, sizeof(startinfo)); startinfo.cb = sizeof(startinfo); if (!CreateProcess(NULL, argv[0], NULL, NULL, FALSE, dwCreationFlags, NULL, NULL, &startinfo, &procinfo)) MessageBox(0, "CreateProcess() failed :(", "", 0); } exit(0); } I compiled it with cygwin's gcc - YMMV with MSVC. The second problem is that when run from Explorer, your program will for a split second display a console window. There's no programmatic way around this because the console is created by Windows when the app is launched, before it starts executing. The only thing you can do is, in your installer, make the shortcut to your program with a "show command" of SW_HIDE (ie. 0). This will only affect the console unless you deliberately honour the wShowWindow field of STARTUPINFO in your program, so don't do that. I've tested this by hacking cygwin's "mkshortcut.exe". How you accomplish it in your install program of choice is up to you. The user can still of course run your program by finding the executable in Explorer and double-clicking it, bypassing the console-hiding shortcut and seeing the brief black flash of a console window. There's nothing you can do about it. A: You can use AllocConsole() WinApi function to allocate a console for GUI application. You can also try attaching to a console of a parent process with AttachConsole(), this makes sense if it already has one. The complete code with redirecting stdout and stderr to this console will be like this: if(AttachConsole(ATTACH_PARENT_PROCESS) || AllocConsole()){ freopen("CONOUT$", "w", stdout); freopen("CONOUT$", "w", stderr); } I found this approach in the Pidgin sources (see WinMain() in pidgin/win32/winpidgin.c)
{ "language": "en", "url": "https://stackoverflow.com/questions/54536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Prevent Use of the Back Button (in IE) So the SMEs at my current place of employment want to try and disable the back button for certain pages. We have a page where the user makes some selections and submits them to be processed. In some instances they have to enter a comment on another page. What the users have figured out is that they don't have to enter a comment if they submit the information and go to the page with the comment and then hit the back button to return to the previous page. I know there are several different solutions to this (and many of them are far more elegant then disabling the back button), but this is what I'm left with. Is it possible to prevent someone from going back to the previous page through altering the behavior of the back button. (like a submit -> return false sorta thing). Due to double posting information I can't have it return to the previous page and then move to the current one. I can only have it not direct away from the current page. I Googled it, but I only saw posts saying that it will always return to the previous page. I was hoping that someone has some mad kung foo js skills that can make this possible. I understand that everyone says this is a bad idea, and I agree, but sometimes you just have to do what you're told. A: Nah, you're doomed. Even if you pop the page up in some different browser and hid the back button, there's always the Backspace key. The problem with marketing guys and analyst types is that some of them do not understand the fundamental concept of the web being stateless. They do not understand that the page is totally, totally unaware of the browser using it and absolute control of the browser is totally outside the capability of web pages. The best way to discourage your users to hit the back button is to make sure that your page loses all its data when they press back, e.g., the comment page is the only point where the data can be saved, and if they do press the back button they have to do everything all over again (think along the lines of pragma: nocache). Users will complain, sure, but they are the reason that this godforsaken requirement exists, right? A: Don't do this, just don't. It's bad interface design and forces the user's browser to behave in a way that they don't expect. I would regard any script that successfully stopped my back button from working to be a hack, and I would expect the IE team to release a security-fix for it. The back button is part of their program interface, not your website. In your specific case I think the best bet is to add an unload event to the page that warns the user if they haven't completed the form. The back button would be unaffected and the user would be warned of their action. A: I've seen this before: window.onBack = history.forward(); It is most definitely a dirty hack and, if at all possible, I would attempt to not disable the back button. And the user can probably still get around it quite easily. And depending on caching, there is no telling if the server code will be processed or if the cached page with JavaScript will run first. So, yeah, use at your own risk :) A: I came up with a little hack that disables the back button using JavaScript. 
I checked it on chrome 10, firefox 3.6 and IE9: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" > <title>Untitled Page</title> <script type = "text/javascript" > function changeHashOnLoad() { window.location.href += "#"; setTimeout("changeHashAgain()", "50"); } function changeHashAgain() { window.location.href += "1"; } // If you want to skip the auto-positioning at the top of browser window,you can add the below code: window.location.hash=' '; var storedHash = window.location.hash; window.setInterval(function () { if (window.location.hash != storedHash) { window.location.hash = storedHash; } }, 50); </script> </head> <body onload="changeHashOnLoad(); "> Try to hit the back button! </body> </html> A: Do you have access to the server-side source code? If so, you can put a check on the first page that redirects to the second page if the information has been submitted already (you'll need to use sessions for this, obviously). At a former job, this is how we handled multi-step applications (application as in application for acceptance). A: Could you move the comment to the previous page and make it a required field there? Disabling the back button will not work. A: Because of the security isolation of javascript in the browser, you cannot change what the back button does. Perhaps you could store something in the user's session that indicates that a comment is needed, and have any page in the app that the user tries to load redirect to the comment page? What if the user closes their browser when he/she gets tot he comment page? I know that you have not been given a choice here, but since what they are asking for seems to be impossible... Perhaps you could just not consider the item as completed until the user enters comments. Thus, you would need to keep track of both in-progress items and completed items, and make that distinction in the UI, but this might be the most robust method. Or just put the comment field on the form page? A: What the users have figured out is that they don't have to enter a comment if they submit the information and go to the page with the comment and then hit the back button to return to the previous page. Then they are probably also smart enough to type 'no comment' into the comments field. You can try to force people to add comments, but you will probably just end up with bad unusable software, annoyed users, and still not get comments. This is usually a good time to take a step back and reconsider what you are doing from the users' point of view. A: Disabling the back button seems kind of a "brute force" approach. Another option would be that you could jump out to a modal dialog that doesn't have command buttons, walk users through the workflow, and close the dialog when the process is complete. A: You should secure your application against double submission instead of breaking user interface to hide the bug. A: There simply is no reliable way to do this. You cannot guarantee that 100% of the time you can stop the user from doing this. With that in mind, is it worth going to extremely exotic solutions to disable "most" of the time? That's for you to decide. Good luck. A: AS a simple solution: try this one. Insert an update panel and a button in there and use javascript to hide it and then press it on page load. 
Yes I understand that it will cause your page to post back on load and may not work if javascript is disabled but certainly will help you achieve a half decent response to the back button issue. Andy A: You can prevent them from going back to the previous page. location.replace() replaces the current page's history entry with a new page, so... page1.html: user clicks a link that goes to page2.html page2.html: user clicks a link that calls location.replace('page3.html'); page3.html: user clicks back button and goes to page1.html This may not fit well with doing a POST, but you could post the data to a web service via AJAX, then call location.replace() A: If you are starting a new web app from scratch, or you have enough time to rework your app, you can use JavaScript and AJAX to avoid the browser's history, back, and forward functions. * *Open your app in a new window, either before or immediately after login. *If you wish, use window options to hide the nav bar (with the back and forward buttons). *Use AJAX for all server requests, without ever changing the window's location URL. *Use a simple web API to get data and perform actions, and render the app using JavaScript. *There will be only one URL in the window's history. *The back and forward buttons will do nothing. *The window can be closed automatically on logging out. *No information is leaked to the browser history, which can help with security. This technique answers the question, but it also contradicts best practice in several ways: * *The back and forward buttons should behave as expected. *An app should not open new browser windows. *An app should still function without JavaScript. Please carefully consider your requirements and your users before using this technique. A: I don't see this solution : function removeBack(){ window.location.hash="nbb"; window.location.hash=""; window.onhashchange=function(){window.location.hash="";} }
{ "language": "en", "url": "https://stackoverflow.com/questions/54539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How can I indicate that multiple versions of a dependent assembly are okay? Assemblies A and B are privately deployed and strongly named. Assembly A contains references to Assembly B. There are two versions of Assembly B: B1 and B2. I want to be able to indicate for Assembly A that it may bind to either B1 or B2 -- ideally, by incorporating this information into the assembly itself. What are my options? I'm somewhat familiar with versioning policy and the way it applies to the GAC, but I don't want to be dependent on these assemblies being in the GAC. A: There are several places you can indicate to the .Net Framework that a specific version of a strongly typed library should be preferred over another. These are: * *Publisher Policy file *machine.config file *app.config file All these methods utilise the "<bindingRedirect>" element which can instruct the .Net Framework to bind a version or range of versions of an assembly to a specific version. Here is a short example of the tag in use to bind all versions of an assembly up until version 2.0 to version 2.5: <assemblyBinding> <dependantAssembly> <assemblyIdentity name="foo" publicKeyToken="00000000000" culture="neutral" /> <bindingRedirect oldVersion="0.0.0.0 - 2.0.0.0" newVersion="2.5.0.0" /> </dependantAssembly> </assemblyBinding> There are lots of details so it's best if you read about Redirecting Assembly Versions on MSDN to decide which method is best for your case. A: You can set version policy in your app.config file. Alternatively you can manually load these assemblies with a call to Assembly.LoadFrom() when this is done assembly version is not considered.
{ "language": "en", "url": "https://stackoverflow.com/questions/54546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Call to a member function on a non-object So I'm refactoring my code to implement more OOP. I set up a class to hold page attributes. class PageAtrributes { private $db_connection; private $page_title; public function __construct($db_connection) { $this->db_connection = $db_connection; $this->page_title = ''; } public function get_page_title() { return $this->page_title; } public function set_page_title($page_title) { $this->page_title = $page_title; } } Later on I call the set_page_title() function like so function page_properties($objPortal) { $objPage->set_page_title($myrow['title']); } When I do I receive the error message: Call to a member function set_page_title() on a non-object So what am I missing? A: Either $objPage is not an instance variable OR your are overwriting $objPage with something that is not an instance of class PageAttributes. A: It means that $objPage is not an instance of an object. Can we see the code you used to initialize the variable? As you expect a specific object type, you can also make use of PHPs type-hinting featureDocs to get the error when your logic is violated: function page_properties(PageAtrributes $objPortal) { ... $objPage->set_page_title($myrow['title']); } This function will only accept PageAtrributes for the first parameter. A: It could also mean that when you initialized your object, you may have re-used the object name in another part of your code. Therefore changing it's aspect from an object to a standard variable. IE $game = new game; $game->doGameStuff($gameReturn); foreach($gameArray as $game) { $game['STUFF']; // No longer an object and is now a standard variable pointer for $game. } $game->doGameStuff($gameReturn); // Wont work because $game is declared as a standard variable. You need to be careful when using common variable names and were they are declared in your code. A: There's an easy way to produce this error: $joe = null; $joe->anything(); Will render the error: Fatal error: Call to a member function anything() on a non-object in /Applications/XAMPP/xamppfiles/htdocs/casMail/dao/server.php on line 23 It would be a lot better if PHP would just say, Fatal error: Call from Joe is not defined because (a) joe is null or (b) joe does not define anything() in on line <##>. Usually you have build your class so that $joe is not defined in the constructor or A: I recommend the accepted answer above. If you are in a pinch, however, you could declare the object as a global within the page_properties function. $objPage = new PageAtrributes; function page_properties() { global $objPage; $objPage->set_page_title($myrow['title']); } A: function page_properties($objPortal) { $objPage->set_page_title($myrow['title']); } looks like different names of variables $objPortal vs $objPage A: I realized that I wasn't passing $objPage into page_properties(). It works fine now. A: you can use 'use' in function like bellow example function page_properties($objPortal) use($objPage){ $objPage->set_page_title($myrow['title']); }
{ "language": "en", "url": "https://stackoverflow.com/questions/54566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "80" }
Q: How do I get InputVerifier to work with an editable JComboBox I've got an JComboBox with a custom inputVerifyer set to limit MaxLength when it's set to editable. The verify method never seems to get called. The same verifyer gets invoked on a JTextField fine. What might I be doing wrong? A: I found a workaround. I thought I'd let the next person with this problem know about. Basically. Instead of setting the inputVerifier on the ComboBox you set it to it's "Editor Component". JComboBox combo = new JComboBox(); JTextField tf = (JTextField)(combo.getEditor().getEditorComponent()); tf.setInputVerifier(verifyer); A: Show us a small section of your code. package inputverifier; import javax.swing.*; class Go { public static void main(String[] args) { java.awt.EventQueue.invokeLater(new Runnable() { public void run() { runEDT(); }}); } private static void runEDT() { new JFrame("combo thing") {{ setLayout(new java.awt.GridLayout(2, 1)); add(new JComboBox() {{ setEditable(true); setInputVerifier(new InputVerifier() { @Override public boolean verify(JComponent input) { System.err.println("Hi!"); return true; } }); }}); add(new JTextField()); setDefaultCloseOperation(EXIT_ON_CLOSE); pack(); setVisible(true); }}; } } Looks like it's a problem with JComboBox being a composite component. I'd suggest avoiding such nasty UI solutions.
{ "language": "en", "url": "https://stackoverflow.com/questions/54567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to capture output of "pnputil.exe -e" How do I capture the output of "%windir%/system32/pnputil.exe -e"? (assume windows vista 32-bit) Bonus for technical explanation of why the app normally writes output to the cmd shell, but when stdout and/or stderr are redirected then the app writes nothing to the console or to stdout/stderr? C:\Windows\System32>PnPutil.exe --help Microsoft PnP Utility {...} C:\Windows\System32>pnputil -e > c:\foo.txt C:\Windows\System32>type c:\foo.txt C:\Windows\System32>dir c:\foo.txt Volume in drive C has no label. Volume Serial Number is XXXX-XXXX Directory of c:\ 09/10/2008 12:10 PM 0 foo.txt 1 File(s) 0 bytes A: I think I found the technical answer for why it behaves this way. The MSDN page for WriteConsole says that redirecting standard output to a file causes WriteConsole to fail and that WriteFile should be used instead. The debugger confirms that pnputil.exe does call kernel32!WriteConsoleW and kernel32!WriteConsoleInputW. Hmm, I should have asked this as two separate questions. I'm still looking for an answer for how to scrape the output from this command. The accepted answer will be one that answers this part of the question. A: Doesn't seem like there is an easy way at all. You would have to start hooking the call to WriteConsole and dumping the string buffers. See this post for a similar discussion. Of course, if this is a one off for interactive use then just select all the output from the command window and copy it to the clipboard. (Make sure you cmd window buffer is big enough to store all the output). A: If you know the driver name and have the driver, pnputil.exe -a d:\pnpdriver*.inf This gives list of corresponding oemXX.inf for the drivers you are looking. A: Some applications are written so that it works in piping scenarios well e.g. svn status | find "? " is a command that pipes output of svn status into find "? " so it would filter subversion output down to unknown files (marked with a question mark) in my repos. Imagine if svn status would also output a header that says "Copyright ? 2009" That very specific header line would also show up. Which is not what I expect. So certain tools, like those of Sysinternals' will write any header information only if it is printed directly to the command window, if any kind of redirection is detected, then those header information will not be written as by the reason above. Header information becomes noise when used in piping/automation scenarios. I suppose if you can't use > to output to a file, its because the tool is hardwired not to do so. You'll need an indirect means to capture it. Hope this helps. A: As alluded to in the question, but not clearly stated, "pnputil -e 2> c:\foo.txt" does not have the intended result either. This one directs nothing into the file but it does send the output to the console. A: There's only two output streams. If "> c:\foo.txt" doesn't work, and "2> C:\foo.txt" doesn't work then nothing is being output. You can merge the standard error into the standard output (2>&1) so all output is through standard output: pnputil -e 1>c:\foo.txt 2>&1 If that doesn't output anything to foo.txt then pnputil must be detecting redirection and stopping output. A: You could have tried Expect for Windows to do this kind of things, it would tell the tool that there was a console and hook the WriteConsole calls for you. 
Expect for Windows A: Click on the system menu icon (upper left hand corner->properties->layout) Change the screen buffer size cls pnputil -e ;-P A: So I am looking for the same type of information, and came across this: https://sysadminstricks.com/tricks/windows-tricks/cleaning-up-windows-driver-store-folder.html. While the syntax is wrong, it seems to have done the trick for me (with a little correction): pnputil.exe -e > c:\driveroutput.txt When that command is executed, it does not output to the command line, but it does generate driveroutput.txt to the root of C:. Opening the text file does, in fact, show that I now have an output of the enumerated OEM drivers on my PC.
{ "language": "en", "url": "https://stackoverflow.com/questions/54578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: WCF Configuration without a config file Does anyone know of a good example of how to expose a WCF service programatically without the use of a configuration file? I know the service object model is much richer now with WCF, so I know it's possible. I just have not seen an example of how to do so. Conversely, I would like to see how consuming without a configuration file is done as well. Before anyone asks, I have a very specific need to do this without configuration files. I would normally not recommend such a practice, but as I said, there is a very specific need in this case. A: It is not easy on the server side.. For client side, you can use ChannelFactory A: All WCF configuration can be done programatically. So it's possible to create both servers and clients without a config file. I recommend the book "Programming WCF Services" by Juval Lowy, which contains many examples of programmatic configuration. A: I found the blog post at the link below around this topic very interesting. One idea I like is that of being able to just pass in a binding or behavior or address XML section from the configuration to the appropriate WCF object and let it handle the assigning of the properties - currently you cannot do this. Like others on the web I am having issues around needing my WCF implementation to use a different configuration file than that of my hosting application (which is a .NET 2.0 Windows service). http://salvoz.com/blog/2007/12/09/programmatically-setting-wcf-configuration/ A: It's very easy to do on both the client and the server side. Juval Lowy's book has excellent examples. As to your comment about the configuration files, I would say that the configuration files are a poor man's second to doing it in code. Configuration files are great when you control every client that will connect to your server and make sure they're updated, and that users can't find them and change anything. I find the WCF configuration file model to be limiting, mildly difficult to design, and a maintenance nightmare. All in all, I think it was a very poor decision by MS to make the configuration files the default way of doing things. EDIT: One of the things you can't do with the configuration file is to create services with non-default constructors. This leads to static/global variables and singletons and other types of non-sense in WCF. A: If you are interested in eliminating the usage of the System.ServiceModel section in the web.config for IIS hosting, I have posted an example of how to do that here (http://bejabbers2.blogspot.com/2010/02/wcf-zero-config-in-net-35-part-ii.html). I show how to customize a ServiceHost to create both metadata and wshttpbinding endpoints. I do it in a general purpose way that doesn't require additional coding. For those who aren't immediately upgrading to .NET 4.0 this can be pretty convenient. A: Here, this is complete and working code. I think it will help you a lot. I was searching and never finds a complete code that's why I tried to put complete and working code. Good luck. 
public class ValidatorClass { WSHttpBinding BindingConfig; EndpointIdentity DNSIdentity; Uri URI; ContractDescription ConfDescription; public ValidatorClass() { // In constructor initializing configuration elements by code BindingConfig = ValidatorClass.ConfigBinding(); DNSIdentity = ValidatorClass.ConfigEndPoint(); URI = ValidatorClass.ConfigURI(); ConfDescription = ValidatorClass.ConfigContractDescription(); } public void MainOperation() { var Address = new EndpointAddress(URI, DNSIdentity); var Client = new EvalServiceClient(BindingConfig, Address); Client.ClientCredentials.ServiceCertificate.Authentication.CertificateValidationMode = X509CertificateValidationMode.PeerTrust; Client.Endpoint.Contract = ConfDescription; Client.ClientCredentials.UserName.UserName = "companyUserName"; Client.ClientCredentials.UserName.Password = "companyPassword"; Client.Open(); string CatchData = Client.CallServiceMethod(); Client.Close(); } public static WSHttpBinding ConfigBinding() { // ----- Programmatic definition of the SomeService Binding ----- var wsHttpBinding = new WSHttpBinding(); wsHttpBinding.Name = "BindingName"; wsHttpBinding.CloseTimeout = TimeSpan.FromMinutes(1); wsHttpBinding.OpenTimeout = TimeSpan.FromMinutes(1); wsHttpBinding.ReceiveTimeout = TimeSpan.FromMinutes(10); wsHttpBinding.SendTimeout = TimeSpan.FromMinutes(1); wsHttpBinding.BypassProxyOnLocal = false; wsHttpBinding.TransactionFlow = false; wsHttpBinding.HostNameComparisonMode = HostNameComparisonMode.StrongWildcard; wsHttpBinding.MaxBufferPoolSize = 524288; wsHttpBinding.MaxReceivedMessageSize = 65536; wsHttpBinding.MessageEncoding = WSMessageEncoding.Text; wsHttpBinding.TextEncoding = Encoding.UTF8; wsHttpBinding.UseDefaultWebProxy = true; wsHttpBinding.AllowCookies = false; wsHttpBinding.ReaderQuotas.MaxDepth = 32; wsHttpBinding.ReaderQuotas.MaxArrayLength = 16384; wsHttpBinding.ReaderQuotas.MaxStringContentLength = 8192; wsHttpBinding.ReaderQuotas.MaxBytesPerRead = 4096; wsHttpBinding.ReaderQuotas.MaxNameTableCharCount = 16384; wsHttpBinding.ReliableSession.Ordered = true; wsHttpBinding.ReliableSession.InactivityTimeout = TimeSpan.FromMinutes(10); wsHttpBinding.ReliableSession.Enabled = false; wsHttpBinding.Security.Mode = SecurityMode.Message; wsHttpBinding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Certificate; wsHttpBinding.Security.Transport.ProxyCredentialType = HttpProxyCredentialType.None; wsHttpBinding.Security.Transport.Realm = ""; wsHttpBinding.Security.Message.NegotiateServiceCredential = true; wsHttpBinding.Security.Message.ClientCredentialType = MessageCredentialType.UserName; wsHttpBinding.Security.Message.AlgorithmSuite = System.ServiceModel.Security.SecurityAlgorithmSuite.Basic256; // ----------- End Programmatic definition of the SomeServiceServiceBinding -------------- return wsHttpBinding; } public static Uri ConfigURI() { // ----- Programmatic definition of the Service URI configuration ----- Uri URI = new Uri("http://localhost:8732/Design_Time_Addresses/TestWcfServiceLibrary/EvalService/"); return URI; } public static EndpointIdentity ConfigEndPoint() { // ----- Programmatic definition of the Service EndPointIdentitiy configuration ----- EndpointIdentity DNSIdentity = EndpointIdentity.CreateDnsIdentity("tempCert"); return DNSIdentity; } public static ContractDescription ConfigContractDescription() { // ----- Programmatic definition of the Service ContractDescription Binding ----- ContractDescription Contract = ContractDescription.GetContract(typeof(IEvalService), 
typeof(EvalServiceClient)); return Contract; } } A: Consuming a web service without a config file is very simple, as I've discovered. You simply need to create a binding object and address object and pass them either to the constructor of the client proxy or to a generic ChannelFactory instance. You can look at the default app.config to see what settings to use, then create a static helper method somewhere that instantiates your proxy: internal static MyServiceSoapClient CreateWebServiceInstance() { BasicHttpBinding binding = new BasicHttpBinding(); // I think most (or all) of these are defaults--I just copied them from app.config: binding.SendTimeout = TimeSpan.FromMinutes( 1 ); binding.OpenTimeout = TimeSpan.FromMinutes( 1 ); binding.CloseTimeout = TimeSpan.FromMinutes( 1 ); binding.ReceiveTimeout = TimeSpan.FromMinutes( 10 ); binding.AllowCookies = false; binding.BypassProxyOnLocal = false; binding.HostNameComparisonMode = HostNameComparisonMode.StrongWildcard; binding.MessageEncoding = WSMessageEncoding.Text; binding.TextEncoding = System.Text.Encoding.UTF8; binding.TransferMode = TransferMode.Buffered; binding.UseDefaultWebProxy = true; return new MyServiceSoapClient( binding, new EndpointAddress( "http://www.mysite.com/MyService.asmx" ) ); }
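An earlier answer mentions ChannelFactory for the client side without showing it. For completeness, here is a minimal, illustrative sketch of that approach, reusing the IEvalService contract, address, and CallServiceMethod operation from the sample above (everything else is an assumption, not the only way to wire it up):
using System.ServiceModel;

// Client-side sketch using the generic ChannelFactory<T> instead of a generated proxy;
// no app.config entries are required.
var binding = new WSHttpBinding();
var address = new EndpointAddress(
    "http://localhost:8732/Design_Time_Addresses/TestWcfServiceLibrary/EvalService/");

var factory = new ChannelFactory<IEvalService>(binding, address);
IEvalService proxy = factory.CreateChannel();
string result = proxy.CallServiceMethod();   // operation name borrowed from the sample above
((IClientChannel)proxy).Close();
factory.Close();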
{ "language": "en", "url": "https://stackoverflow.com/questions/54579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "90" }
Q: When should you use a class vs a struct in C++? In what scenarios is it better to use a struct vs a class in C++? A: As others have pointed out * *both are equivalent apart from default visibility *there may be reasons to be forced to use the one or the other for whatever reason There's a clear recommendation about when to use which from Stroustrup/Sutter: Use class if the class has an invariant; use struct if the data members can vary independently However, keep in mind that it is not wise to forward declare sth. as a class (class X;) and define it as struct (struct X { ... }). It may work on some linkers (e.g., g++) and may fail on others (e.g., MSVC), so you will find yourself in developer hell. A: An advantage of struct over class is that it save one line of code, if adhering to "first public members, then private". In this light, I find the keyword class useless. Here is another reason for using only struct and never class. Some code style guidelines for C++ suggest using small letters for function macros, the rationale being that when the macro is converted to an inline function, the name shouldn't need to be changed. Same here. You have your nice C-style struct and one day, you find out you need to add a constructor, or some convenience method. Do you change it to a class? Everywhere? Distinguishing between structs and classes is just too much hassle getting into the way of doing what we should be doing - programming. Like so many of C++'s problems it arises out of the strong desire for backwards compatibility. A: Both struct and class are the same under the hood though with different defaults as to visibility, struct default is public and class default is private. You can change either one to be the other with the appropriate use of private and public. They both allow inheritance, methods, constructors, destructors, and all the rest of the goodies of an object oriented language. However one huge difference between the two is that struct as a keyword is supported in C whereas class is not. This means that one can use a struct in an include file that can be #include into either C++ or C so long as the struct is a plain C style struct and everything else in the include file is compatible with C, i.e. no C++ specific keywords such as private, public, no methods, no inheritance, etc. etc. etc. A C style struct can be used with other interfaces which support using C style struct to carry data back and forth over the interface. A C style struct is a kind of template (not a C++ template but rather a pattern or stencil) that describes the layout of a memory area. Over the years interfaces usable from C and with C plug-ins (here's looking at you Java and Python and Visual Basic) have been created some of which work with C style struct. A: The only time I use a struct instead of a class is when declaring a functor right before using it in a function call and want to minimize syntax for the sake of clarity. e.g.: struct Compare { bool operator() { ... } }; std::sort(collection.begin(), collection.end(), Compare()); A: They are pretty much the same thing. Thanks to the magic of C++, a struct can hold functions, use inheritance, created using "new" and so on just like a class The only functional difference is that a class begins with private access rights, while a struct begins with public. This is the maintain backwards compatibility with C. In practice, I've always used structs as data holders and classes as objects. 
A: From the C++ FAQ Lite: The members and base classes of a struct are public by default, while in class, they default to private. Note: you should make your base classes explicitly public, private, or protected, rather than relying on the defaults. struct and class are otherwise functionally equivalent. OK, enough of that squeaky clean techno talk. Emotionally, most developers make a strong distinction between a class and a struct. A struct simply feels like an open pile of bits with very little in the way of encapsulation or functionality. A class feels like a living and responsible member of society with intelligent services, a strong encapsulation barrier, and a well defined interface. Since that's the connotation most people already have, you should probably use the struct keyword if you have a class that has very few methods and has public data (such things do exist in well designed systems!), but otherwise you should probably use the class keyword. A: Class. Class members are private by default. class test_one { int main_one(); }; Is equivalent to class test_one { private: int main_one(); }; So if you try int two = one.main_one(); We will get an error: main_one is private, so it's not accessible. We can solve it by declaring the member public, i.e. class test_one { public: int main_one(); }; Struct. A struct is a class where members are public by default. struct test_one { int main_one; }; This means main_one is public, i.e. class test_one { public: int main_one; }; I use structs for data structures where the members can take any value, it's easier that way. A: After years of programming in C++, my main language, I have come to the firm conclusion that this is another one of C++'s dumb features. There is no real difference between the two, and no reason why I should spend extra time deciding whether I should define my entity as a struct or a class. To answer this question, feel free to always define your entity as a struct. Members will be public by default, which is the norm. But even more importantly, inheritance will be public by default. Protected inheritance, and even worse, private inheritance, are the exceptions. I have never had a case where private inheritance was the right thing to do. Yes, I tried to invent problems to use private inheritance, but it didn't work. And Java, the role model of object-oriented programming, defaults to public inheritance if you don't use the accessor keywords. And by the way, Java doesn't allow accessor keywords on inherited classes; they can only be publicly inherited. So you can see, the C++ team really fell down here. Another frustrating thing about this is that if you define it as a class and declare it as a struct you get a compilation warning. As though this is something that impacted the performance or accuracy of your program. One answer also noted that MSVC may propagate a compiler error instead. Those persons that use classes when it is raining and structs when it is shining are doing so based on what they have been taught. It's not something they discovered to be true. Java does not have a pair of names for classes, and only has the class keyword. If you want a data structure, simply make all your members public and don't add functions. This works in Java and I don't see any problem. What's the problem? You need 4 or 5 characters of BOM code to determine how to interpret the context of a class entity.
A: they're the same thing with different defaults (private by default for class, and public by default for struct), so in theory they're totally interchangeable. so, if I just want to package some info to move around, I use a struct, even if i put a few methods there (but not many). If it's a mostly-opaque thing, where the main use would be via methods, and not directly to the data members, i use a full class. A: Structs by default have public access and classes by default have private access. Personally I use structs for Data Transfer Objects or as Value Objects. When used as such I declare all members as const to prevent modification by other code. A: Just to address this from a C++20 Standardese perspective (working from N4860)... A class is a type. The keywords "class" and "struct" (and "union") are - in the C++ grammar - class-keys, and the only functional significance of the choice of class or struct is: The class-key determines whether ... access is public or private by default (11.9). Data member default accessibility That the class keyword results in private-by-default members, and `struct keyword results in public-by-default members, is documented by the examples in 11.9.1: class X { int a; // X::a is private by default: class used ...vs... struct S { int a; // S::a is public by default: struct used Base class default accessibility 1.9 also says: In the absence of an access-specifier for a base class, public is assumed when the derived class is defined with the class-key struct and private is assumed when the class is defined with the class-key class. Circumstances where consistent use of struct or class is required... There's a requirement: In a redeclaration, partial specialization, explicit specialization or explicit instantiation of a class template, the class-key shall agree in kind with the original class template declaration (9.2.8.3). ...in any elaborated-type-specifier, the enum keyword shall be used to refer to an enumeration (9.7.1), the union class-key shall be used to refer to a union (11.5), and either the class or struct class-key shall be used to refer to a non-union class (11.1). The following example (of when consistency is not required) is provided: struct S { } s; class S* p = &s; // OK Still, some compilers may warn about this. Interestingly, while the types you create with struct, class and union are all termed "classes", we have... A standard-layout struct is a standard layout class defined with the class-key struct or the class-key class. ...so in Standardese, when there's talk of a standard-layout struct it's using "struct" to imply "not a union"s. I'm curious if there are similar use of "struct" in other terminology, but it's too big a job to do an exhaustive search of the Standard. Comments about that welcome. A: As everyone else notes there are really only two actual language differences: * *struct defaults to public access and class defaults to private access. *When inheriting, struct defaults to public inheritance and class defaults to private inheritance. (Ironically, as with so many things in C++, the default is backwards: public inheritance is by far the more common choice, but people rarely declare structs just to save on typing the "public" keyword. But the real difference in practice is between a class/struct that declares a constructor/destructor and one that doesn't. There are certain guarantees to a "plain-old-data" POD type, that no longer apply once you take over the class's construction. 
To keep this distinction clear, many people deliberately only use structs for POD types, and, if they are going to add any methods at all, use classes. The difference between the two fragments below is otherwise meaningless: class X { public: // ... }; struct X { // ... }; (Incidentally, here's a thread with some good explanations about what "POD type" actually means: What are POD types in C++?) A: You can use "struct" in C++ if you are writing a library whose internals are C++ but the API can be called by either C or C++ code. You simply make a single header that contains structs and global API functions that you expose to both C and C++ code as this: // C access Header to a C++ library #ifdef __cpp extern "C" { #endif // Put your C struct's here struct foo { ... }; // NOTE: the typedef is used because C does not automatically generate // a typedef with the same name as a struct like C++. typedef struct foo foo; // Put your C API functions here void bar(foo *fun); #ifdef __cpp } #endif Then you can write a function bar() in a C++ file using C++ code and make it callable from C and the two worlds can share data through the declared struct's. There are other caveats of course when mixing C and C++ but this is a simplified example. A: There are lots of misconceptions in the existing answers. Both class and struct declare a class. Yes, you may have to rearrange your access modifying keywords inside the class definition, depending on which keyword you used to declare the class. But, beyond syntax, the only reason to choose one over the other is convention/style/preference. Some people like to stick with the struct keyword for classes without member functions, because the resulting definition "looks like" a simple structure from C. Similarly, some people like to use the class keyword for classes with member functions and private data, because it says "class" on it and therefore looks like examples from their favourite book on object-oriented programming. The reality is that this completely up to you and your team, and it'll make literally no difference whatsoever to your program. The following two classes are absolutely equivalent in every way except their name: struct Foo { int x; }; class Bar { public: int x; }; You can even switch keywords when redeclaring: class Foo; struct Bar; (although this breaks Visual Studio builds due to non-conformance, so that compiler will emit a warning when you do this.) and the following expressions both evaluate to true: std::is_class<Foo>::value std::is_class<Bar>::value Do note, though, that you can't switch the keywords when redefining; this is only because (per the one-definition rule) duplicate class definitions across translation units must "consist of the same sequence of tokens". This means you can't even exchange const int member; with int const member;, and has nothing to do with the semantics of class or struct. A: One place where a struct has been helpful for me is when I have a system that's receiving fixed format messages (over say, a serial port) from another system. You can cast the stream of bytes into a struct that defines your fields, and then easily access the fields. 
typedef struct { int messageId; int messageCounter; int messageData; } tMessageType; void processMessage(unsigned char *rawMessage) { tMessageType *messageFields = (tMessageType *)rawMessage; printf("MessageId is %d\n", messageFields->messageId); } Obviously, this is the same thing you would do in C, but I find that the overhead of having to decode the message into a class is usually not worth it. A: As every one says, the only real difference is the default access. But I particularly use struct when I don't want any sort of encapsulation with a simple data class, even if I implement some helper methods. For instance, when I need something like this: struct myvec { int x; int y; int z; int length() {return x+y+z;} }; A: To answer my own question (shamelessly), As already mentioned, access privileges are the only difference between them in C++. I tend to use a struct for data-storage only. I'll allow it to get a few helper functions if it makes working with the data easier. However as soon as the data requires flow control (i.e. getters/setters that maintain or protect an internal state) or starts acquring any major functionality (basically more object-like), it will get 'upgraded' to a class to better communicate intent. A: For C++, there really isn't much of a difference between structs and classes. The main functional difference is that members of a struct are public by default, while they are private by default in classes. Otherwise, as far as the language is concerned, they are equivalent. That said, I tend to use structs in C++ like I do in C#, similar to what Brian has said. Structs are simple data containers, while classes are used for objects that need to act on the data in addition to just holding on to it. A: Structs (PODs, more generally) are handy when you're providing a C-compatible interface with a C++ implementation, since they're portable across language borders and linker formats. If that's not a concern to you, then I suppose the use of the "struct" instead of "class" is a good communicator of intent (as @ZeroSignal said above). Structs also have more predictable copying semantics, so they're useful for data you intend to write to external media or send across the wire. Structs are also handy for various metaprogramming tasks, like traits templates that just expose a bunch of dependent typedefs: template <typename T> struct type_traits { typedef T type; typedef T::iterator_type iterator_type; ... }; ...But that's really just taking advantage of struct's default protection level being public... A: The differences between a class and a struct in C++ are: * *struct members and base classes/structs are public by default. *class members and base classes/structs are private by default. Both classes and structs can have a mixture of public, protected and private members, can use inheritance, and can have member functions. I would recommend you: * *use struct for plain-old-data structures without any class-like features; *use class when you make use of features such as private or protected members, non-default constructors and operators, etc. A: Technically both are the same in C++ - for instance it's possible for a struct to have overloaded operators etc. However : I use structs when I wish to pass information of multiple types simultaneously I use classes when the I'm dealing with a "functional" object. Hope it helps. 
#include <string> #include <map> using namespace std; struct student { int age; string name; map<string, int> grades; }; class ClassRoom { typedef map<string, student> student_map; public : student getStudentByName(string name) const { student_map::const_iterator m_it = students.find(name); return m_it->second; } private : student_map students; }; For instance, I'm returning a struct student in the get...() methods over here - enjoy. A: When would you choose to use struct and when to use class in C++? I use struct when I define functors and POD. Otherwise I use class. // '()' is public by default! struct mycompare : public std::binary_function<int, int, bool> { bool operator()(int first, int second) { return first < second; } }; class mycompare : public std::binary_function<int, int, bool> { public: bool operator()(int first, int second) { return first < second; } }; A: I use structs when I need to create a POD type or a functor. A: All class members are private by default and all struct members are public by default. Class has default private bases and struct has default public bases. A struct in C cannot have member functions, whereas in C++ member functions can be added to a struct. Other than these differences, I don't find anything surprising about them. A: I use struct only when I need to hold some data without any member functions associated with it (to operate on the member data) and to access the data variables directly. E.g.: reading/writing data from files and socket streams, or passing function arguments in a structure where the function arguments are too many and the function syntax would look too lengthy. Technically there is no big difference between class and structure except default accessibility. Moreover, it depends on your programming style how you use them. A: I thought that structs were intended as data structures (like a multi-data-type array of information) and classes were intended for code packaging (like collections of subroutines & functions).. :( A: I never use "struct" in C++. I can't ever imagine a scenario where you would use a struct when you want private members, unless you're willfully trying to be confusing. It seems that using structs is more of a syntactic indication of how the data will be used, but I'd rather just make a class and try to make that explicit in the name of the class, or through comments. E.g. class PublicInputData { //data members };
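Several answers above note that the defaults differ both for member access and for base-class access, but only the member-access half is shown in code. A small illustrative sketch of the inheritance half (type names are made up):
struct Base { int value; };

// With the struct keyword the base defaults to public...
struct PublicByDefault : Base { };    // same as: struct PublicByDefault : public Base { }

// ...with the class keyword the base defaults to private.
class PrivateByDefault : Base { };    // same as: class PrivateByDefault : private Base { }

int main() {
    PublicByDefault a;
    a.value = 1;              // fine: Base is a public base here
    // PrivateByDefault b;
    // b.value = 1;           // error: Base is a private base, value is inaccessible
    return 0;
}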
{ "language": "en", "url": "https://stackoverflow.com/questions/54585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1238" }
Q: Solid Config for webdev in emacs under linux AND windows? I have a windows laptop (thinkpad) and somewhat recently rediscovered emacs and the benefit that all those wacky shortcuts can be when the arrow keys are located somewhere near you right armpit. I was discouraged after php-mode, css-mode, etc, under mmm-mode was inconsistent, buggy, and refused to properly interpret some of my files. (In all fairness, I'm most likely doin' it wrong) So I eventually found the nxhtml package which worked pretty well. However, nxhtml causes weird bugs and actually crashes on certain files (certain combinations of nested modes I supposed) under linux! (using Ubuntu 7.10 and Kubuntu 8.04) I'd like to be able to work on the laptop as well as the home linux pc without having to deal with inconsistent implementations of something that shouldn't be this hard. I've googled and looked around and there's a good chance I'm the only human on the planet having these problems... Anyone got some advice? (in lieu of an emacs solutions, a good enough cross-platform lightweight text editor with the dev features would also work I suppose...) A: Personally, I like mumamo-mode. I'm not sure if you're including that in your problem description, since it does rely on (and is usually downloaded with) nxhtml-mode. So I don't know if you're using mumamo or just some aspect of nxhtml that lets you use multiple modes. If you're not using mumamo-mode, then I'd recommend trying it. It won't fix your nxhtml issues, but it is a pretty simple way to do editing of multi-mode files (works great for me, for HTML, CSS, JS, PHP, etc.) A: Although I use emacs when I have to (ie. when I'm at the command line), I use Eclipse for all my real development work. If you get the Web Standards Toolkit plug-in for it, it can do syntax coloring, tag auto-completion, and other fun stuff. Alternatively, if Eclipse is to "heavy" for you, jEdit is another excellent program for doing web development (it has most of it's web dev support built in, but you can also get some additional plug-ins for features like HTML Tidy). Both programs are open source and Java-based, which means they're both free and run on (virtually) any platform. A: You could try mmm-mode and multi-mode. I haven't tried them: I'm happy with nxhtml for now. What sort of problems did you encounter? A: Five years after the OP, let me recommend Emacs web-mode. It has excellent support for combined web documents (html+php+css+js+asp+jsp...). Snippets. Syntax highlighting. Auto-completion. css-colorization. Automatic working indentation. Auto-close tags. web-mode has completely replaced php-mode/html etc for my daily development. Easy installation through MELPA. There is a Github page for reporting issues, which the developer has been very quick to fix.
{ "language": "en", "url": "https://stackoverflow.com/questions/54586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Fast Disk Cloning Is there a way to have Linux read ahead when cloning a disk? I use the program named "dd" to clone disks. The last time I did this it seemed as though the OS was reading then writing but never at the same time. Ideally, the destination disk would be constantly writing without waiting that's of course if the source disk can keep up. UPDATE: I normally choose a large block size when cloning (ex. 16M or 32MB). A: You might try increasing the block size using the bs argument; by default, I believe dd uses a block size equal to the disk's preferred block size, which will mean many more reads and writes to copy an entire disk. Linux's dd supports human-readable suffixes: dd if=/dev/sda of=/dev/sdb bs=1M A: The fastest for me: dd if=/dev/sda bs=1M iflag=direct | dd of=/dev/sdb bs=1M oflag=direct reaches ~100MiB/s, whereas other options (single process, no direct, default 512b block size, ...) don't even reach 30MiB/s... To watch the progress, run in another console: watch -n 60 killall -USR1 dd A: if the two disks use different channel (e.g., SATA) you can use high performance tool like fastDD. The authors claim: "In this work, we reviewed the problem of reliably and efficiently copying data, recalling all the hardware and software mechanisms which intervene and interfer in the copying process. Our consideration have been coded in fastdd, a C++ program able to copy data very efficiently, as we show in our test." Moreover the tool keeps a syntax very similar to the old dd. http://www.dei.unipd.it/~zagonico/fastdd/ https://github.com/zagonico86/fastdd A: Commodore Jaeger is right about: dd if=/dev/sda of=/dev/sdb bs=1M Also, adjusting "readahead" on the drives usually improves performance. The default may be something like 256, and optimal 1024. Each setup is different, so you would have to run benchmarks to find the best value. # blockdev --getra /dev/sda 256 # blockdev --setra 1024 /dev/sda # blockdev --getra /dev/sda 1024 # blockdev --help Usage: blockdev -V blockdev --report [devices] blockdev [-v|-q] commands devices Available commands: --getsz (get size in 512-byte sectors) --setro (set read-only) --setrw (set read-write) --getro (get read-only) --getss (get sectorsize) --getbsz (get blocksize) --setbsz BLOCKSIZE (set blocksize) --getsize (get 32-bit sector count) --getsize64 (get size in bytes) --setra READAHEAD (set readahead) --getra (get readahead) --flushbufs (flush buffers) --rereadpt (reread partition table) --rmpart PARTNO (disable partition) --rmparts (disable all partitions) # A: Maybe you can use two processes dd if=indevfile | dd of=outdevfile I'll assume you can set the other dd options as it suits you. This has some overhead but should allow asynchrony between reading one disk and writing the other. A: Are you sure it isn't doing that at the same time? I would expect the disk caches to make sure it that happens. If not, non-blocking or even asynchronous reads/writes may help, A: About your update: How big are the caches of your HDs? (specially the writing one). It may be that that is too much and you may need to reduce it to prevent unnecessary blocking.
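As a side note on the progress-watching tip above: recent GNU coreutils versions of dd (8.24 and later, if I remember correctly) can report progress themselves, which avoids the killall -USR1 loop. A minimal form of the same copy would be:
# Same clone as above, but dd prints its own throughput/progress line:
dd if=/dev/sda of=/dev/sdb bs=1M status=progress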
{ "language": "en", "url": "https://stackoverflow.com/questions/54612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I get an XML file as XML (and not a string) with Ajax in Prototype.js? This code is from Prototype.js. I've looked at probably 20 different tutorials, and I can't figure out why this is not working. The response I get is null. new Ajax.Request("/path/to/xml/file.xml", { method: "get", contentType: "application/xml", onSuccess: function(transport) { alert(transport.responseXML); } }); If I change the responseXML to responseText, then it alerts the XML file to me as a string. This is not a PHP page serving up XML, but an actual XML file, so I know it is not the response headers. A: If transport.responseXML is null but you have a value for transport.responseText then I believe it's because it's not a valid XML file. Edit: I just noticed that in our code here, whenever we request an XML file we set the content type to 'text/xml'. I have no idea if that makes a difference or not. A: Just want to share my afternoon working on the issue with a NULL result for responseXML responses. My results were exactly as described in the question: responseText was filled with the XML file, responseXML was NULL. As I was totally sure my file was in valid XML format, the error had to be somewhere different. As mentioned in the Prototype v1.7 documentation, I set the content type to "application/xml". The response sent was constantly "text/html", no matter what. To make it short, the problem I ran into was that my XML file had the ending ".gpx", as that is a de facto standard for GPS coordinates. The mime-types collection of my local XAMPP Apache installation only foresees the endings "xml" and "xsl". After adding "gpx" and restarting the server, the program ran smoothly as it's supposed to. In my case, there are three solutions: 1) edit the "mime.types" file of Apache. Using an XAMPP installation, you might find it under "C:\xampp\apache\conf\mime.types". Search for the "application/xml" record and change it as follows: application/xml xml xsl gpx Don't forget to restart the server! 2) add the mime type in a .htaccess file of the appropriate folder. Open or create a .htaccess file and add the following line: AddType application/xml xml xsl gpx 3) during the upload process, change the file type to "xml" instead of whatever you have. Hope I saved some time for one of you guys.
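If the server's Content-Type cannot be corrected as described above, another workaround is to parse responseText by hand. A rough sketch, assuming the browser provides DOMParser (the path is the one from the question):
new Ajax.Request('/path/to/xml/file.xml', {
  method: 'get',
  onSuccess: function(transport) {
    var doc = transport.responseXML;
    if (!doc && window.DOMParser) {
      // Fall back to parsing the raw text when the response was not served as XML
      doc = new DOMParser().parseFromString(transport.responseText, 'application/xml');
    }
    alert(doc);
  }
});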
{ "language": "en", "url": "https://stackoverflow.com/questions/54626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Is there a ClientScriptManager.RegisterClientScriptInclude equivalent for CSS The ClientScriptManager.RegisterClientScriptInclude method allows you to register a JavaScript reference with the Page object (checking for duplicates). Is there an equivalent of this method for CSS references? Similar questions apply for ClientScriptManager.RegisterClientScriptBlock and ClientScriptManager.RegisterClientScriptResource A: You can add header links to CSS files in ASP.Net codebehind classes like this: HtmlLink link = new HtmlLink(); link.Href = "Cases/EditStyles.css"; link.Attributes.Add("type", "text/css"); link.Attributes.Add("rel", "stylesheet"); this.Header.Controls.Add(link); You can iterate through the header controls beforehand to see if it is already there. The example shown is from a Page_Load in one of my projects and is inside a conditional expression that only adds the EditStyles.css if the page is supposed to be in "Edit" mode. For ClientScriptManager.RegisterClientScriptBlock and ClientScriptManager.RegisterClientScriptResource, they have equivalent functions for checking if they've already been registered (e.g., IsClientScriptrResourceRegistered). A: Short answer: no. You could certainly roll your own functions (as CMPalmer suggests) to take CSS embedded resources (as Gulzar suggests) and embed them on the page. As a best-practice matter, though, I'm not sure why you would want to do this. If you're making a reusable control that has a lot of CSS styling, my advice would be to just hard-code the class names into the standards-compliant output of your control, and ship the control accompanied by a suggested stylesheet. This gives your users/customers the option of overriding your suggested styles to fit their needs, and in general allows them to manage their CSS setup as they see fit. Separating style from markup is a Good Thing - you're already headed down the right path in avoiding the various built-in ASP.NET style attributes, but you should take it all the way and keep the CSS out of your .dll. A: I have used css files as Embedded resources. A: Just check for the existance of a registered script, and if you find that it's not there, then you will known this is the first time your control is being created. At which point you can just drop a literal control into your page that points to the css file you want. Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load With Page.ClientScript If Not .IsClientScriptIncludeRegistered("JQuery") Then .RegisterClientScriptInclude("JQuery", "Scripts/jquery-1.4.2.min.js") Dim l As New Literal() l.Text = "<link href='Uploadify/uploadify.css' rel='stylesheet' type='text/css' />" sender.controls.add(l) End If End With End Sub Hope this helps someone. A: One more thought: You might want to consider "namespacing" your class names to avoid collisions with common class names that your consumers might already be using. E.g. <div class="SmillerControls_Toolbar"> <a class="SmillerControls_Button" ...>...</a> ... </div> or you could wrap the whole thing in a single "namespace" class and then write your CSS to that: <div class="SmillerControls"> <div class="Toolbar"> <a class="Button" ...>...</a> </div> </div> your CSS would be like div.SmillerControls div.Toolbar { ... } div.SmillerControls div.Toolbar a.Button { ... } A: What I do is use an <asp:Literal id="cssliteral" runat="server" /> in head and then a StringBuilder on PageLoad that contains the dynamic css script. 
StringBuilder str = new StringBuilder(); str.Append("<style type=\"text/css\">"); str.Append(".myclass { background-color: #" + mycolor + " }"); str.Append("</style>"); cssLiteral.Text = str.ToString();
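Combining the HtmlLink answer with the earlier suggestion to scan the header for duplicates first, a rough "register once" helper could look like the sketch below; the method name is made up and not part of the framework, and it would live in some static helper class of your own:
using System.Web.UI;
using System.Web.UI.HtmlControls;

public static void RegisterCssInclude(Page page, string href)
{
    // Skip registration if an equivalent link is already in the header
    foreach (Control control in page.Header.Controls)
    {
        HtmlLink existing = control as HtmlLink;
        if (existing != null && existing.Href == href)
        {
            return;
        }
    }

    HtmlLink link = new HtmlLink();
    link.Href = href;
    link.Attributes.Add("type", "text/css");
    link.Attributes.Add("rel", "stylesheet");
    page.Header.Controls.Add(link);
}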
{ "language": "en", "url": "https://stackoverflow.com/questions/54658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Apache Download: Make sure that page was viewed before download The job at hand: I want to make sure that my website's users view a page before they start a download. If they have not looked at the page but try to hotlink to the files directly they should go to the webpage before the download is allowed. Any suggestions that are better than my idea to send out a cookie and - before the download starts - check if the cookie exists (via .htaccess)? The webpage and the download files will be located on different servers. Environment: * *Apache 2 on all machines *PHP 5 on all machines *MySQL 5 available on the "webpage" server (no access from the download servers) Nathan asked what the problem is that I try to solve, and in fact it is that I want to prevent hotlinks from - for example - forums. If people download from our server, using our bandwidth, I want to show them an page with an ad before the download starts. It doesn't need to be totally secure, but we need to make some money to finance the servers, right? :) A: Instead of allowing hotlinking to the files, store them outside of a web accessible route. Instead make download requests go to a php script that initiates the download. You can put checks in place to ensure that the requesting page was the one you wanted read. A: An Apache mod_rewrite RewriteRule can do this. RewriteEngine on RewriteCond %{HTTP_REFERER} !^http://www.example.com/page.html$ RewriteRule file.exe http://www.example.com/page.html [R=301,L] Basically, if a request for file.exe didn't come with page.html as the referrer, it'll 301 redirect the user to page.html. A: You could use distributed server side sessions instead of cookies, but that is probably more trouble than it's worth. You could also forbid access to requests without a referrer or with a wrong referrer. But that's even more fakable than a cookie. It'd depend on how much you care. A: The solution here depends on the problem you are trying to solve. If you are just trying make sure a direct link doesn't get posted in forums and whatnot, then just checking the referrer with .htaccess should be enough. Of course the referrer can be faked easy enough, so if the risk of someone doing that is a problem, then you'll need to do something else. A cookie should do the trick if you need something a little more secure. We cannot just use php sessions because the file server and the webserver are on different boxes. But we could create a cookie based on a hash of the time, and some secret value. cookievalue = sha1('secretvalue'.date('z-H')); When the user requests the actual file, the fileserver generates this cookie again and makes sure it matches the users. This means, even if the user forges the cookie, it will be invalid after an hour, so who cares, and they can't generate their own new one, because they do not know the secret value. A: I was about to suggest the .htaccess referral trick, but it isn't a very secure method. It's easy to make a PHP script, with a custom http-referral added. If you enter the download page there, it will think you come from that page. Is this a relevant problem? Can you tell something about the context of your download page? A: I'm going to use mod_auth_token to make sure that the user has a "recent link". With mod_auth_token you can create expiring links, i.e. if someone decides to take an existing link and post it on another website, this link will expire after a specified time. 
That will result in a 403 FORBIDDEN which I can catch and redirect the user to the HTML page I want him on.
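To make the hashed, time-limited cookie idea above concrete, here is a rough PHP sketch. The secret and redirect URL are placeholders, and it assumes the page server and the file server can share a cookie domain; both sides recompute the same token:
<?php
$secret = 'secretvalue';               // same value configured on both servers
$token  = sha1($secret . date('z-H')); // changes every hour

// On the ad page, before showing the download link:
setcookie('dl_token', $token, 0, '/');

// On the download server, before streaming the file:
if (!isset($_COOKIE['dl_token']) || $_COOKIE['dl_token'] !== $token) {
    header('Location: http://www.example.com/page.html');
    exit;
}
// ...otherwise read the file and send it to the client.
?>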
{ "language": "en", "url": "https://stackoverflow.com/questions/54669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Any good building tools for a C++ project, which can replace make? i'm wondering if there is any nice and neat tool to replace the GNU Autotools or Make to build a very large C++ project, which are such a complicated thing to use. It is simple to generate all the files that de Autotools require if the project is small, but if the source code is divided in many directories, with multiple third party libraries and many dependencies, you fall into the "Autotools Hell".. thanks for any recommendations A: We use Jam for a complex C++ project - one benefit is that it is nicely cross platform. Rather than me spout off the benefits, just have a quick look at this link: http://www.perforce.com/jam/jam.html A: Noel Llopis has written a few articles comparing build systems. Part 1 of "The Quest for the Perfect Build System" is at http://gamesfromwithin.com/the-quest-for-the-perfect-build-system. Part 2 follows on the same site. A retry of Scons is reported at http://gamesfromwithin.com/?p=104. Conclusions: SCons is too slow ... Jam is the winner. A: Cook is another tool that can be used to replace make. I've seen several large companies using it. So, it is enterprise ready even though the website looks rather dated. http://miller.emu.id.au/pmiller/software/cook/ A: I have using SCons on a big c++ project (on both Linux and Windows), and it works really well. scons all -j8 (which compiles object files in parallel) is very cool! A: CMake? (generates makefiles, so technically not a replacement as such). I've also seen "SCons" pop up in a few places recently. Haven't created anything with it myself though. A: The Google V8 JavaScript Engine is written in C++ and uses SCons, so I guess that's one vote for it. A: Take a look at waf. I think you can consider it as a complete replacement for make and autotools. It is based on python. One thing I like about waf is that the waf script itself is ~100kb standalone that you place in your project root directory. This is in contrast to make or rake and friends, where the build system must be installed first. You do have to have python >=2.3 installed though. ~$ ./waf configure && ./waf && ./waf install Waf's equivalent to Makefiles is the wscript file. It is a python script waf reads, and it defines at least 3 functions: set_options(), configure(conf) and build(bld). You can guess what each of them does. To jumpstart, I recommend looking in the demos/cpp/* files in the source distribution. Also take a look at the doc/waf.pdf file; it's a 12-page document that will quickly get you up and running. A: For a comparison of the speed of various C++ build tools, you can have a look at this benchmark: https://psycledelics.github.io/wonderbuild/benchmarks/time.xml A: I use bakefile for my build process and I became a big fan! I never have to write a Makefile myself anymore, let alone horrible GNU autotools scripts. All I have to do is provide an XML file that describes the build targets. 
Bakefile can convert this into a Makefile that gets all the (header file) dependencies right etc, where different Makefile formats may be chosen (pasting the list from the documentation): available formats are: autoconf GNU autoconf Makefile.in files borland Borland C/C++ makefiles dmars Digital Mars makefiles dmars_smake Digital Mars makefiles for SMAKE gnu GNU toolchain makefiles (Unix) mingw MinGW makefiles (mingw32-make) msevc4prj MS eMbedded Visual C++ 4 project files msvc MS Visual C++ nmake makefiles msvc6prj MS Visual C++ 6.0 project files msvs2003prj MS Visual Studio 2003 project files msvs2005prj MS Visual Studio 2005 project files symbian Symbian development files watcom OpenWatcom makefiles xcode2 Xcode 2.4 project files I usually use the autoconf option, and it writes the annoying GNU autotools scripts for me. I did have to adapt the configure.ac script, so that configure finds a certain library on any system. But it wasn't too bad. Getting the autoconf scripts in this way is nice, because I don't have to write them all by myself, and when I distribute my project it will look as if I had written them, and users can still build my project in the god-given way, with ./configure && make && make install
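For comparison with the bakefile workflow above, the SCons option praised in earlier answers also needs very little. A minimal SConstruct sketch (the target and source file names are made up):
# SConstruct
env = Environment()                  # default toolchain detection
env.Program(target='myapp',
            source=['main.cpp', 'util.cpp'])
# Build object files in parallel with: scons -j8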
{ "language": "en", "url": "https://stackoverflow.com/questions/54674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: How Do You Insert XML Into an existing XML node I'm not even sure if it's possible but say I have some XML: <source> <list> <element id="1"/> </list> </source> And I would like to insert into list: <element id="2"/> Can I write an XSLT to do this? A: Add these 2 template definitions to an XSLT file: <xsl:template match="@*|node()"> <xsl:copy> <xsl:apply-templates select="@*|node()"/> </xsl:copy> </xsl:template> <xsl:template match="list"> <list> <xsl:apply-templates select="@* | *"/> <element id="2"/> </list> </xsl:template>
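For clarity, applying those two templates (the identity copy plus the override for list) to the sample input above should produce, whitespace aside:
<source>
  <list>
    <element id="1"/>
    <element id="2"/>
  </list>
</source>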
{ "language": "en", "url": "https://stackoverflow.com/questions/54683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: How to get a list of current open windows/process with Java? Does anyone know how I can get the currently open windows or processes of a local machine using Java? What I'm trying to do is list the currently open tasks, windows or processes, like in the Windows Task Manager, but using a multi-platform approach - using only Java if it's possible. A: YAJSW (Yet Another Java Service Wrapper) looks like it has JNA-based implementations of its org.rzo.yajsw.os.TaskList interface for win32, linux, bsd and solaris and is under an LGPL license. I haven't tried calling this code directly, but YAJSW works really well when I've used it in the past, so you shouldn't have too many worries. A: You can easily retrieve the list of running processes using jProcesses: List<ProcessInfo> processesList = JProcesses.getProcessList(); for (final ProcessInfo processInfo : processesList) { System.out.println("Process PID: " + processInfo.getPid()); System.out.println("Process Name: " + processInfo.getName()); System.out.println("Process Used Time: " + processInfo.getTime()); System.out.println("Full command: " + processInfo.getCommand()); System.out.println("------------------"); } A: Finally, with Java 9+ it is possible with ProcessHandle: public static void main(String[] args) { ProcessHandle.allProcesses() .forEach(process -> System.out.println(processDetails(process))); } private static String processDetails(ProcessHandle process) { return String.format("%8d %8s %10s %26s %-40s", process.pid(), text(process.parent().map(ProcessHandle::pid)), text(process.info().user()), text(process.info().startInstant()), text(process.info().commandLine())); } private static String text(Optional<?> optional) { return optional.map(Object::toString).orElse("-"); } Output: 1 - root 2017-11-19T18:01:13.100Z /sbin/init ... 639 1325 www-data 2018-12-04T06:35:58.680Z /usr/sbin/apache2 -k start ... 23082 11054 huguesm 2018-12-04T10:24:22.100Z /.../java ProcessListDemo A: There is no platform-neutral way of doing this. In the 1.6 release of Java, a "Desktop" class was added that allows portable ways of browsing, editing, mailing, opening, and printing URIs. It is possible this class may someday be extended to support processes, but I doubt it. If you are only interested in Java processes, you can use the java.lang.management API for getting thread/memory information on the JVM.
A: For windows I use following: Process process = new ProcessBuilder("tasklist.exe", "/fo", "csv", "/nh").start(); new Thread(() -> { Scanner sc = new Scanner(process.getInputStream()); if (sc.hasNextLine()) sc.nextLine(); while (sc.hasNextLine()) { String line = sc.nextLine(); String[] parts = line.split(","); String unq = parts[0].substring(1).replaceFirst(".$", ""); String pid = parts[1].substring(1).replaceFirst(".$", ""); System.out.println(unq + " " + pid); } }).start(); process.waitFor(); System.out.println("Done"); A: This might be useful for apps with a bundled JRE: I scan for the folder name that i'm running the application from: so if you're application is executing from: C:\Dev\build\SomeJavaApp\jre-9.0.1\bin\javaw.exe then you can find if it's already running in J9, by: public static void main(String[] args) { AtomicBoolean isRunning = new AtomicBoolean(false); ProcessHandle.allProcesses() .filter(ph -> ph.info().command().isPresent() && ph.info().command().get().contains("SomeJavaApp")) .forEach((process) -> { isRunning.set(true); }); if (isRunning.get()) System.out.println("SomeJavaApp is running already"); } A: Using code to parse ps aux for linux and tasklist for windows are your best options, until something more general comes along. For windows, you can reference: http://www.rgagnon.com/javadetails/java-0593.html Linux can pipe the results of ps aux through grep too, which would make processing/searching quick and easy. I'm sure you can find something similar for windows too. A: The below program will be compatible with Java 9+ version only... To get the CurrentProcess information, public class CurrentProcess { public static void main(String[] args) { ProcessHandle handle = ProcessHandle.current(); System.out.println("Current Running Process Id: "+handle.pid()); ProcessHandle.Info info = handle.info(); System.out.println("ProcessHandle.Info : "+info); } } For all running processes, import java.util.List; import java.util.stream.Collectors; public class AllProcesses { public static void main(String[] args) { ProcessHandle.allProcesses().forEach(processHandle -> { System.out.println(processHandle.pid()+" "+processHandle.info()); }); } } A: On Windows there is an alternative using JNA: import com.sun.jna.Native; import com.sun.jna.platform.win32.*; import com.sun.jna.win32.W32APIOptions; public class ProcessList { public static void main(String[] args) { WinNT winNT = (WinNT) Native.loadLibrary(WinNT.class, W32APIOptions.UNICODE_OPTIONS); WinNT.HANDLE snapshot = winNT.CreateToolhelp32Snapshot(Tlhelp32.TH32CS_SNAPPROCESS, new WinDef.DWORD(0)); Tlhelp32.PROCESSENTRY32.ByReference processEntry = new Tlhelp32.PROCESSENTRY32.ByReference(); while (winNT.Process32Next(snapshot, processEntry)) { System.out.println(processEntry.th32ProcessID + "\t" + Native.toString(processEntry.szExeFile)); } winNT.CloseHandle(snapshot); } } A: String line; Process process = Runtime.getRuntime().exec("ps -e"); process.getOutputStream().close(); BufferedReader input = new BufferedReader(new InputStreamReader(process.getInputStream())); while ((line = input.readLine()) != null) { System.out.println(line); //<-- Parse data here. } input.close(); We have to use process.getOutputStream.close() otherwise it will get locked in while loop. 
A: This is another approach to parse the the process list from the command "ps -e": try { String line; Process p = Runtime.getRuntime().exec("ps -e"); BufferedReader input = new BufferedReader(new InputStreamReader(p.getInputStream())); while ((line = input.readLine()) != null) { System.out.println(line); //<-- Parse data here. } input.close(); } catch (Exception err) { err.printStackTrace(); } If you are using Windows, then you should change the line: "Process p = Runtime.getRun..." etc... (3rd line), for one that looks like this: Process p = Runtime.getRuntime().exec (System.getenv("windir") +"\\system32\\"+"tasklist.exe"); Hope the info helps! A: The only way I can think of doing it is by invoking a command line application that does the job for you and then screenscraping the output (like Linux's ps and Window's tasklist). Unfortunately, that'll mean you'll have to write some parsing routines to read the data from both. Process proc = Runtime.getRuntime().exec ("tasklist.exe"); InputStream procOutput = proc.getInputStream (); if (0 == proc.waitFor ()) { // TODO scan the procOutput for your data } A: package com.vipul; import java.applet.Applet; import java.awt.Checkbox; import java.awt.Choice; import java.awt.Font; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.util.ArrayList; import java.util.List; public class BatchExecuteService extends Applet { public Choice choice; public void init() { setFont(new Font("Helvetica", Font.BOLD, 36)); choice = new Choice(); } public static void main(String[] args) { BatchExecuteService batchExecuteService = new BatchExecuteService(); batchExecuteService.run(); } List<String> processList = new ArrayList<String>(); public void run() { try { Runtime runtime = Runtime.getRuntime(); Process process = runtime.exec("D:\\server.bat"); process.getOutputStream().close(); InputStream inputStream = process.getInputStream(); InputStreamReader inputstreamreader = new InputStreamReader( inputStream); BufferedReader bufferedrReader = new BufferedReader( inputstreamreader); BufferedReader bufferedrReader1 = new BufferedReader( inputstreamreader); String strLine = ""; String x[]=new String[100]; int i=0; int t=0; while ((strLine = bufferedrReader.readLine()) != null) { // System.out.println(strLine); String[] a=strLine.split(","); x[i++]=a[0]; } // System.out.println("Length : "+i); for(int j=2;j<i;j++) { System.out.println(x[j]); } } catch (IOException ioException) { ioException.printStackTrace(); } } } You can create batch file like TASKLIST /v /FI "STATUS eq running" /FO "CSV" /FI "Username eq LHPL002\soft" /FI "MEMUSAGE gt 10000" /FI "Windowtitle ne N/A" /NH A: This is my code for a function that gets the tasks and gets their names, also adding them into a list to be accessed from a list. It creates temp files with the data, reads the files and gets the task name with the .exe suffix, and arranges the files to be deleted when the program has exited with System.exit(0), it also hides the processes being used to get the tasks and also java.exe so that the user can't accidentally kill the process that runs the program all together. 
private static final DefaultListModel tasks = new DefaultListModel(); public static void getTasks() { new Thread() { @Override public void run() { try { File batchFile = File.createTempFile("batchFile", ".bat"); File logFile = File.createTempFile("log", ".txt"); String logFilePath = logFile.getAbsolutePath(); try (PrintWriter fileCreator = new PrintWriter(batchFile)) { String[] linesToPrint = {"@echo off", "tasklist.exe >>" + logFilePath, "exit"}; for(String string:linesToPrint) { fileCreator.println(string); } fileCreator.close(); } int task = Runtime.getRuntime().exec(batchFile.getAbsolutePath()).waitFor(); if(task == 0) { FileReader fileOpener = new FileReader(logFile); try (BufferedReader reader = new BufferedReader(fileOpener)) { String line; while(true) { line = reader.readLine(); if(line != null) { if(line.endsWith("K")) { if(line.contains(".exe")) { int index = line.lastIndexOf(".exe", line.length()); String taskName = line.substring(0, index + 4); if(! taskName.equals("tasklist.exe") && ! taskName.equals("cmd.exe") && ! taskName.equals("java.exe")) { tasks.addElement(taskName); } } } } else { reader.close(); break; } } } } batchFile.deleteOnExit(); logFile.deleteOnExit(); } catch (FileNotFoundException ex) { Logger.getLogger(Functions.class.getName()).log(Level.SEVERE, null, ex); } catch (IOException | InterruptedException ex) { Logger.getLogger(Functions.class.getName()).log(Level.SEVERE, null, ex); } catch (NullPointerException ex) { // This stops errors from being thrown on an empty line } } }.start(); } public static void killTask(String taskName) { new Thread() { @Override public void run() { try { Runtime.getRuntime().exec("taskkill.exe /IM " + taskName); } catch (IOException ex) { Logger.getLogger(Functions.class.getName()).log(Level.SEVERE, null, ex); } } }.start(); }
{ "language": "en", "url": "https://stackoverflow.com/questions/54686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "107" }
Q: Modifying the AJAX Control Toolkit Dropdown extender I am using the example on the AJAX website for the DropDownExtender. I'm looking to make the target control (the label) have the DropDown image appear always, instead of just when I hover over it. Is there any way to do this? A: This can be done using the following script tag: <script> function pageLoad() { $find('TextBox1_DropDownExtender')._dropWrapperHoverBehavior_onhover(); $find('TextBox1_DropDownExtender').unhover = VisibleMe; } function VisibleMe() { $find('TextBox1_DropDownExtender')._dropWrapperHoverBehavior_onhover(); } </script> I found this and some other tips at this dot net curry example. It works but I'd also consider writing a new control based on the drop down extender exposing a property to set the behaviour you want on or off. Writing a new AJAX control isn't too hard, more fiddly than anything.
{ "language": "en", "url": "https://stackoverflow.com/questions/54702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Should I choose scripting or compiled code for small tasks? I'm a Java programmer, and I like my compiler, static analysis tools and unit testing frameworks as tools that help me quickly deliver robust and efficient code. The JRE is pretty much everywhere I would work, too. Given that situation, I can't see a reason why I would ever choose to use shell scripting, vb scripting etc, no matter how small the task is if I wear one of my other hats like my cool black sysadmin fedora. I don't wear the other hats too often, under what circumstances should I choose scripting over writing compiled code? A: If you are comfortable with Java, and the JRE is everywhere you work, then I would say keep using it. There are, however, languages like perl and python that are particularly suited to quickly solving problems. I would suggest learning either perl or python, and then use your judgement on when to use it. A: If I have a small problem that I'd like to solve quickly, I tend to use a scripting language. The code tax is smaller, and, for me at least, the result comes faster. A: Whatever you think will be most efficient for you! I had a co-worker who seemed to use a different language for every task; Perl for quick text processing, PHP for small internal web applications, .NET for our main product, cygwin for filesystem stuff. He preferred to use the technology which was most specific to the task at hand. Personally, I find that context switching between technologies is painful. My day-to-day work is in .NET, so that's pretty much the terms I think in. For most tasks I find it more efficient to knock something up in C# using SnippetCompiler than I would to hack around in PowerShell or a scripting environment. A: I would say where it makes sense. If it's going to take you longer to open up your IDE, compile the script, etc. than it would to edit a script file and be done with it than use script file. If you're not going to be changing the thing often and are quicker at Java coding then go that route :) A: It is usually quicker to write scripts than compiled programmes. You don't have to worry so much about portability between different platforms and environments. A shell script will run pretty much every where on most platforms. Because you're a java developer and you mention that you have java everywhere you might look at groovy (http://groovy.codehaus.org/). It is a scripting language written in java with the ability to use java libraries. A: The way I see it (others disagree) all your code needs to be maintainable. The smallest useful collection of code is that which a single person maintains. Even that benefits from the language and tools you mentioned. However, there may obviously be tasks where specialised languages are more advantageous than a single general purpose language. A: If you can write it quicker in Java, then go for it. Just try and be aware of what the various scripting languages can do. e.g. Don't make a full blown Java app when you can do the same with a bash one-liner. A: Weigh the importance of the tool against popping open a text editor for a quick edit vs. opening IDE, recompiling, redeploying, etc. A: Of course, the prime directive should be to "use whatever you're comfortable with." If Java is getting the job done right and on time, stick to it. But a lot of the scripting languages could save you some time because they're attuned to different problems. If you're using regular expressions, the scripting languages are a good fit. 
If you're dropping into shell commands, scripts are nice. I tend to use Ruby scripts whenever I'm writing something that's small, because it's quick to write, easy to maintain, and (with Gems) easy to bolt on additional functionality without needing to use JARs or anything. Your mileage will, of course, vary.
A: At the end of the day this is a question that only you can answer for yourself. Based on the fact that you said "I can't see a reason why I would ever choose to use shell scripting , ..." then it's probably the case that you should never choose it right now. But if I were you I would pick a scripting language like Python, Ruby or Perl and start trying to solve some of these small problems with this language. Over time you will start to get a feel for when it is more appropriate to write a quick script than build a full-blown solution.
A: I use scripting languages for writing programs which are not expected to be maintained beyond a few executions. Most of these languages are light on boilerplate syntax and do have a REPL. Both these features enable rapid prototyping.
Since you already know Java, you can try JVM languages like Groovy, JRuby, BeanShell etc. Scala has much lighter syntax than Java, has a REPL, is statically typed and runs on the JVM - you might give that a shot as well.
{ "language": "en", "url": "https://stackoverflow.com/questions/54703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Programmatically accessing Data in an ASP.NET 2.0 Repeater This is an ASP.Net 2.0 web app. The Item template looks like this, for reference: <ItemTemplate> <tr> <td class="class1" align=center><a href='url'><img src="img.gif"></a></td> <td class="class1"><%# DataBinder.Eval(Container.DataItem,"field1") %></td> <td class="class1"><%# DataBinder.Eval(Container.DataItem,"field2") %></td> <td class="class1"><%# DataBinder.Eval(Container.DataItem,"field3") %></td> <td class="class1"><%# DataBinder.Eval(Container.DataItem,"field4") %></td> </tr> </ItemTemplate> Using this in codebehind: foreach (RepeaterItem item in rptrFollowupSummary.Items) { string val = ((DataBoundLiteralControl)item.Controls[0]).Text; Trace.Write(val); } I produce this: <tr> <td class="class1" align=center><a href='url'><img src="img.gif"></a></td> <td class="class1">23</td> <td class="class1">1/1/2000</td> <td class="class1">-2</td> <td class="class1">11</td> </tr> What I need is the data from Field1 and Field4 I can't seem to get at the data the way I would in say a DataList or a GridView, and I can't seem to come up with anything else on Google or quickly leverage this one to do what I want. The only way I can see to get at the data is going to be using a regex to go and get it (Because a man takes what he wants. He takes it all. And I'm a man, aren't I? Aren't I?). Am I on the right track (not looking for the specific regex to do this; forging that might be a followup question ;) ), or am I missing something? The Repeater in this case is set in stone so I can't switch to something more elegant. Once upon a time I did something similar to what Alison Zhou suggested using DataLists, but it's been some time (2+ years) and I just completely forgot about doing it this way. Yeesh, talk about overlooking something obvious. . . So I did as Alison suggested and it works fine. I don't think the viewstate is an issue here, even though this repeater can get dozens of rows. I can't really speak to the question if doing it that way versus using the instead (but that seems like a fine solution to me otherwise). Obviously the latter is less of a viewstate footprint, but I'm not experienced enough to say when one approach might be preferrable to another without an extreme example in front of me. Alison, one question: why literals and not labels? Euro Micelli, I was trying to avoid a return trip to the database. Since I'm still a little green relative to the rest of the development world, I admit I don't necessarily have a good grasp of how many database trips is "just right". There wouldn't be a performance issue here (I know the app's load enough to know this), but I suppose I was trying to avoid it out of habit, since my boss tends to emphasize fewer trips where possible. 
A: Off the top of my head, you can try something like this: <ItemTemplate> <tr> <td "class1"><asp:Literal ID="litField1" runat="server" Text='<%# Bind("Field1") %>'/></td> <td "class1"><asp:Literal ID="litField2" runat="server" Text='<%# Bind("Field2") %>'/></td> <td "class1"><asp:Literal ID="litField3" runat="server" Text='<%# Bind("Field3") %>'/></td> <td "class1"><asp:Literal ID="litField4" runat="server" Text='<%# Bind("Field4") %>'/></td> </tr> </ItemTemplate> Then, in your code behind, you can access each Literal control as follows: foreach (RepeaterItem item in rptrFollowupSummary.Items) { Literal lit1 = (Literal)item.FindControl("litField1"); string value1 = lit1.Text; Literal lit4 = (Literal)item.FindControl("litField4"); string value4 = lit4.Text; } This will add to your ViewState but it makes it easy to find your controls. A: Since you are working with tabular data, I'd recommend using the GridView control. Then you'll be able to access individual cells. Otherwise, you can set the td's for Field1 and Field4 to runat="server" and give them ID's. Then in the codebehind, access the InnerText property for each td. A: If you can afford a smidge more overhead in the generation, go for DataList and use the DataKeys property, which will save the data fields you need. You could also use labels in each of your table cells and be able to reference items with e.Item.FindControl("LabelID"). A: The <%#DataBinder.Eval(...) %> mechanism is not Data Binding in a "strict" sense. It is a one-way technique to put text in specific places in the template. If you need to get the data back out, you have to either: * *Get it from your source data *Populate the repeater with a different mechanism Note that the Repeater doesn't save the DataSource between postbacks, You can't just ask it to give you the data later. The first method is usually easier to work with. Don't assume that it's too expensive to reacquire your data from the source, unless you prove it to yourself by measuring; it's usually pretty fast. The biggest problem with this technique is if the source data can change between calls. For the second method, a common technique is to use a Literal control. See Alison Zhou's post for an example of how to do it. I usually personally prefer to fill the Literal controls inside of the OnItemDataBound instead A: @peacedog: Correct; Alison's method is perfectly acceptable. The trick with the database roundtrips: they are not free, obviously, but web servers tend to be very "close" (fast, low-latency connection) to the database, while your users are probably "far" (slow, high-latency connection). Because of that, sending data to/from the browser via cookies, ViewState, hidden fields or any other method can actually be "worse" than reading it again from your database. There are also security implications to keep in mind (Can an "evil" user fake the data coming back from the browser? Would it matter if they do?). But quite often it doesn't make any difference in performance. That's why you should do what works more naturally for your particular problem and worry about it only if performance starts to be a real-world issue. Good luck!
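A rough sketch of the ItemDataBound approach mentioned above, assuming the same rptrFollowupSummary repeater and the litField1/litField4 literals from the earlier answer; the DataRowView cast is an assumption for a DataTable-backed source, so adjust it to whatever your data source actually returns:

// Markup: <asp:Repeater ID="rptrFollowupSummary" runat="server"
//             OnItemDataBound="rptrFollowupSummary_ItemDataBound"> ... </asp:Repeater>
protected void rptrFollowupSummary_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
    // Only data rows carry a DataItem; skip headers, footers and separators.
    if (e.Item.ItemType != ListItemType.Item && e.Item.ItemType != ListItemType.AlternatingItem)
        return;

    // For a DataTable/DataView source the item is a DataRowView; cast to your own type otherwise.
    DataRowView row = (DataRowView)e.Item.DataItem;

    ((Literal)e.Item.FindControl("litField1")).Text = row["Field1"].ToString();
    ((Literal)e.Item.FindControl("litField4")).Text = row["Field4"].ToString();
}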
{ "language": "en", "url": "https://stackoverflow.com/questions/54708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is the best style/syntax to use with Rhino Mocks? Multiple approaches exist to write your unit tests when using Rhino Mocks: * *The Standard Syntax *Record/Replay Syntax *The Fluent Syntax What is the ideal and most frictionless way? A: For .NET 2.0, I recommend the record/playback model. We like this because it separates clearly your expectations from your verifications. using(mocks.Record()) { Expect.Call(foo.Bar()); } using(mocks.Playback()) { MakeItAllHappen(); } If you're using .NET 3.5 and C# 3, then I'd recommend the fluent syntax. A: Interesting question! My own preference is the for the reflection-based syntax (what I guess you mean by the Standard Syntax). I would argue that this is the most frictionless, as it does not add much extra code: you reference the stubs directly on your interfaces as though they were properly implemented. I do also quite like the Fluent syntax, although this is quite cumbersome. The Record/Replay syntax is as cumbersome as the Fluent syntax (if not more so, seemingly), but less intuitive (to me at least). I've only used NMock2, so the Record/Replay syntax is a bit alien to me, whilst the Fluent syntax is quite familiar. However, as this post suggests, if you prefer separating your expectations from your verifications/assertions, you should opt for the Fluent syntax. It's all a matter of style and personal preference, ultimately :-)
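For comparison, and since none of the answers show it, this is roughly what the fluent (Arrange-Act-Assert) style looks like in Rhino Mocks 3.5+. The method names (GenerateMock, Stub, AssertWasCalled) are quoted from memory, and IFoo/MakeItAllHappen are just the placeholder names used earlier in the thread, so treat this as a sketch rather than a reference:

// Arrange - no record/playback blocks are needed in the AAA style.
var foo = MockRepository.GenerateMock<IFoo>();
foo.Stub(x => x.Bar()).Return(42);

// Act
MakeItAllHappen(foo);

// Assert - interactions are verified after the fact.
foo.AssertWasCalled(x => x.Bar());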
{ "language": "en", "url": "https://stackoverflow.com/questions/54709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Change the "From:" address in Unix "mail" Sending a message from the Unix command line using mail TO_ADDR results in an email from $USER@$HOSTNAME. Is there a way to change the "From:" address inserted by mail? For the record, I'm using GNU Mailutils 1.1/1.2 on Ubuntu (but I've seen the same behavior with Fedora and RHEL). [EDIT] $ mail -s Testing [email protected] Cc: From: [email protected] Testing . yields Subject: Testing To: <[email protected]> X-Mailer: mail (GNU Mailutils 1.1) Message-Id: <E1KdTJj-00025z-RK@localhost> From: <chris@localhost> Date: Wed, 10 Sep 2008 13:17:23 -0400 From: [email protected] Testing The "From: [email protected]" line is part of the message body, not part of the header. A: Plus it's good to use -F option to specify Name of sender. Something like this: mail -s "$SUBJECT" $MAILTO -- -F $MAILFROM -f ${MAILFROM}@somedomain.com Or just look at available options: http://www.courier-mta.org/sendmail.html A: It's also possible to set both the From name and from address using something like: echo test | mail -s "test" [email protected] -- -F'Some Name<[email protected]>' -t For some reason passing -F'Some Name' and [email protected] doesn't work, but passing in the -t to sendmail works and is "easy". A: I derived this from all the above answers. Nothing worked for me when I tried each one of them. I did lot of trail and error by combining all the above answers and concluded on this. I am not sure if this works for you but it worked for me on Ununtu 12.04 and RHEL 5.4. echo "This is the body of the mail" | mail -s 'This is the subject' '<[email protected]>,<[email protected]>' -- -F '<SenderName>' -f '<[email protected]>' One can send the mail to any number of people by adding any number of receiver id's and the mail is sent by SenderName from [email protected] Hope this helps. A: On Centos 5.3 I'm able to do: mail -s "Subject" [email protected] -- -f [email protected] < body The double dash stops mail from parsing the -f argument and passes it along to sendmail itself. A: Here are some options: * *If you have privelige enough, configure sendmail to do rewrites with the generics table *Write the entire header yourself (or mail it to yourself, save the entire message with all headers, and re-edit, and send it with rmail from the command line *Send directly with sendmail, use the "-f" command line flag and don't include your "From:" line in your message These aren't all exactly the same, but I'll leave it to you look into it further. On my portable, I have sendmail authenticating as a client to an outgoing mail server and I use generics to make returning mail come to another account. It works like a charm. I aggregate incoming mail with fetchmail. A: I don't know if it's the same with other OS, but in OpenBSD, the mail command has this syntax: mail to-addr ... -sendmail-options ... sendmail has -f option where you indicate the email address for the FROM: field. The following command works for me. mail [email protected] -f [email protected] A: Thanks BEAU mail -s "Subject" [email protected] -- -f [email protected] I just found this and it works for me. The man pages for mail 8.1 on CentOS 5 doesn't mention this. For -f option, the man page says: -f Read messages from the file named by the file operand instead of the system mailbox. (See also folder.) If no file operand is specified, read messages from mbox instead of the system mailbox. So anyway this is great to find, thanks. 
A: On CentOS this worked for me: echo "email body" | mail -s "Subject here" -r from_email_address email_address_to A: On Debian 7 I was still unable to correctly set the sender address using answers from this question, (would always be the hostname of the server) but resolved it this way. Install heirloom-mailx apt-get install heirloom-mailx ensure it's the default. update-alternatives --config mailx Compose a message. mail -s "Testing from & replyto" -r "sender <[email protected]>" -S replyto="[email protected]" [email protected] < <(echo "Test message") A: GNU mailutils's 'mail' command doesn't let you do this (easily at least). But If you install 'heirloom-mailx', its mail command (mailx) has the '-r' option to override the default '$USER@$HOSTNAME' from field. echo "Hello there" | mail -s "testing" -r [email protected] [email protected] Works for 'mailx' but not 'mail'. $ ls -l /usr/bin/mail lrwxrwxrwx 1 root root 22 2010-12-23 08:33 /usr/bin/mail -> /etc/alternatives/mail $ ls -l /etc/alternatives/mail lrwxrwxrwx 1 root root 23 2010-12-23 08:33 /etc/alternatives/mail -> /usr/bin/heirloom-mailx A: mail -s "$(echo -e "This is the subject\nFrom: Paula <[email protected]>\n Reply-to: [email protected]\nContent-Type: text/html\n")" [email protected] < htmlFileMessage.txt the above is my solution....any extra headers can be added just after the from and before the reply to...just make sure you know your headers syntax before adding them....this worked perfectly for me. A: In my version of mail ( Debian linux 4.0 ) the following options work for controlling the source / reply addresses * *the -a switch, for additional headers to apply, supplying a From: header on the command line that will be appended to the outgoing mail header *the $REPLYTO environment variable specifies a Reply-To: header so the following sequence export [email protected] mail -aFrom:[email protected] -s 'Testing' The result, in my mail clients, is a mail from [email protected], which any replies to will default to [email protected] NB: Mac OS users: you don't have -a , but you do have $REPLYTO NB(2): CentOS users, many commenters have added that you need to use -r not -a NB(3): This answer is at least ten years old(1), please bear that in mind when you're coming in from Google. A: echo "body" | mail -S [email protected] "Hello" -S lets you specify lots of string options, by far the easiest way to modify headers and such. A: echo "test" | mailx -r [email protected] -s 'test' [email protected] It works in OpenBSD. A: this worked for me echo "hi root"|mail [email protected] -s'testinggg' root A: On CentOS 5.5, the easiest way I've found to set the default from domain is to modify the hosts file. If your hosts file contains your WAN/public IP address, simply modify the first hostname listed for it. For example, your hosts file may look like: ... 11.22.33.44 localhost default-domain whatever-else.com ... To make it send from whatever-else.com, simply modify it so that whatever-else.com is listed first, for example: ... 11.22.33.44 whatever-else.com localhost default-domain ... I can't speak for any other distro (or even version of CentOS) but in my particular case, the above works perfectly. A: What allowed me to have a custom reply-to address on an Ubuntu 16.04 with UTF-8 encoding and a file attachment: Install the mail client: sudo apt-get install heirloom-mailx Edit the SMTP configuration: sudo vim /etc/ssmtp/ssmtp.conf mailhub=smtp.gmail.com:587 FromLineOverride=YES [email protected] AuthPass=??? 
UseSTARTTLS=YES

Send the mail:

sender='[email protected]'
recipient='[email protected]'
zipfile="results/file.zip"
today=`date +\%d-\%m-\%Y`
mailSubject='My subject on the '$today

read -r -d '' mailBody << EOM
Find attached the zip file.
Regards,
EOM

mail -s "$mailSubject" -r "Name <$sender>" -S replyto="$sender" -a $zipfile $recipient < <(echo $mailBody)

A: None of the above solutions are working for me...

#!/bin/bash
# Message
echo "My message" > message.txt
# Mail
subject="Test"
mail_header="From: John Smith <[email protected]>"
recipients="[email protected]"
#######################################################################
cat message.txt | mail -s "$subject" -a "$mail_header" -t "$recipients"

A: In recent versions of GNU mailutils' mail it is simply mail -r [email protected]. Looking at the raw sent mail, it seems to set both Return-Path: <[email protected]> and From: [email protected].

A: The answers provided before didn't work for me on CentOS 5. I installed mutt. It has a lot of options. With mutt you do it this way:

export [email protected]
export [email protected]
mutt -s Testing [email protected]
{ "language": "en", "url": "https://stackoverflow.com/questions/54725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "99" }
Q: Quality vs. ROI - When is Good Enough, good enough? UPDATED: I'm asking this from a development perspective; however, to illustrate, a canonical non-development example that comes to mind is that if it costs, say, $10,000 to keep an uptime rate of 99%, then it theoretically can cost $100,000 to keep a rate of 99.9%, and possibly $1,000,000 to keep a rate of 99.99%. Somewhat like calculus in approaching 0, as we closely approach 100%, the cost can increase exponentially. Therefore, as a developer or PM, where do you decide that the deliverable is "good enough" given the time and monetary constraints, e.g.: are you getting a good ROI at 99%, 99.9%, 99.99%? I'm using a non-development example because I'm not sure of a solid metric for development. Maybe in the above example "uptime" could be replaced with "function point to defect ratio", or some such reasonable measure of the rate of bugs vs. the complexity of code. I would also welcome input regarding all stages of a software development lifecycle. Keep the classic Project Triangle constraints in mind (quality vs. speed vs. cost). And let's assume that the customer wants the best quality you can deliver given the original budget.
A: There's no way to answer this without knowing what happens when your application goes down.
* *If someone dies when your application goes down, uptime is worth spending millions or even billions of dollars on (aerospace, medical devices).
*If someone may be injured if your software goes down, uptime is worth hundreds of thousands or millions of dollars (industrial control systems, auto safety devices)
*If someone loses millions of dollars if your software goes down, uptime is worth spending millions on (financial services, large e-commerce apps).
*If someone loses thousands of dollars if your software goes down, uptime is worth spending thousands on (retail, small e-commerce apps).
*If someone will swear at the computer and loses productivity while it reboots when your software goes down, then uptime is worth spending thousands on (most internal software).
*etc.
Basically take (cost of going down) x (number of times the software will go down) and you know how much to spend on uptime.
A: The Quality vs Good Enough discussion I've seen has a practical ROI at 95% defect fixes. Obviously show stoppers / critical defects are fixed (and always there are the exceptions, like airplane autopilots etc., that need to not have so many defects). I can't seem to find the reference to the 95% defect fixes; it is either in Rapid Development or in Applied Software Measurement by Capers Jones. Here is a link to a useful strategy for attacking code quality: http://www.gamedev.net/reference/articles/article1050.asp
A: The client, of course, would likely balk at that number and might say no more than 1 hour of downtime per year is acceptable. That's 12 times more stable. Do you tell the customer, sorry, we can't do that for $100,000, or do you make your best attempt, hoping your analysis was conservative? Flat out tell the customer that what they want isn't reasonable. In order to gain that kind of uptime, a massive amount of money would be needed, and realistically, reaching that percentage of uptime consistently just isn't possible. I personally would go back to the customer and tell them that you'll provide them with the best setup with 100k and set up an outage report guideline.
Something like: for every outage you have, we will complete an investigation as to why this outage happened, and what we will do to make the chances of it happening again almost non-existent. I think offering SLAs is just a mistake.
A: I think the answer to this question depends entirely on the individual application. Software that has an impact on human safety has much different requirements than, say, an RSS feed reader.
A: The project triangle is a gross simplification. In lots of cases you can actually save time by improving quality, for example by reducing repairs and avoiding costs in maintenance. This is not only true in software development. Toyota lean production proved that this works in manufacturing too. The whole process of software development is far too complex to make generalizations on cost vs quality. Quality is a fuzzy concept that consists of multiple factors. Is testable code of higher quality than performant code? Is maintainable code of higher quality than testable code? Do you need testable code for an RSS reader or performant code? And for a fly-by-wire F16? It's more productive to make informed decisions on a case-by-case basis. And don't be afraid to over-invest in quality. It's usually much cheaper and safer than under-investing.
A: To answer in an equally simplistic way.. ..When you stop hearing from the customers (and not because they stopped using your product).. except for enhancement requests and bouquets :) And it's not a triangle, it has 4 corners - Cost, Time, Quality and Scope.
A: To expand on what "17 of 26" said, the answer depends on value to the customer. In the case of critical software, like aircraft controller applications, the value to the customer of a high quality rating by whatever measure they use is quite high. To the user of an RSS feed reader, the value of high quality is considerably lower. It's all about the customer (notice I didn't say user - sometimes they're the same, and sometimes they're not).
A: Chasing the word "Quality" is like chasing the horizon. I have never seen anything (in the IT world or outside) that is 100% quality. There's always room for improvement. Secondly, "quality" is an overly broad term. It means something different to everyone and is subjective in its degree of implementation. That being said, every effort boils down to what "engineering" means--making the right choices to balance cost, time and key characteristics (i.e. speed, size, shape, weight, etc.) These are constraints.
{ "language": "en", "url": "https://stackoverflow.com/questions/54737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Tool for debugging makefiles I have a large legacy codebase with very complicated makefiles, with lots of variables. Sometimes I need to change them, and I find that it's very difficult to figure out why the change isn't working the way I expect. What I'd like to find is a tool that basically does step-through debugging of the "make" process, where I would give it a directory, and I would be able to see the value of different variables at different points in the process. None of the debug flags to make seem to show me what I want, although it's possible that I'm missing something. Does anyone know of a way to do this?
A: Have you been looking at the output from running make -n and make -np, and the biggie make -nd? Are you using a fairly recent version of gmake? Have you looked at the free chapter on Debugging Makefiles available on O'Reilly's site for their excellent book "Managing Projects with GNU Make" (Amazon Link)?
A: I'm not aware of any specific flag that does exactly what you want, but --print-data-base sounds like it might be useful.
A: I'm sure that remake is what you are looking for. From the homepage: remake is a patched and modernized version of GNU make utility that adds improved error reporting, the ability to trace execution in a comprehensible way, and a debugger. It has a gdb-like interface and is supported by mdb-mode in (x)emacs, which means breakpoints, watches, etc. And there's DDD if you don't like (x)emacs.
A: remake --debugger all
More info:
https://vimeo.com/97397484
https://github.com/rocky/remake/wiki/Installing
A: From the man page on make command-line options:
-n, --just-print, --dry-run, --recon
Print the commands that would be executed, but do not execute them.
-d
Print debugging information in addition to normal processing. The debugging information says which files are being considered for remaking, which file-times are being compared and with what results, which files actually need to be remade, which implicit rules are considered and which are applied--- everything interesting about how make decides what to do.
--debug[=FLAGS]
Print debugging information in addition to normal processing. If the FLAGS are omitted, then the behaviour is the same as if -d was specified. FLAGS may be: 'a' for all debugging output same as using -d, 'b' for basic debugging, 'v' for more verbose basic debugging, 'i' for showing implicit rules, 'j' for details on invocation of commands, and 'm' for debugging while remaking makefiles.
A: There is a GNU make debugger project at http://gmd.sf.net which looks quite useful. The main feature supported by gmd is breakpointing, which may be more useful than stepping. To use this, you download gmd from http://gmd.sf.net and gmsl from http://gmsl.sf.net, and do an 'include gmd' in your makefile.
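One more trick, added here as a general-purpose sketch rather than something from the answers above: plain GNU make can print variable values at specific points with $(info)/$(warning), which often answers the "what is this variable right here?" question without a debugger. The target and variable names below are placeholders:

# Parse-time: prints when make reads this line of the makefile.
$(info CFLAGS is now: $(CFLAGS))

# Run-time: prints (with file/line context) when the recipe is expanded.
# Recipe lines must start with a tab.
mytarget:
	$(warning building $@ with OBJS = $(OBJS))
	$(CC) $(CFLAGS) -o $@ $(OBJS)

# Generic helper: `make print-CFLAGS` echoes the current value of CFLAGS.
print-%:
	@echo '$* = $($*)'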
{ "language": "en", "url": "https://stackoverflow.com/questions/54753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "87" }
Q: How can you do paging with NHibernate? For example, I want to populate a gridview control in an ASP.NET web page with only the data necessary for the # of rows displayed. How can NHibernate support this? A: You can also take advantage of the Futures feature in NHibernate to execute the query to get the total record count as well as the actual results in a single query. Example // Get the total row count in the database. var rowCount = this.Session.CreateCriteria(typeof(EventLogEntry)) .Add(Expression.Between("Timestamp", startDate, endDate)) .SetProjection(Projections.RowCount()).FutureValue<Int32>(); // Get the actual log entries, respecting the paging. var results = this.Session.CreateCriteria(typeof(EventLogEntry)) .Add(Expression.Between("Timestamp", startDate, endDate)) .SetFirstResult(pageIndex * pageSize) .SetMaxResults(pageSize) .Future<EventLogEntry>(); To get the total record count, you do the following: int iRowCount = rowCount.Value; A good discussion of what Futures give you is here. A: I suggest that you create a specific structure to deal with pagination. Something like (I'm a Java programmer, but that should be easy to map): public class Page { private List results; private int pageSize; private int page; public Page(Query query, int page, int pageSize) { this.page = page; this.pageSize = pageSize; results = query.setFirstResult(page * pageSize) .setMaxResults(pageSize+1) .list(); } public List getNextPage() public List getPreviousPage() public int getPageCount() public int getCurrentPage() public void setPageSize() } I didn't supply an implementation, but you could use the methods suggested by @Jon. Here's a good discussion for you to take a look. A: From NHibernate 3 and above, you can use QueryOver<T>: var pageRecords = nhSession.QueryOver<TEntity>() .Skip((PageNumber - 1) * PageSize) .Take(PageSize) .List(); You may also want to explicitly order your results like this: var pageRecords = nhSession.QueryOver<TEntity>() .OrderBy(t => t.AnOrderFieldLikeDate).Desc .Skip((PageNumber - 1) * PageSize) .Take(PageSize) .List(); A: public IList<Customer> GetPagedData(int page, int pageSize, out long count) { try { var all = new List<Customer>(); ISession s = NHibernateHttpModule.CurrentSession; IList results = s.CreateMultiCriteria() .Add(s.CreateCriteria(typeof(Customer)).SetFirstResult(page * pageSize).SetMaxResults(pageSize)) .Add(s.CreateCriteria(typeof(Customer)).SetProjection(Projections.RowCountInt64())) .List(); foreach (var o in (IList)results[0]) all.Add((Customer)o); count = (long)((IList)results[1])[0]; return all; } catch (Exception ex) { throw new Exception("GetPagedData Customer da hata", ex); } } When paging data is there another way to get typed result from MultiCriteria or everyone does the same just like me ? Thanks A: How about using Linq to NHibernate as discussed in this blog post by Ayende? Code Sample: (from c in nwnd.Customers select c.CustomerID) .Skip(10).Take(10).ToList(); And here is a detailed post by the NHibernate team blog on Data Access With NHibernate including implementing paging. A: ICriteria has a SetFirstResult(int i) method, which indicates the index of the first item that you wish to get (basically the first data row in your page). It also has a SetMaxResults(int i) method, which indicates the number of rows you wish to get (i.e., your page size). 
For example, this criteria object gets the first 10 results of your data grid: criteria.SetFirstResult(0).SetMaxResults(10); A: Most likely in a GridView you will want to show a slice of data plus the total number of rows (rowcount) of the total amount of data that matched your query. You should use a MultiQuery to send both the Select count(*) query and .SetFirstResult(n).SetMaxResult(m) queries to your database in a single call. Note the result will be a list that holds 2 lists, one for the data slice and one for the count. Example: IMultiQuery multiQuery = s.CreateMultiQuery() .Add(s.CreateQuery("from Item i where i.Id > ?") .SetInt32(0, 50).SetFirstResult(10)) .Add(s.CreateQuery("select count(*) from Item i where i.Id > ?") .SetInt32(0, 50)); IList results = multiQuery.List(); IList items = (IList)results[0]; long count = (long)((IList)results[1])[0]; A: You don't need to define 2 criterias, you can define one and clone it. To clone nHibernate criteria you can use a simple code: var criteria = ... (your criteria initializations)...; var countCrit = (ICriteria)criteria.Clone();
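Tying the last two answers together, here is a sketch of how the cloned criteria can supply the row count while the original returns the page slice; Projections.RowCount and UniqueResult are standard NHibernate calls, but treat the exact wiring (and the Customer entity reused from the earlier answer) as illustrative:

ICriteria criteria = session.CreateCriteria(typeof(Customer));
// ... add your restrictions here, before cloning, so both queries share the same filters ...

ICriteria countCriteria = (ICriteria)criteria.Clone();
countCriteria.SetProjection(Projections.RowCount());
int totalRows = countCriteria.UniqueResult<int>();

IList<Customer> page = criteria
    .SetFirstResult(pageIndex * pageSize)
    .SetMaxResults(pageSize)
    .List<Customer>();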
{ "language": "en", "url": "https://stackoverflow.com/questions/54754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "109" }
Q: Getting the back/fwd history of the WebBrowser Control In C# WinForms, what's the proper way to get the backward/forward history stacks for the System.Windows.Forms.WebBrowser? A: Check out http://www.bsalsa.com/downloads.html. This is a series of Delphi components (free source code, you can see an example of this here: http://staruml.cvs.sourceforge.net/staruml/staruml/staruml/components/plastic-components/src/embeddedwb.pas?revision=1.1&view=markup - it's the starUML projects code) and they have, among other things, a way to get at the history, favorites, etc using the IE MSHTML interfaces. It's written in Object Pascal but it shouldn't be too hard to figure out what's going on. If you download the "Embedded Web Browser Components Package" take a look at the stuff in EmbeddedWB_D2005\Source - there's all sorts of goodies there. A: It doesn't look like it's possible. My suggestion would be to catch the Navigated event and maintain your own list. A possible problem with that is when the user clicks back in the browser, you don't know to unwind the stack.
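A minimal sketch of the "maintain your own list" suggestion, assuming the usual WinForms usings; it treats every Navigated event as forward navigation, so (as the answer notes) a click on the control's own Back button will still look like a new entry:

private readonly List<Uri> history = new List<Uri>();
private int currentIndex = -1;

private void webBrowser1_Navigated(object sender, WebBrowserNavigatedEventArgs e)
{
    // Ignore notifications for the page we already consider current.
    if (currentIndex >= 0 && history[currentIndex].Equals(e.Url))
        return;

    // Treat the navigation as "forward": discard any forward entries, then append.
    if (currentIndex < history.Count - 1)
        history.RemoveRange(currentIndex + 1, history.Count - currentIndex - 1);

    history.Add(e.Url);
    currentIndex = history.Count - 1;
}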
{ "language": "en", "url": "https://stackoverflow.com/questions/54758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Unfiltering NSPasteboard Is there a way to unfilter an NSPasteboard for what the source application specifically declared it would provide? I'm attempting to serialize pasteboard data in my application. When another application places an RTF file on a pasteboard and then I ask for the available types, I get eleven different flavors of said RTF, everything from the original RTF to plain strings to dyn.* values. Saving off all that data into a plist or raw data on disk isn't usually a problem as it's pretty small, but when an image of any considerable size is placed on the pasteboard, the resulting output can be tens of times larger than the source data (with multiple flavors of TIFF and PICT data being made available via filtering). I'd like to just be able to save off what the original app made available if possible.
John, you are far more observant than myself or the gentleman I work with who's been doing Mac programming since dinosaurs roamed the earth. Neither of us ever noticed the text you highlighted... and I've not a clue why. Staring too long at the problem, apparently. And while I accepted your answer as the correct answer, it doesn't exactly answer my original question. What I was looking for was a way to identify flavors that can become other flavors simply by placing them on the pasteboard AND to know which of these types were originally offered by the provider. While walking the types list will get me the preferred order for the application that provided them, it won't tell me which ones I can safely ignore as they'll be recreated when I refill the pasteboard later. I've come to the conclusion that there isn't a "good" way to do this. [NSPasteboard declaredTypesFromOwner] would be fabulous, but it doesn't exist.
{ "language": "en", "url": "https://stackoverflow.com/questions/54760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: ClickOnce Deployment, system update required Microsoft.mshtml We have an application that works with MS Office and uses Microsoft.mshtml.dll. We use ClickOnce to deploy the application. The application deploys without issues on most machines, but sometimes we get errors saying "System Update Required, Microsoft.mshtl.dll should be in the GAC". We tried installing the PIA for Office without luck. Since Microsoft.mshtml.dll is a system dependent file we cannot include it in the package and re-distribute it. What would be the best way to deploy the application? A: Do you know which version of MS Office you are targeting? These PIAs are very specific to the version of Office. I remember when we were building a smart client application, we used to have Build VM machines, each one targeting a specific version of Outlook. Another hurdle was not being able to specify these PIAs as pre-requisites or bundle them with the app. These PIAs needs to be installed on the client using Office CD (at least for 2003 version). A: You can set up prerequisites in a clickonce app, which would check for specific assemblies in the GAC before allowing users to install. You would still need to manually install an app that includes the required office dll outside of ClickOnce, but you would at least avoid throwing errors. A: We are targeting Office 2003 and Office 2007, but using the Office 11 (2003) dlls as Office 2007 is backward compatible. The problem occurs only for Microsoft.mshtml.dll file. This file is setup as a prerequisite in the ClickOnce app. On this particular install we tried installing both the Office 2003 and Office 2007 PIA's to no avail. A: I had this problem too. The solution to this is to go to the References folder in the solution explorer, then right click Microsoft.mshtml, then Properties. In the Propoerties page mark "Copy Local" as True. This should work.
{ "language": "en", "url": "https://stackoverflow.com/questions/54770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Website Monitoring Libraries There has been some talk of Website performance monitoring tools and services on stackoverflow, however, they seem fairly expensive for what they actually do. Are there any good opensource libraries for automating checking/monitoring the availability of a website? A: If you just want to know if your server is serving out content or not, take a look at Montastic. I use it, and am pleased. Plus its free! It will ping your site periodically, and if it doesn't get a 200 status, it lets you know. A: Intelligent website monitoring by simulating a human user is done with Sahi + OMD. http://www.nagios-wiki.de/_media/workshop/2012/sahi2omd_simon_meggle_monitoring_workshop_2012.pdf A: I have always used Zabbix especially for critical web sites. It uses MySql for the database and it has a PHP frontend. Of course it is open source and it is very flexible. It uses servers to stick data in the database and agents collect the data and send it to the servers. It is very scalable with this respect. I cannot recommend this software enough. I have all kinds of monitoring going on, not just web servers. A: Check out mon.itor.us as well.
{ "language": "en", "url": "https://stackoverflow.com/questions/54771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What is the correct .NET exception to throw when try to insert a duplicate object into a collection? I have an Asset object that has a property AssignedSoftware, which is a collection. I want to make sure that the same piece of Software is not assigned to an Asset more than once. In Add method I check to see if the Software already exist, and if it does, I want to throw an exception. Is there a standard .NET exception that I should be throwing? Or does best practices dictate I create my own custom exception? A: .Net will throw a System.ArgumentException if you try to add an item to a hashtable twice with the same key value, so it doesnt look like there is anything more specific. You may want to write your own exception if you need something more specific. A: You should probably throw ArgumentException, as that is what the base library classes do. A: From the Class Library design guidelines for errors (http://msdn.microsoft.com/en-us/library/8ey5ey87(VS.71).aspx): In most cases, use the predefined exception types. Only define new exception types for programmatic scenarios, where you expect users of your class library to catch exceptions of this new type and perform a programmatic action based on the exception type itself. This is in lieu of parsing the exception string, which would negatively impact performance and maintenance. ... Throw an ArgumentException or create an exception derived from this class if invalid parameters are passed or detected. Throw the InvalidOperationException exception if a call to a property set accessor or method is not appropriate given the object's current state. This seems like an "Object state invalid" scenario to me, so I'd pick InvalidOperationException over ArgumentException: The parameters are valid, but not at this point in the objects life. A: Why has InvalidOperationException been accepted as the answer?! It should be an ArgumentException?! InvalidOperationException should be used if the object having the method/property called against it is not able to cope with the request due to uninit'ed state etc. The problem here is not the object being Added to, but the object being passed to the object (it's a dupe). Think about it, if this Add call never took place, would the object still function as normal, YES! This should be an ArgumentException. A: Well, if you really want an collection with unique items, you might want to take a look at the HashSet object (available in C# 3.0). Otherwise, there are two approaches that you can take: * *Create a custom exception for your operation, just as you had stated *Implement an Add() method that returns a boolean result: true if the item is added and false if the item already has a duplicate in the collection Either approach can be considered best practice, just as long as you are consistent in its use.
{ "language": "en", "url": "https://stackoverflow.com/questions/54789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Is it possible to build MSBuild files (visual studio sln) from the command line in Mono? Is it possible to build Visual Studio solutions without having to fire up MonoDevelop? A: Current status (Mono 2.10, 2011): xbuild is now able to build all versions of Visual Studio / MSBuild projects, including .sln files. Simply run xbuild just as you would execute msbuild on Microsoft .Net Framework. You don't need Monodevelop installed, xbuild comes with the standard Mono installation. If your build uses custom tasks, they should still work if they don't depend on Windows executables (such as rmdir or xcopy). When you are editing project files, use standard Windows path syntax - they will be converted by xbuild, if necessary. One important caveat to this rule is case sensitivity - don't mix different casings of the same file name. If you have a project that does this, you can enable compatibility mode by invoking MONO_IOMAP=case xbuild foo.sln (or try MONO_IOMAP=all). Mono has a page describing more advanced MSBuild project porting techniques. Mono 2.0 answer (2008): xbuild is not yet complete (it works quite well with VS2005 .csproj files, has problems with VS2008 .csproj and does not handle .sln). Mono 2.1 plans to merge the code base of mdtool (MonoDevelop command line build engine) into it, but currently mdtool is a better choice. mdtool build -f:project.sln or man mdtool if you have MonoDevelop installed. A: xbuild now supports solutions and projects, both VS2005 and VS2008. A: I think you are looking for xbuild: http://www.mono-project.com/Microsoft.Build A: for now as per August 2017 we can use msbuild command as xbuild is depreciated.
{ "language": "en", "url": "https://stackoverflow.com/questions/54790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64" }
Q: Conferences and training for architects, best practices, It's getting close to the time when I need to submit training and travel requests. I'm looking for conferences and classes in the coming 12 months that are geared toward improving coding and software development, best practices, system architecture, etc. They need to be in the US or Canada since I'll never get approval for anything else. Here are a few that I've found, but I'm looking for other suggestions. Also feedback on any of these would be appreciated too: Software Process Symposium Software Development Best Practices Better Software Conference IASA Connections The IASA event looks like the closest match for me but doesn't give me enough lead time to request & schedule it. A: If you are using Microsoft technologies, the patterns & practices Summit is a good event -- if you are not working pretty much 100% in the Microsoft .NET space, though, it would be of less use. A: I highly recommend the Nothin’ but .NET Developer Boot Camp more information http://www.jpboodhoo.com/training.oo A: Carnegie Mellon University's Software Engineering Institute--in Pittsburgh, PA--has certification programs and trainings on Software Architecture: * *http://www.sei.cmu.edu/architecture/certificate_program.html A: InfoQ's conferences seem to be valuable. Some parts of their materials is available online. A: I also found this, but there is not a class currently scheduled: Agile Boot Camp for .NET Journeyman to Master Series Here is one that is scheduled that looks like it might actually be the same thing: Headspring Agile Boot Camp A: If you aren't successful in getting budget, maybe you could consider attending local user groups such as OWASP. I participate in OWASP Hartford. A: You can do IASA online training and certification and similarly from another source for .NET http://www.soaschool.com/certifications/net
{ "language": "en", "url": "https://stackoverflow.com/questions/54791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Limiting CPU speed for profiling I'm trying to optimize several bottlenecks on an application which is supposed to run on a really wide range of CPUs and architectures (some of them very close to embeded devices). The results of my profiler, however, aren't really significant because of the speed of my CPU. Is there any way (preferably under Windows or Mac OS X) to limit the speed of my CPU for profiling purposes? I've thought about using a virtual machine, but haven't found any with such functionality. A: This works well and supports multicore. http://www.cpukiller.com/ A: It's a common misconception that you need to know how fast your code is to know where your performance problems are. That confuses problem-finding with problem-measurement. This is the method I use. If there is some wasteful logic in the program, it will be wasteful no matter what CPU runs it. What you need to know is where it is. For measurement, you don't need to know how big it is; you only need to know that it is big enough to need to be fixed. Usually there are a number of problems, of different sizes. You will probably find the biggest ones first, but no matter what order you fix them in, each one you fix will make it easier to find the remaining ones, because they will take a larger percentage. A: I'm afraid I don't know any answer other than to start looking around in your area for old hardware. The CPU isn't the only variable that can (usually) affect things. L1/L2 cache size, memory bus speed, memory speed/latency, hard drive speed, etc. are all significant factors in many applications. A: There was an app on Downloadsquad.com recently. I dont remember the name of it but it did some fun stiff woth processors and task manager. It may have only been to manage what apps are on what cpu but maybe it would give you this. I will try to look for it this afternoon, and respond back if I find it. A: Many profilers (for example oprofile - but thats linux only) let you set the frequency that they collect data. See if your profiler supports this, and if not try a different one that does. A: I've thought about using a virtual machine, but haven't found any with such functionality. Why do you need a VM that explicitly offers that functionality? Just limit the CPU usage of the VM in the host OS (where it is just a regular process). That should have exactly the same effect. You can do this e.g. using cpulimit on Linux; similar solutions exist for MS Windows.
{ "language": "en", "url": "https://stackoverflow.com/questions/54795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How do you implement Levenshtein distance in Delphi? I'm posting this in the spirit of answering your own questions. The question I had was: How can I implement the Levenshtein algorithm for calculating edit-distance between two strings, as described here, in Delphi? Just a note on performance: This thing is very fast. On my desktop (2.33 Ghz dual-core, 2GB ram, WinXP), I can run through an array of 100K strings in less than one second. A: function EditDistance(s, t: string): integer; var d : array of array of integer; i,j,cost : integer; begin { Compute the edit-distance between two strings. Algorithm and description may be found at either of these two links: http://en.wikipedia.org/wiki/Levenshtein_distance http://www.google.com/search?q=Levenshtein+distance } //initialize our cost array SetLength(d,Length(s)+1); for i := Low(d) to High(d) do begin SetLength(d[i],Length(t)+1); end; for i := Low(d) to High(d) do begin d[i,0] := i; for j := Low(d[i]) to High(d[i]) do begin d[0,j] := j; end; end; //store our costs in a 2-d grid for i := Low(d)+1 to High(d) do begin for j := Low(d[i])+1 to High(d[i]) do begin if s[i] = t[j] then begin cost := 0; end else begin cost := 1; end; //to use "Min", add "Math" to your uses clause! d[i,j] := Min(Min( d[i-1,j]+1, //deletion d[i,j-1]+1), //insertion d[i-1,j-1]+cost //substitution ); end; //for j end; //for i //now that we've stored the costs, return the final one Result := d[Length(s),Length(t)]; //dynamic arrays are reference counted. //no need to deallocate them end;
{ "language": "en", "url": "https://stackoverflow.com/questions/54797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: How to best merge information, at a server, into a "form", a PDF being generated as the final output Background: I have a VB6 application I've "inherited" that generates a PDF for the user to review using unsupported Acrobat Reader OCX integration. The program generates an FDF file with the data, then renders the merged result when the FDF is merged with a PDF. It only works correctly with Acrobat Reader 4 :-(. Installing a newer version of Acrobat Reader breaks this application, making the users very unhappy. I want to re-architect this app so that it will send the data to be merged to a PDF output generation server. This server will merge the data passed to it onto the form, generate a PDF image of this, and store it, so that any user wishing to view the final result can then simply get the PDF (it is generated just once). If the underlying data is changed, the PDF will be deleted and regenerated next time it is requested. The client program can then have any version of Acrobat Reader they wish, as it will be used exclusively for displaying PDF files (as it was intended). The server will most likely be written in .NET (C#) with Visual Studio 2005, probably as a Web Service... Question: How would others recommend I go about this? Should I use Adobe's Acrobat 9 at the server to do this, puting the data into FDF or Adobe's XML format, and letting Acrobat do the merge? Are there great competitors in the "merge data onto form and output a PDF" space? How do others do this? It has to be API based, no GUI at the server, of course... While some output is generated via FDF/PDF, another part of the application actually sends lines, graphics, and text to the printer (or a form for preview purposes) one page at a time, giving the proper x/y coordinates, font, size, etc. for each, knowing when it is at the end of a page, etc. This code is currently in the program that displays this for the user to review, and it is also in the program that prints the final form to the printer. For consistency between reviewer and printer, I'd like to move this output generation logic to a server as well, either using a good PDF generation API tool or use the code as is and generate a PDF with a PDF printer... and saving this PDF for display by the clients. Googling "Form software" or "fill form software" or similar searches returns sooooooooo much unrelated material, mostly related to UI for users to fill in forms, I just don't know how to properly narrow down my search. This site seems the perfect place to ask such a question, as other programmers must also need to generate similar outputs, and have tried out some great tools. EDIT: I've added PDF tag as well as PDF-generation. Also, my current customer insists on PDF output, but I appreciate the alternative suggestions. A: can't help with VB6 solution, can help with .net or java solution on the server. Get iText or iTextSharp from http://www.lowagie.com/iText/. It has a PdfStamper class that can merge a PDF and FDF FDFReader/FDFWriter classes to generate FDF files, get field names out of PDF files, etc... A: Take my advice. Ditch PDF for XPS. I am working on two apps, both server based. One displays image-based documents as PDFs in a browser. The second uses FixedPage templates to construct XPS documents bound to data sources. My conclusion after working on both projects is that PDFs suck; XPS documents less so. You have to pay cash money for a decent PDF library, whereas XPS comes with the framework. 
A: Take my advice: ditch PDF for XPS. I am working on two apps, both server based. One displays image-based documents as PDFs in a browser. The second uses FixedPage templates to construct XPS documents bound to data sources. My conclusion after working on both projects is that PDFs suck; XPS documents less so. You have to pay cash money for a decent PDF library, whereas XPS support comes with the framework. PDF document generation is a memory hog, has lots of potholes, and isn't very server friendly. XPS documents have a much smaller footprint and fewer chances of shooting yourself in the foot.
A: I have had great success using Microsoft Word. The form is designed in Word and composited with XML data. The document is run through a PDF converter (Neevia in this case, but there are better) to generate the PDF. All of this is done in C#.
A: Same boat. We're currently making PDFs this way: the VB6 app drops a record into SQL (with filename, create date, user, and final destination), the .xls (or .doc) gets moved into a server directory (share), and the server runs a VB.NET service with a file watcher (FileSystemWatcher) monitoring that directory. When the file shows up, the service kicks off Excel (or Word) to convert the file to PDF via Adobe, looks to SQL to figure out what to do with the PDF once it is made, and logs the finish time.
This was the cheap solution - it only took about a day to write the code and another day to debug both ends and roll the build out. It is not the way to do it. Adobe crashes at random times when making the PDFs: it will run for two weeks with no issues at all, and then (like today) it will crash every 5 minutes. Or every other hour. Or at 11:07, 2:43, 3:05, and 6:11.
We're going to move the data out of Excel and Word and drop it directly into PDFs using PDFTron in the next revision. Yes, PDFTron costs money (we bought the 1-kilobuck-per-processor license), but it will do the job, and nicely. XPS is nice, but I, like you, have to provide PDFs. It is the way of things. Check out PDFTron (google it) and see if it will do what you want; then you just have to figure out which license you need and how you're going to pay for it. If someone comes up with something better, I hope they'll vote it to the top of the list!
{ "language": "en", "url": "https://stackoverflow.com/questions/54808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Prevent Multi-Line ASP:Textbox from trimming line feeds
I have the following webform:
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="TestWebApp.Default" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <asp:TextBox ID="txtMultiLine" runat="server" Width="400px" Height="300px" TextMode="MultiLine"></asp:TextBox>
        <br />
        <asp:Button ID="btnSubmit" runat="server" Text="Do A Postback" OnClick="btnSubmitClick" />
    </div>
    </form>
</body>
</html>
and each time I post back, the leading line feeds in the textbox are removed. Is there any way that I can prevent this behavior? I was thinking of creating a custom control that inherits from the textbox, but I wanted to get a sanity check here first.
A: I ended up doing the following in btnSubmitClick():
public void btnSubmitClick(object sender, EventArgs e)
{
    if (this.txtMultiLine.Text.StartsWith("\r\n"))
    {
        this.txtMultiLine.Text = "\r\n" + this.txtMultiLine.Text;
    }
}
I must be really tired or sick or something.
A: I think the problem here is in the way the browser renders the textarea contents, not with ASP.NET per se. Doing this:
public void btnSubmitClick(object sender, EventArgs e)
{
    this.txtMultiLine.Text = "\r\n" + this.txtMultiLine.Text;
}
will let you reach the desired screen output, but you'll add an extra newline to the Text that the user didn't enter.
The ideal solution would be for the TextBox control in ASP.NET to always write the newline AFTER writing the open tag and BEFORE writing the contents of Text. That way, you'd reach the desired effect without altering the contents of the textbox. We could inherit from TextBox and fix this by overriding RenderBeginTag:
public override void RenderBeginTag(HtmlTextWriter writer)
{
    base.RenderBeginTag(writer);
    if (this.TextMode == TextBoxMode.MultiLine)
    {
        writer.Write("\r\n"); // or Environment.NewLine
    }
}
Now, creating a new class for this small issue seems like overkill, so your pragmatic approach is completely acceptable. But I'd change it to run in the PreRender event of the page, which is very late in the page lifecycle and would not interfere with the processing of the submitted text in the OnSubmit event of the button:
protected void Page_Load(object sender, EventArgs e)
{
    this.PreRender += Page_OnPreRender;
}

protected void Page_OnPreRender(object sender, EventArgs e)
{
    this.txtMultiLine.Text = "\r\n" + this.txtMultiLine.Text;
}
{ "language": "en", "url": "https://stackoverflow.com/questions/54833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I monitor trace output of a .Net app?
I'm working on some code that uses the System.Diagnostics.Trace class, and I'm wondering how to monitor what is written via calls to Trace.WriteLine(), both when running in debug mode in Visual Studio and when running outside the debugger.
A: Try DebugView. It works quite nicely.
A: I use a simple little program called 'BareTail', which displays plain text files, updating its display as the file gets written to and following (or wrapping) to the bottom of the file. When running outside the debugger you'll need to attach a file writer to write out the trace information, which you can do by adding a few lines to the .exe.config file (a minimal sketch of the equivalent setup appears after these answers).
Hope that helps ;o)
A: Have a look at DevTracer. It also allows monitoring a .NET application remotely.
DISCLAIMER: I am the developer of DevTracer and therefore my opinion may not be neutral.
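Following up on the .exe.config suggestion above, here is a minimal programmatic sketch of attaching a file-based listener (the log file name is an illustrative assumption; the equivalent can also be declared in the application's .exe.config). Note that the TRACE compilation symbol must be defined, which Visual Studio projects do by default:

using System.Diagnostics;

class TraceToFileSketch
{
    static void Main()
    {
        // Write all Trace output to a text file in addition to any attached debugger.
        Trace.Listeners.Add(new TextWriterTraceListener("trace.log"));
        Trace.AutoFlush = true;   // flush after each write so the file can be tailed live

        Trace.WriteLine("Application started");
    }
}

The resulting trace.log can then be watched with a tail-style viewer such as BareTail while the application runs outside the debugger.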
{ "language": "en", "url": "https://stackoverflow.com/questions/54836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }