Q: How can I run a Windows GUI application as a service? I have an existing GUI application that should have been implemented as a service. Basically, I need to be able to remotely log onto and off of the Windows 2003 server and still keep this program running. Is this even possible? EDIT: Further refinement here... I do not have the source, it's not my application. A: Has anyone used a third-party product like Always Up? Seems to do what I need. It's the capability to keep running through login / logout cycles I need. And the capability to ignore that it's a GUI app and run it anyway. They must be linking into the exe manually and calling WinMain or something. A: You can wrap it up in srvany, though you may need to assign it an actual user account (as opposed to LocalService or some such). A: I've had good experience with winsw. I was able to convert my batch files to services quite easily using it. I've used it for nginx as well, per this answer. A: Do you actually need it to run as a service, or do you just need it to stay running when you aren't connected? If the latter, you can disconnect instead of logging off and the application will continue running. The option should be in the drop-down list after choosing Shut Down, or you can call tsdiscon.exe. A: Windows services cannot have GUIs, so you will need to either get rid of the GUI or separate your application into two pieces - a service with no UI, and a "controller" application. If you have the source code, converting the non-GUI code into a service is easy - Visual Studio has a 'Windows Service' project type that takes care of the wrapping for you, and there is a simple walkthrough that shows you how to create a deployment project that will take care of installation. If you opt for the second route and need to put some of the original GUI code into a controller, the controller and service can communicate via WCF, .NET Remoting or plain socket connections with a protocol you define yourself. If you use Remoting, be sure to use a "chunky" interface that transfers data with as few method invocations as possible - each call has a fair amount of overhead. If the UI is fairly simple, you may be able to get away with using configuration files for input and log files or the Windows Event Log for output. A: Do you have the source? In many cases the difference between a stand-alone application and a service is minimal. Most of the changes are related to hooking the code into the service manager properly. Once done, you'll know that any problems that occur are a result of your programming and not any other program. A: What happens if you create a service that is configured to interact with the desktop? Configure it to run as some user and to start automatically. From the service, call CreateProcess on this other application. I'd guess this is quick to try using C# (C/C++ was a lot of code to even be a service, if I recall). Would that work? BUT! My first thought would be to create a virtual computer in a server-class virtual host (like Virtual Server, Hyper-V, VMware). Those virtual machines will run as a service (or whatever Hyper-V does). The virtual machine would always be running - regardless of logging in and out. Make this virtual computer auto-login to Windows (TweakUI can set this up) and then just launch the GUI app via a shortcut in the Startup folder. You can even remote desktop into it and use the program's GUI (I bet Always Up can't do that). A: You can use ServiceMill to achieve this operation.
Basically you install ServiceMill Server on your server. Then right-click your executable file and choose "Install as a ServiceMill Service". Next you configure some things (user/password, whether you want to interact with the desktop or prefer to hide the UI... and set the start mode to automatic). Another tool from Active+ Software can be a solution: ServiceMill Exe Builder, which allows you to create services from the command line. This is great if you are using a continuous integration server or if you plan to distribute your component as a service without having to think about service integration (plus it is royalty free). A: FireDaemonPro turns most GUI apps into services; it's not free, but it might be worth getting. A: First I would have to ask why your service needs a user interface. Most likely it does not, but you probably need a client that gets data from this service. The reason services don't usually have GUIs is that they may not have a window environment to run in. Services can start and run without a user logged in to the machine. In this case there would be no desktop for the service GUI to run in. Having said that, you can set properties on the service to run as a user, as suggested by Mark. You can also specify in the properties of the service to "Allow service to interact with desktop". Only do this if you know a user will be logged in. A: A service shouldn't have a GUI, since it should run without needing any intervention from a user, and there are all sorts of problems associated with finding and communicating with the correct user's desktop. Since, presumably, the reason for asking this is to be able to remotely monitor the application, the way to do it would be to have two applications: the service side (written basically as a console application) and the client/monitoring GUI side. The service would use some remote connectivity (when I did this I used Named Pipes) to communicate with the client/monitoring application. Either should be able to run without the other, and certainly the service should be able to run without the client.
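A minimal C# sketch of the earlier "create a service that launches the app" suggestion (the executable path is a placeholder, and this assumes a Windows 2003-era box - on Vista and later, session 0 isolation means the spawned GUI would not be visible anyway):

using System.Diagnostics;
using System.ServiceProcess;

public class GuiAppWrapperService : ServiceBase
{
    private Process child;

    protected override void OnStart(string[] args)
    {
        // Hypothetical path - point this at the real GUI executable.
        child = Process.Start(@"C:\Apps\LegacyGuiApp.exe");
    }

    protected override void OnStop()
    {
        // Kill the GUI app when the service stops; it has no other shutdown hook.
        if (child != null && !child.HasExited)
            child.Kill();
    }

    public static void Main()
    {
        ServiceBase.Run(new GuiAppWrapperService());
    }
}

Installed with installutil.exe and set to run under a real user account, this behaves much like the srvany/winsw wrappers described above.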
{ "language": "en", "url": "https://stackoverflow.com/questions/53232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: Should we stop using Zend WinEnabler? (This question is over 6 years old and probably no longer has any relevance.) Our system uses Zend WinEnabler.
* Do you use it?
* Is it obsolete?
* Should we stop using it?
* Is it known to cause handle/memory leaks?
Here is an (old) introduction to it: PHP Creators Unveil New Product that Makes PHP Truly Viable for Windows Environments A: Since Zend appears to not be selling it anymore and all its functionality is available for free (through FastCGI), I would say so. Look at Zend Core (installing Zend Core) if you really want to run PHP on Windows. But really, you should be asking yourself why you are running PHP on Windows at all (we used to do it, and the headaches were enormous, especially since nobody else was doing it).
{ "language": "en", "url": "https://stackoverflow.com/questions/53253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Getting closest element by id I have two elements: <input a> <input b onclick="..."> When b is clicked, I want to access a and manipulate some of its data. A does not have a globally unique name, so document.getElementsByName is out. Looking into the event object, I thought event.target.parentNode would have some function like getElementsByName, but this does not seem to be the case with <td>s. Is there any simple way to do this? A: If a and b are next to each other and have the same parent, you can use the previousSibling property of b to find a. A:
* You should be able to find the element that was clicked from the event object. Depending on your browser you might want e.target or e.srcElement. The code below is from this w3schools example:
function whichElement(e) {
    var targ;
    if (!e) var e = window.event;
    if (e.target) {
        targ = e.target;
    } else if (e.srcElement) {
        targ = e.srcElement;
    }
    if (targ.nodeType == 3) { // defeat Safari bug
        targ = targ.parentNode;
    }
    var tname;
    tname = targ.tagName;
    alert("You clicked on a " + tname + " element.");
}
* You may then use the nextSibling and previousSibling DOM traversal properties. Some more information here. And yet again a w3schools reference for XML DOM Nodes.
A: Prototype also has nice functions to move around in the DOM. In your example something like the following would do the trick: b.up().down('a') And if there is more than one a element at that level you have the power of CSS selectors at your hand to specify exactly which element you want. A: Leave your plain vanilla JavaScript behind. Get jQuery - it will save you a ton of time. http://docs.jquery.com/Selectors
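In modern DOM terms the previousSibling suggestion looks like this (the ids are purely for illustration; previousElementSibling skips the whitespace text nodes that trip up plain previousSibling):

// <input id="a"> <input id="b">
document.getElementById('b').addEventListener('click', function (e) {
    var a = e.target.previousElementSibling;
    if (a) {
        a.value = 'updated from b'; // manipulate the neighbouring input
    }
});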
{ "language": "en", "url": "https://stackoverflow.com/questions/53256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Retaining HTTP POST data when a request is interrupted by a login page Say a user is browsing a website, and then performs some action which changes the database (let's say they add a comment). When the request to actually add the comment comes in, however, we find we need to force them to login before they can continue. Assume the login page asks for a username and password, and redirects the user back to the URL they were going to when the login was required. That redirect works fine for a URL with only GET parameters, but if the request originally contained some HTTP POST data, that is now lost. Can anyone recommend a way to handle this scenario when HTTP POST data is involved? Obviously, if necessary, the login page could dynamically generate a form with all the POST parameters to pass them along (though that seems messy), but even then, I don't know of any way for the login page to redirect the user on to their intended page while keeping the POST data in the request. Edit: One extra constraint I should have made clear - Imagine we don't know if a login will be required until the user submits their comment. For example, their cookie might have expired between when they loaded the form and actually submitted the comment. A: You might want to investigate why Django removed this feature before implementing it yourself. It doesn't seem like a Django-specific problem, but rather yet another cross-site forgery attack. A: Just store all the necessary data from the POST in the session until after the login process is completed. Or have some sort of temp table in the db to store it in and then retrieve it. Obviously this is pseudo-code, but:
if ( !loggedIn ) {
    StorePostInSession();
    ShowLoginForm();
}
if ( postIsStored ) {
    RetrievePostFromSession();
}
Or something along those lines. A: 2 choices:
* Write out the messy form from the login page, and JavaScript form.submit() it to the page.
* Have the login page itself POST to the requesting page (with the previous values), and have that page's controller perform the login verification. Roll this into whatever logic you already have for detecting the not-logged-in user (frameworks vary on how they do this).
In pseudo-MVC:
CommentController {
    void AddComment() {
        if (!Request.User.IsAuthenticated && !AuthenticateUser()) {
            return;
        }
        // add comment to database
    }
    bool AuthenticateUser() {
        if (Request.Form["username"] == "") {
            // show login page
            foreach (Key key in Request.Form) {
                // copy form values
                ViewData.Form.Add("hidden", key, Request.Form[key]);
            }
            ViewData.Form.Action = Request.Url;
            ShowLoginView();
            return false;
        } else {
            // validate login
            return TryLogin(Request.Form["username"], Request.Form["password"]);
        }
    }
}
A: This is one good place where Ajax techniques might be helpful. When the user clicks the submit button, show the login dialog on the client side and validate with the server before you actually submit the page. Another way I can think of is showing or hiding the login controls in a DIV tag dynamically in the main page itself. A: Collect the data on the page they submitted it, and store it in your backend (database?) while they go off through the login sequence; hide a transaction id or similar on the page with the login form. When they're done, return them to the page they asked for by looking it up using the transaction id on the backend, and dump all the data they posted into the form for previewing again, or just run whatever code that page would run.
Note that many systems, e.g. blogs, get around this by having login fields in the same form as the one for posting comments, if the user needs to be logged in to comment and isn't yet. A: I know it says language-agnostic, but why not take advantage of the conventions provided by the server-side language you are using? If it were Java, the data could persist by setting a Request attribute. You would use a controller to process the form, detect the login, and then forward through. If the attributes are set, then just prepopulate the form with that data? Edit: You could also use a Session as pointed out, but I'm pretty sure if you use a forward in Java back to the login page, that the Request attribute will persist.
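A concrete sketch of the session-stash approach in PHP (file names are placeholders; a real version should also expire the stash and tie it to a CSRF token, given the forgery concern raised above):

<?php
session_start();

if (!isset($_SESSION['user'])) {
    // Not logged in: stash the POST and where it was headed, then bounce to login.
    $_SESSION['pending_post'] = $_POST;
    $_SESSION['return_to'] = $_SERVER['REQUEST_URI'];
    header('Location: /login.php');
    exit;
}

if (isset($_SESSION['pending_post'])) {
    // Back from a successful login: replay the stashed comment data.
    $_POST = $_SESSION['pending_post'];
    unset($_SESSION['pending_post']);
}

// ...normal comment-saving logic using $_POST...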
{ "language": "en", "url": "https://stackoverflow.com/questions/53260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How could I get my SVN-only host to pull from a git repository? I'd really like to get our host to pull from our Git repository instead of uploading files manually, but it doesn't have Git installed. So is there a way to trick Subversion (which they do have) into checking out a Git repository? I think I already know the answer, namely bug my host to add Git and live with it until they do, but I thought I would ask anyway. A: This page should provide a workaround for your problem. http://code.google.com/p/support/wiki/ImportingFromGit Basically, you create a read-only clone of your Git repository in the SVN repository format, exporting updates as you go. An SVN hook could be written that fires after each update to copy the new files where you need them.
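For reference, a rough sketch of that read-only bridge using git-svn run on a machine you control (the commands are real git-svn commands, but exact ref names vary by git version, and replaying an existing history takes some care):

# inside a clone of the git repository
git svn init http://svn.example.com/repo/trunk
git svn fetch
git rebase remotes/git-svn   # replay the git history onto the svn-tracking branch
git svn dcommit              # push each git commit to SVN as a revision

After that, the host only ever sees a plain SVN repository to check out.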
{ "language": "en", "url": "https://stackoverflow.com/questions/53290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Ruby Package Include Problems I'm trying to use the Optiflag package in my Ruby code and whenever I try to do the necessary require optiflag.rb, my program fails with the standard no such file to load -- optiflag message. I added the directory with that library to my $PATH variable, but it's still not working. Any ideas? A: Is it a gem? Are you doing require 'rubygems' require 'optiflag' or equivalent? A: It looks like it's a gem, so you need to enable RubyGems before requiring it. This site explains several ways to do it, but as a cheat sheet:
1) Require the rubygems package before using a gem:
require "rubygems"
require "optiflag" # etc
2) Add the -rubygems flag wherever you execute ruby, i.e.: ruby -rubygems Something.rb
3) Add an environment variable called RUBYOPT, giving it an option of rubygems, i.e.: RUBYOPT=rubygems
A: I also keep having this problem with RubyXL; I tried using single and double quotes. Is there something else that needs to be done? Maybe putting a file somewhere? I already successfully installed the gem with sudo gem install rubyXL (RubyXL actually didn't work).
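A defensive require pattern that works whether or not RubyGems is loaded automatically (on Ruby 1.9+ it is; on older rubies the rescue kicks in):

begin
  require 'optiflag'
rescue LoadError
  require 'rubygems' # older rubies don't load gems automatically
  require 'optiflag'
end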
{ "language": "en", "url": "https://stackoverflow.com/questions/53292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Java Web Services API, however I can't run a JVM on my server I'm trying to use some data from a PlanPlusOnline account. They only provide a java web services API. The server for the site where the data will be used does not allow me to install Tomcat (edit: or a JVM for that matter). I'm not going to lie, I am a Java software engineer, and I do some web work on the side. I'm not familiar with web services or servlets, but I was willing to give it a shot. I'd much rather they have JSON access to the data, but as far as I know they don't. Any ideas? EDIT: to clarify. The web service provided by planplusonline is Java based. I am trying to access the data from this web service without using Java. I believe this is possible now, but I need to do more research. Anyone who can help point me in the right direction is appreciated. A: To follow up with jodonnell's comment, a Web service connection can be made in just about any server-side language. It is just that the API example they provided was in Java, probably because PlanPlusOnline is written in Java. If you have a URL for the service, and an access key, then all you really need to do is figure out how to traverse the XML returned. If you can't do Java, then I suggest PHP because it could be already installed and have the proper modules loaded. This link might be helpful: http://www.onlamp.com/pub/a/php/2007/07/26/php-web-services.html A: Are you trying to implement a client to a web service hosted somewhere else? If so, Java's not necessary. You can do web service clients in .NET, PHP, Ruby, or pretty much any modern web technology out there. All you need is a WSDL document to provide metadata about how to invoke the services. A: If I am understanding your question correctly, you only need to connect to an existing web service and not create your own web service. If that is the case, and maybe I am missing something, I do not believe you will need Tomcat at all. If you are using Netbeans you can create a new Desktop or Web application, and then right-click the project name. Select New, then Other, and select Web Client. Enter the information for where to find the WSDL (usually a URL) and the other required information. Once you have added the Web Client, create a new class that actually makes your calls to the web service. If the web service name was PlanPlusOnline then you could have something like:
public final class PlanPlusOnlineClient
{
    // instance of this class so that we do not have to reinstantiate it every time
    private static PlanPlusOnlineClient _instance = new PlanPlusOnlineClient();

    // generated class by netbeans with information about the web service
    private PlanPlusOnlineService service = null;

    // another generated class by netbeans, but this is a property of the service
    // that contains information about the individual methods available.
    private PlanPlusOnline port = null;

    private PlanPlusOnlineClient()
    {
        try
        {
            service = new PlanPlusOnlineService();
            port = service.getPlanPlusOnlinePort();
        }
        catch (MalformedURLException ex)
        {
            MessageLog.error(this, ex.getClass().getName(), ex);
        }
    }

    public static PlanPlusOnlineClient getInstance()
    {
        return _instance;
    }

    public static String getSomethingInteresting(String param)
    {
        // this will call one of the actual methods the web service provides.
        return getInstance().port.getSomethingInteresting(param);
    }
}
I hope this helps you along your way with this. You should also check out http://www.netbeans.org/kb/60/websvc/ for some more information about Netbeans and web services. I am sure it is similar in other IDEs.
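Following up on the PHP suggestion, a sketch using PHP's built-in SoapClient (the WSDL URL and the operation name here are made up - substitute whatever PlanPlusOnline actually publishes):

<?php
// Hypothetical WSDL location
$client = new SoapClient('https://api.planplusonline.example/service?wsdl');

// Lists the operations the WSDL exposes - useful for exploring an unfamiliar service
var_dump($client->__getFunctions());

// Call an operation by name; parameters depend on the WSDL
$result = $client->getSomethingInteresting(array('param' => 'value'));

No JVM is needed on the consuming server, only the PHP soap extension.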
{ "language": "en", "url": "https://stackoverflow.com/questions/53295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Can you bind a DataTrigger to an Attached Property? In WPF, is it possible for a DataTrigger to bind to an attached property? I essentially want to use a converter on an attached property to provide a style when a particular validation rule has been broken. I am using markup like the following: <DataTrigger Binding="{Binding Path=Validation.Errors, RelativeSource={RelativeSource Self}, Converter={StaticResource RequiredToBoolConverter}}" Value="True"> <Setter Property="Background" Value="LightGreen" /> </DataTrigger> However, when this runs, I get the following: System.Windows.Data Error: 39 : BindingExpression path error: 'Validation' property not found on 'object' ''TextBox' (Name='')'. BindingExpression:Path=Validation.Errors; DataItem='TextBox' (Name=''); target element is 'TextBox' (Name=''); target property is 'NoTarget' (type 'Object') If I change my DataTrigger binding path to "Text", I do not get the databinding error (but of course it does not provide the behaviour I am seeking). A: You need to wrap the property in parentheses: <DataTrigger Binding="{Binding Path=(Validation.Errors).YourAttachedProperty,...
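Put together, the corrected trigger can look like this (a sketch; it tests the error count directly, though the same parenthesized path also works in front of the converter from the question):

<Style TargetType="TextBox">
    <Style.Triggers>
        <DataTrigger Binding="{Binding Path=(Validation.Errors).Count, RelativeSource={RelativeSource Self}}" Value="0">
            <Setter Property="Background" Value="LightGreen" />
        </DataTrigger>
    </Style.Triggers>
</Style>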
{ "language": "en", "url": "https://stackoverflow.com/questions/53301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Best way to draw text with OpenGL and Cocoa? Does anyone know an easy way to draw arbitrary text in a Cocoa NSOpenGLView? I have a couple of constraints.
* The text on screen may change from frame to frame (for example, a framerate display in the corner)
* I would like to be able to select any font installed on the system at any size
A: Have you taken a look at the Cocoa OpenGL sample code? It includes "a texture class for strings, showing how to use an NSImage to write a string into and then texture from for high quality font rendering."
{ "language": "en", "url": "https://stackoverflow.com/questions/53309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Hibernate crops clob values oddly I have a one-to-many relationship between two tables. The many table contains a clob column. The clob column looks like this in hibernate:
@CollectionOfElements(fetch = EAGER)
@JoinTable(name = NOTE_JOIN_TABLE, joinColumns = @JoinColumn(name = "note"))
@Column(name = "substitution")
@IndexColumn(name = "listIndex", base = 0)
@Lob
private List<String> substitutions;
So basically I may have a Note with some substitutions, say "foo" and "fizzbuzz". So in my main table I could have a Note with id 4 and in my NOTE_JOIN_TABLE I would have two rows, "foo" and "fizzbuzz", that both have a relationship to the Note. However, when one of these is inserted into the DB the larger substitution values are cropped to be as long as the shortest. So in this case I would have "foo" and "fiz" in the DB instead of "foo" and "fizzbuzz". Do you have any idea why this is happening? I have checked and confirmed they aren't being cropped anywhere in our code; it's definitely hibernate. A: The LOB/CLOB column may not be large enough. Hibernate has some default column sizes for LOB/CLOB that are relatively small (may depend on db). Anyway, try something like this:
@Lob
@Column(length = 2147483647)
Adjust the length (in bytes) based on your needs. A: Many JDBC drivers, early versions of Oracle in particular, have problems while inserting LOBs. Did you make sure that the query Hibernate fires, with the same parameters bound, works successfully in your JDBC driver?
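Putting the two answers together, the mapping from the question with an explicit column length might look like this (2147483647 is Integer.MAX_VALUE, the largest value the length attribute accepts; size it for your schema):

@CollectionOfElements(fetch = EAGER)
@JoinTable(name = NOTE_JOIN_TABLE, joinColumns = @JoinColumn(name = "note"))
@Column(name = "substitution", length = 2147483647) // override Hibernate's small Lob default
@IndexColumn(name = "listIndex", base = 0)
@Lob
private List<String> substitutions;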
{ "language": "en", "url": "https://stackoverflow.com/questions/53316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What are the disadvantages of Typed DataSets I come from a world that favors building your own rather than relying on libraries and frameworks built by others. After escaping this world I have found the joy, and ease, of using such tools as Typed DataSets within Visual Studio. So besides the loss of flexibility, what else do you lose? Are there performance factors (disregarding the procs vs dynamic sql debate)? Limitations? A: Performance is improved with typed datasets over untyped datasets (though I've never found performance issues with trivial things like that worth worrying about). I'd say the biggest pain is just keeping them in sync with your database - I can't speak for VS 2008 but prior versions do not provide good support for this. I literally drag the procs onto the designer every time the resultset's schema changes. Not fun. But, you do get compile-time type checking which is great, and things like Customer.Name instead of Dataset.Tables(0).Rows(0)("Name"). So, if your schema is relatively static, they may be worth it, but otherwise, I wouldn't bother. You could also look into a real ORM. A: I only gave Typed Datasets a very short try. I stopped when I found my code breaking about 2/3 of the way down a 1,000+ line file of generated code. The other thing I didn't like was I thought I'd get to write code like Customer.Name, but by default I seemed to get code like CustomerDataSet.Customers[0].Name, where Customers[0] was of type CustomersRow. Still nicer to read than untyped datasets, but not really the semantics I was looking for. Personally I headed off down the route of ActiveRecord/NHibernate, and haven't looked back since. A: There is nothing wrong with typed datasets. They are not perfect; however, they are a next step toward solving the object-relational impedance mismatch problem. The only problem I faced is weak support for schema changes. Partial classes can help, but not in every case. A: I'm not a big fan of typed datasets. There is no way that I can improve performance using typed datasets. They are purely a wrapper over the existing database objects, and I cannot consider access like employee.empName an advantage - the casting is still done in the wrapper. Another overhead is the huge chunk of code: LOC is increased, and there are so many active objects in memory. There is no automatic update of the schema. In any case, typed datasets are not useful for developers except for the comfort they give. As developers we don't have any right to demand comfort :) Take the pain... take the pain out of the user :) A: Typed datasets are by far an upgrade from the world of classic ADO disconnected recordsets. I have found that they are still nice to use in simple situations where you need to perform some sort of task that's row oriented - i.e. you still want to work in the context of a database paradigm of rows, columns, constraints and the like. If used wisely in that context, then you're OK. There are a few areas where their benefits diminish:
* I think the synchronization issues brought up here already are definitely a problem, especially if you've gone and customized them or used them as a base class.
* Depending on the number of data tables in the dataset, they can become quite fat. I mean this in the sense that multi-table datasets typically present a relational view of data. What comes along with that, besides the in-memory footprint, are definitions of keys and potentially other constraints.
Again, if that's what you need, great, but if you need to traverse data quickly, one time, then an efficient loop with a data reader might be a better candidate.
* Because of their complex definition and potential size, using them in remoting situations is ill-advised as well.
* Finally, when you start realizing you need to work with your data in objects that are relevant to your problem domain, their use becomes more of a hindrance than a benefit. You constantly find yourself moving fields in and out of rows in tables in the set and concerning yourself with the state of the tables and rows. You begin to realize that they made OO languages to make it easier to represent real-world problem domain objects, and that working with tables, rows and columns doesn't really fit into that way of thinking.
Generally in my experience, I am finding that complex systems (e.g. many large enterprise systems) are better off moving away from the use of datasets and more towards a solid domain-specific object model - how you get your data in and out of those objects (using ORMs for example) is another topic of conversation altogether. However, in small projects where there's a form slapped in front of data that needs basic maintenance and some other simple operations, great productivity can be achieved with the dataset paradigm - especially when coupled with Visual Studio/.Net's powerful databinding features. A: The main criticism I would extend is that they don't scale well - performance suffers because of the overhead, when you get to higher numbers of transactions, compared to lightweight business entities or DTOs or LINQ to SQL. There's also the headache of schema changes. For an "industrial strength" architecture, they're probably not the way to go; they will cause issues in the long run. I would still definitely use them for quick and dirty PoCs or simple utilities - they're really convenient to work with given the tooling in Visual Studio, and get the job done. A: Datasets are nice for quickly slapping something together with Visual Studio, if all the issues mentioned previously are ignored. One problem I did not see mentioned is the visual scalability of datasets within the design surface of Visual Studio. As the system grows, the size of the datasets inevitably becomes unwieldy. The visual aspects of the designer simply don't scale. It is a real pain to scroll around trying to find a particular table or relation when the dataset has more than 20 or so tables.
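The compile-time checking trade-off in a nutshell (standard typed vs. untyped DataSet access; the table and column names are illustrative):

// Untyped: stringly-typed, typos surface at runtime
string name = (string)dataSet.Tables["Customers"].Rows[0]["Name"];

// Typed: members generated from the schema, checked by the compiler
string name2 = customersDataSet.Customers[0].Name;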
{ "language": "en", "url": "https://stackoverflow.com/questions/53338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Crystal Report icons/toolbar not working when deployed on web server I have built a web page which contains a Crystal Report built using the Crystal libraries included in Visual Studio 2008. It 'works on my machine' but when deployed to the web server the icons (Export, Print etc) on the Crystal Report toolbar do not display or work. Just seeing the 'red X' where the button image should be and clicking does nothing. I have checked that the toolbar images are actually in the location being looked at on the web server: (C:/Inetpub/wwwroot/aspnet_client/system_web/2_0_50727/CrystalReportWebFormViewer4/images/toolbar/) They are all there. I have checked the permissions on the above mentioned folder on the web server. Gave 'full control' to every user just to test it. I have also installed/run the 'CRRedist2008_x86.msi' on the web server. Some people have mentioned ensuring that the 'crystalreportviewers115' folder is added to my '\wwwroot\aspnet_client\system_web\2_0_50727' folder on the web server but I have been unable to find the 'crystalreportviewers115' folder to copy it. Appreciate any help or ideas you may be able to offer. Update - OK, so obviously I hadn't checked well enough that the images were in the correct location. A: Doh! Someone else here at work figured this out. It was really simple and I should have been able to sort it, but hey, that's how it goes sometimes. Here's the fix: On the web server, copy the 'aspnet_client' folder from 'C:\Inetpub\wwwroot' to 'C:\Inetpub\wwwroot\your-website-name'. That's all we did and it's now working. Hope this saves someone from all the fuss I just went through. A: Another solution is to simply create a new virtual directory in your web site and point it to "C:/Inetpub/wwwroot/aspnet_client" A: Try this: On the web server, copy the 'aspnet_client' folder from 'C:\Inetpub\wwwroot' and paste it inside your website folder (where the forms folder, app_data folder, etc. will be). A: I took over maintaining some code produced by another developer who had left, and suffered this issue too. In my case the compiled report was looking for the images in the crystalreportviewers115 folder, which existed in my local development path and therefore worked locally. The only folder on the target server was CrystalReportWebFormViewer4 (I assume from a previous server installation or site deployment). Simply adding the ...115 folder sorted the problem out for me. The root cause for us would appear to be the version of Crystal installed on the developer's machine. Not sure that helps anyone but thought I'd mention it! A: Upload the aspnet_client folder from the c:\inetpub\wwwroot folder of your local computer to the httpdocs folder of your web hosting server. Good luck!!!
{ "language": "en", "url": "https://stackoverflow.com/questions/53347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Best way to hide DB connection code in PHP5 for apps that only require one connection? Below I present three options for simplifying my database access when only a single connection is involved (this is often the case for the web apps I work on). The general idea is to make the DB connection transparent, such that it connects the first time my script executes a query, and then it remains connected until the script terminates. I'd like to know which one you think is the best and why. I don't know the names of any design patterns that these might fit so sorry for not using them. And if there's any better way of doing this with PHP5, please share. To give a brief introduction: there is a DB_Connection class containing a query method. This is a third-party class which is out of my control and whose interface I've simplified for the purpose of this example. In each option I've also provided an example model for an imaginary DB "items" table to give some context. Option 3 is the one that provides me with the interface I like most, but I don't think it's practical unfortunately. I've described the pros and cons (that I can see) of each in the comment blocks below. At the moment I lean towards Option 1 since the burden is put on my DB wrapper class instead of on the models. All comments appreciated! Note: For some reason, the Stack Overflow preview is showing an encoded HTML entity instead of underscores. If the post comes through like that, please take this into account. <?php /** * This is the 3rd-party DB interface I'm trying to wrap. * I've simplified the interface to one method for this example. * * This class is used in each option below. */ class DB_Connection { public function &query($sql) { } } /** * OPTION 1 * * Cons: Have to wrap every public DB_Connection method. * Pros: The model code is simple. */ class DB { private static $connection; private static function &getConnection() { if (!self::$connection) { self::$connection = new DB_Connection(); } return self::$connection; } public static function &query($sql) { $dbh = self::getConnection(); return $dbh->query($sql); } } class Item { public static function &getList() { return DB::query("SELECT * FROM items"); } } /** * OPTION 2 * * Pros: Don't have to wrap every DB_Connection function like in Option 1 * Cons: Every function in the model is responsible for checking the connection */ class DB { protected static $connection = null; public function connect() { self::$connection = new DB_Connection(); } } class Item extends DB { public static function &getList() { if (!self::$connection) $this->connect(); return self::$connection->query("SELECT * FROM items"); } } /** * OPTION 3 * * Use magic methods * * Pros: Simple model code AND don't have to reimplement the DB_Connection interface * Cons: __callStatic requires PHP 5.3.0 and its args can't be passed-by-reference. */ class DB { private static $connection = null; public static function &getConnection() { if (!self::$connection) { self::$connection = new DB_Connection(); } return self::$connection; } public static function __callStatic($name, $args) { if (in_array($name, get_class_methods('DB_Connection'))) { return call_user_func_array( array(self::getConnection(), $name), $args); } } } A: Based on your examples above, I'd say option 1 is the best - simplicity always wins, and you can handle a failed connection differently depending on the method (you might want to fail differently for a stored procedure call than a simple SELECT, for instance). 
A: Semantically speaking I think option 1 makes the most sense, if you're treating DB as a resource then the DB_Connectioin is an object that it uses but not necessarily the object itself. However, several things I caution you against. First, don't make your DB class have all static methods as it will strongly impact your ability to test your code. Consider instead a very simple inversion of control container like this: class DB { private $connection; public function &query($sql) { return $connection->query($sql); } public __construct(&$db_connection) { $this->connection = $db_connection; } } class Item { public function &getList() { return ResourceManager::getDB()->query("SELECT * FROM items"); } } class ResourceManager { private $db_connection; private function &getDbConnection() { if (!$this->connection) { $this->connection = new DB_Connection(); } return $this->connection; } private $db; public static function getDB() { if(!$this->db) $this->db = new DB(getDbConnection()); return $this->db; } There are significant benefits. If you don't want DB to be used as a singleton you just make one modification to ResourceManager. If you decide it should not be a singleton - you make the modification in one place. If you want to return a different instance of DB based on some context - again, the change is in only one place. Now if you want to test Item in isolation of DB simply create a setDb($db) method in ResourceManager and use it to set a fake/mock database (simplemock will serve you well in that respect). Second - and this is another modification that this design eases - you might not want to keep your database connection open the entire time, it can end up using far more resources than need be. Finally, as you mention that DB_Connection has other methods not shown, it seems like the it might be being used for more than simply maintaining a connection. Since you say you have no control over it, might I recommend extracting an interface from it of the methods that you DO care about and making a MyDBConnection extends DB_Connection class that implements your interface. In my experience something like that will ultimately ease some pain as well.
{ "language": "en", "url": "https://stackoverflow.com/questions/53353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do you create a shortcut to a directory so that it opens in explorer Better yet, how can I make My Computer always open in Explorer as well? I usually make a shortcut to my programming directories on my quick launch bar, but I'd love for them to open in Explorer. A: explorer -d c:\path A: I use explorer /e,c:\path. @harpo explorer -d c:\path does not work for me (WinXP sp3). A: Have you considered the win+e hotkey? It isn't quite what you want, but might be close enough. A: This is a good reference: http://support.microsoft.com/kb/130510 i.e.: explorer /e,%HOMEDRIVE%%HOMEPATH%
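If you are building a quick-launch shortcut, the switches can be combined in the shortcut's Target field; /root additionally scopes the window so you can't navigate above the folder (the path here is just an example):

explorer.exe /e,/root,C:\dev\projects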
{ "language": "en", "url": "https://stackoverflow.com/questions/53355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Printing in Adobe AIR - Standalone PDF Generation Is it possible to generate PDF Documents in an Adobe AIR application without resorting to a round trip web service for generating the PDF? I've looked at the initial Flex Reports on GoogleCode but it requires a round trip for generating the actual PDF. Given that AIR is supposed to be the Desktop end for RIAs, is there a way to accomplish this? I suspect I am overlooking something, but my searches through the documentation don't reveal too much, and given the target for AIR I can't believe that it's just something they didn't include. A: There's AlivePDF, which is a PDF generation library for ActionScript that should work; it was made just for the situation you describe. A: Just added an Adobe AIR + JavaScript + AlivePDF demo. This demo doesn't require Flex and is pretty straightforward. http://www.drybydesign.com/2010/02/26/adobe-air-alivepdf-without-flex/ A: One of the other teams where I work is working on a Flex-based drawing application and they were totally surprised that AIR / Flex does not have PDF authoring built-in. They ended up rolling their own simple PDF creator based on the PDF specification. A: Yes, it is very easy to create PDFs using AlivePDF. Here is some sample code; the first method creates a PDF and the second saves it to disk and returns the path. Feel free to ask any questions.
public function createFlexPdf() : String
{
    pdf = new PDF();
    pdf.setDisplayMode(Display.FULL_WIDTH, Layout.ONE_COLUMN, Mode.FIT_TO_PAGE, 0.96);
    pdf.setViewerPreferences(ToolBar.SHOW, MenuBar.HIDE, WindowUI.SHOW, FitWindow.RESIZED, CenterWindow.CENTERED);
    pdf.addPage();
    var myFontStyle:IFont = new CoreFont(FontFamily.COURIER);
    pdf.setFont(myFontStyle, 10);
    pdf.addText('Kamran Aslam', 10, 20); // String, X-Coord, Y-Coord
    return savePDF();
}

private function savePDF():String
{
    var fileStream:FileStream = new FileStream();
    var file:File = File.createTempDirectory();
    file = file.resolvePath("temp.pdf");
    fileStream.open(file, FileMode.WRITE);
    var bytes:ByteArray = pdf.save(Method.LOCAL);
    fileStream.writeBytes(bytes);
    fileStream.close();
    return file.url;
}
{ "language": "en", "url": "https://stackoverflow.com/questions/53364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Getting hibernate to log clob parameters (see here for the problem I'm trying to solve) How do you get hibernate to log clob values it's going to insert? It is logging other value types, such as Integer etc. I have the following in my log4j config:
log4j.logger.net.sf.hibernate.SQL=DEBUG
log4j.logger.org.hibernate.SQL=DEBUG
log4j.logger.net.sf.hibernate.type=DEBUG
log4j.logger.org.hibernate.type=DEBUG
Which produces output such as:
(org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?, ?, ?)
(org.hibernate.type.LongType) binding '170650' to parameter: 1
(org.hibernate.type.IntegerType) binding '0' to parameter: 2
(org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?, ?, ?)
(org.hibernate.type.LongType) binding '170650' to parameter: 1
(org.hibernate.type.IntegerType) binding '1' to parameter: 2
However you'll note that it never displays parameter: 3, which is our clob. What I would really want is something like:
(org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?, ?, ?)
(org.hibernate.type.LongType) binding '170650' to parameter: 1
(org.hibernate.type.IntegerType) binding '0' to parameter: 2
(org.hibernate.type.ClobType) binding 'something' to parameter: 3
(org.hibernate.SQL) insert into NoteSubstitutions (note, listIndex, substitution) values (?, ?, ?)
(org.hibernate.type.LongType) binding '170650' to parameter: 1
(org.hibernate.type.IntegerType) binding '1' to parameter: 2
(org.hibernate.type.ClobType) binding 'something else' to parameter: 3
How do I get it to show this in the log? A: Try using:
log4j.logger.net.sf.hibernate=DEBUG
log4j.logger.org.hibernate=DEBUG
That's the finest level you'll get. If it does not show the information you want, then it's not possible. A: Well, it looks like you can't. (Thanks Marcio for the suggestion, but sadly that didn't add anything useful.) A: Try to set log4j.logger.org.hibernate.type=TRACE and see if that helps.
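Putting the last answer together with the original config, the combination to try would be (TRACE is the level at which Hibernate logs parameter binding in the most detail, at least in versions where Lob values are logged at all):

log4j.logger.org.hibernate.SQL=DEBUG
log4j.logger.org.hibernate.type=TRACE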
{ "language": "en", "url": "https://stackoverflow.com/questions/53365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Where are people getting that rotating loading image? I keep running across this loading image http://georgia.ubuntuforums.com/images/misc/lightbox_progress.gif which seems to have entered into existence in the last 18 months. All of a sudden it is in every application and on every web site. Not wanting to be left out, is there somewhere I can get this logo, perhaps with a transparent background? Also, where did it come from? A: I believe the animation came from the Mac OS X loading screen. Here's a similar one with a transparent background: http://homepage.mac.com/xraydoc/.Pictures/spinner.gif A: You can get many different AJAX loading animations in any colour you want here: ajaxload.info A: I think it's just a general extension to the normal clock-face style loading icon. The Firefox throbber is the first example of that style that I remember coming across; the only real difference between that and the current trend of straight lines is that the constituent symbols have been stretched to give a crisper look, moving back to more of a many-handed clock emblem.
{ "language": "en", "url": "https://stackoverflow.com/questions/53370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: using DBMS_APPLICATION_INFO with Jboss Does anyone have examples of how to use the DBMS_APPLICATION_INFO package with JBOSS? We have various applications which run within JBOSS and share db pools. I would like these applications, at the start of each session, to identify themselves to the database using DBMS_APPLICATION_INFO so I can more easily track which sections of the application are causing database issues. I'm not too familiar with session life cycles in JBOSS, but at the end of the day, what needs to happen is that at the start and end of a transaction, this package needs to be called. Has anyone done this before? A: If you are using JBoss, you can use a "valid-connection-checker". This class is normally used to check the validity of the Connection. But, as it will be invoked every time the Connection pool gives the user a Connection, you can use it to set the DBMS_APPLICATION_INFO. You declare such a class in the oracle-ds.xml like this:
<local-tx-datasource>
    <jndi-name>jdbc/myDS</jndi-name>
    <connection-url>jdbc:oracle:thin:@10.10.1.15:1521:SID</connection-url>
    <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    <security-domain>MyEncryptDBPassword</security-domain>
    <valid-connection-checker-class-name>test.MyValidConn</valid-connection-checker-class-name>
    <metadata>
        <type-mapping>Oracle9i</type-mapping>
    </metadata>
</local-tx-datasource>
Your class must implement the org.jboss.resource.adapter.jdbc.ValidConnectionChecker interface. If you use Maven, you can include this interface with the following dependency:
<dependency>
    <groupId>jboss</groupId>
    <artifactId>jboss-common-jdbc-wrapper</artifactId>
    <version>3.2.3</version>
    <scope>provided</scope>
</dependency>
This interface has only one method: isValidConnection. I copy my implementation:
public SQLException isValidConnection(Connection arg0) {
    CallableStatement statement;
    try {
        statement = arg0.prepareCall("call dbms_application_info.set_client_info('" + getInfos() + "')");
        statement.execute();
    } catch (SQLException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    return null;
}
Hope it helps! Benoît A: Yes, you can write a wrapper class around your connection pool, and a wrapper around the connection. So let's say you have:
OracleConnection conn = connectionPool.getConnection("java:scott@mydb");
Change it to:
public class LoggingConnectionPool extends ConnectionPool {
    public OracleConnection getConnection(String datasourceName, String module, String action) {
        OracleConnection conn = getConnection(datasourceName);
        CallableStatement call = conn.prepareCall("begin dbms_application_info.set_module(module_name => ?, action_name => ?); end;");
        try {
            call.setString(1, module);
            call.setString(2, action);
            call.execute();
        } finally {
            call.close();
        }
        return new WrappedOracleConnection(conn);
    }
}
Note the use of WrappedOracleConnection above. You need this because you need to trap the close call:
public class WrappedOracleConnection extends OracleConnection {
    public void close() {
        CallableStatement call = this.prepareCall("begin dbms_application_info.set_module(module_name => ?, action_name => ?); end;");
        try {
            call.setNull(1, Types.VARCHAR);
            call.setNull(2, Types.VARCHAR);
            call.execute();
        } finally {
            call.close();
        }
    }
    // and you need to implement every other method
    // for example
    public CallableStatement prepareCall(String command) {
        return super.prepareCall(command);
    }
    ...
}
Hope this helps; I do something similar on a development server to catch connections that are not closed (not returned to the pool).
A: In your -ds.xml, you can set a connection property called v$session.program and the value of that property will populate the PROGRAM column of each session in the V$SESSION view created for connections originating from your connection pool. I usually set it to the jboss.server.name property. See here for an example.
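If all you need is the PROGRAM column, the connection-property approach from the last answer looks like this in the *-ds.xml (the value is whatever label you want to see in V$SESSION; shown here as a placeholder):

<local-tx-datasource>
    <jndi-name>jdbc/myDS</jndi-name>
    <connection-url>jdbc:oracle:thin:@10.10.1.15:1521:SID</connection-url>
    <driver-class>oracle.jdbc.driver.OracleDriver</driver-class>
    <connection-property name="v$session.program">MyJBossApp</connection-property>
</local-tx-datasource>

The Oracle thin driver accepts v$session.osuser and v$session.machine the same way.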
{ "language": "en", "url": "https://stackoverflow.com/questions/53379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Some Tomcat webapps not opening I downloaded a couple of webapps and placed them in my /webapps folder. Some of them I could open by going to http://localhost:8080/app1 and it would open. However, some others I would do the exact same thing and go to http://localhost:8080/app2 and it will display "HTTP Status 404 - /app2/", even though I am sure it is there. I've checked that it contains a WEB-INF folder just like app1, and I've even restarted Tomcat to be sure. My question is: is there anything (perhaps in the web.xml file) that specifies what the URL has to be to start the webapp? Or is it simply just http://localhost:8080/<folder name>? P.S. If you want to know exactly what app1 and app2 I am referring to: app1 (works) = http://assets.devx.com/sourcecode/11237.zip app2 (doesn't work) = http://www.laliluna.de/download/eclipse-spring-jdbc-tutorial.zip I've tried a few others as well; some work, some don't. I'm just wondering if I'm missing something. A: I usually debug this by going to the manager page and making sure that all of the contexts are deployed (http://localhost:8080/manager/html). It sounds like app2 has not been deployed properly or is not starting up because of some other error. I would look at the logs. There may be a bunch of information in there, but usually it explains what is broken. A: The first zip file you mention has a .war file as part of the zip. The second one is just the source code and it needs to be built into a .war file. It looks like it is set up to have that done in Eclipse. Try the File >> Export option and select War file as the export type. A: The second app (the directory named WebRoot) can also be deployed correctly, but you get a 404 by going to it because there is no "index.jsp" or "index.html" file in the root directory. Try putting a file there with any of those names, and the 404 is gone. A servlet mapping in the web.xml is not strictly necessary for this to work. A: The second requires the Spring framework. The only runnable things I could find were a client in eclipse-spring-jdbc-tutorial.zip\SpringJdbc\src\test\de\laliluna\library\TestClient.java and one in eclipse-spring-jdbc-tutorial.zip\SpringJdbc\src\de\laliluna\library\sample\MyApplication.java. If you open it in Eclipse (it is an Eclipse project) and compile, provided the Spring framework is installed, you should be able to run both. A: Are you familiar with log4j? Spring puts a lot of often-useful information into the logs created via log4j. When I have a SpringMVC application that won't start up correctly or otherwise isn't running, I check my log4j output and potentially turn up the Spring log level to INFO or even DEBUG. A: If "/" is not accessible it means that there is no "index.html", "index.jsp" or whatever is defined in the welcome-file list of the web.xml. Also, no servlet mapping for the context root directory is present. Check the web.xml for servlet mappings or try to figure out the name of the jsp/html/... file in the context root.
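For the welcome-file fix, the relevant web.xml fragment looks like this (standard servlet spec; list whichever files your app actually ships):

<welcome-file-list>
    <welcome-file>index.jsp</welcome-file>
    <welcome-file>index.html</welcome-file>
</welcome-file-list>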
{ "language": "en", "url": "https://stackoverflow.com/questions/53387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Centralizing/controlling arbitrary builds of .NET projects and solutions Over the years I have created and tweaked a set of NAnt scripts to perform complete project builds. The main script takes a single application end point (a web application project for example) and does a complete, from source control, build of it. The scripts are preconfigured with the necessary information regarding build output locations, source control addresses, etc. The main point is that you can feed it very little information and build a given project from the ground up. This satisfies the "arbitrary" part of my question. In the past I have worked for companies that produce a few software products (mostly web applications). This environment lends itself very well to a typical continuous integration setup where there is an integrator for each product. I have set up integrators to serve as both CI builds as well as integrators to handle a complete release candidate build and QA deployment. These integrators use the master build scripts, so the integrators themselves are very little more than source control monitoring and a call to the master NAnt script. I now work for a development group that creates many applications. Often, developers are called on to support applications originally built by others. When I started there was no build management in place. I am in a particularly unique position within the group as the lead developer of a 4-person team for one business unit's product suite (around a half dozen complete systems). I have implemented CruiseControl.Net with the master build scripts for doing both CI builds as well as RC builds. This works fine for the fixed set of projects within the business' product suite. I have been using CCNet for many years now so I'm fully aware of what it can do. I have great respect for its contribution to the continuous integration arena as I use it for all the projects in my suite of products. I have stressed to my team the use of the official RC build integrator as the master builder for anything destined for any location other than development. This provides great control over the fixed set of projects that are under CCNet's control. However, there are other developers building other applications. Some of these are one-developer projects that are often not even in source control until well into the project life cycle (something else I'm trying to change). Many of these projects are one-offs that won't have much of a life in development after they have been deployed. Despite that, they'll still need to be supported. Integral to supporting those is the fact that without centralized build management of these projects, the release candidate builds that go to QA and eventually production are left to be done on individual developer machines. This, of course, provides zero guarantee that everything is in source control, among the other factors of a developer-machine build. The issue I've been trying to solve is: what kind of system can I use to provide centralized control over these sorts of arbitrary builds? This is definitely not a unique problem. However, in much of the reading I have done about centralized builds, build automation and continuous integration, the focus is on fixed projects/products and the task of supporting continued development on them. What types of process are used by businesses that are doing development on new projects constantly? Are they not using these types of processes? While the master build scripts do live on the build server, they are clumsy to use.
Also, I'd prefer to limit console access to the build server, so some management system is required to provide easier access to firing off arbitrary builds on a central system. I realize that what I'm looking for may lie under the covers of MS Team Build. Unfortunately, whenever I start reading about it, I get that quicksand feeling when I start getting into the MS marketing material and quickly lose my way, never really finding out if what I want to do can be done with it. Plus, the licensing costs have been addressed as a likely show-stopper in some past general discussions on the topic of Team Foundation Server and Team System. I'm eager to hear from anyone who has solved this problem who might offer suggestions. I have done some work on a centralized build system based around my master "build-any-project" build scripts. However, what I have is in its infancy and has been constructed to support mainly just the types of projects that I work on. It lacks the kind of support required at this point to handle many application types or the plethora of project/solution configurations that are possible with Visual Studio.
{ "language": "en", "url": "https://stackoverflow.com/questions/53391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Suggestions wanted with Lists or Enumerators of T when inheriting from generic classes

I know the answer is not going to be simple, and I already use a couple of (I think ugly) cludges. I am simply looking for some elegant answers.

Abstract class:

public interface IOtherObjects { }

public abstract class MyObjects<T> where T : IOtherObjects
{
    ...

    public List<T> ToList()
    {
        ...
    }
}

Children:

public class MyObjectsA : MyObjects<OtherObjectA> //(where OtherObjectA implements IOtherObjects)
{
}

public class MyObjectsB : MyObjects<OtherObjectB> //(where OtherObjectB implements IOtherObjects)
{
}

Is it possible, looping through a collection of MyObjects (or other similar grouping, generic or otherwise), to then utilise the ToList method of the MyObjects base class, given that we do not specifically know the type of T at this point?

EDIT As for specific examples: whenever this has come up, I've thought about it for a while and done something different instead, so there is no current requirement. But as it has come up quite frequently, I thought I would float it.

EDIT @Sara, it's not the specific type of the collection I care about (it could be a List), but still the ToList method of each instance is relatively unusable without an anonymous type.

@aku, true, and this question may be relatively hypothetical; however, being able to retrieve and work with a list of T of objects, knowing only their base type, would be very useful. Having ToList return a List of the base type has been one of my workarounds.

EDIT @ all: So far, this has been the sort of discussion I was hoping for, though it largely confirms all I suspected. Thanks all so far, but anyone else, feel free to input.

EDIT @Rob, yes it works for a defined type, but not when the type is only known as a List of IOtherObjects.

@Rob Again Thanks. That has usually been my cludgy workaround (no disrespect :) ). Either that or using the ConvertAll function to downcast through a delegate. Thanks for taking the time to understand the problem.

QUALIFYING EDIT in case I have been a little confusing

To be more precise (I may have let my latest implementation of this get too complex): let's say I have 2 object types, B and C, inheriting from class A. Many scenarios have presented themselves where, from a List of B, or a List of C, or in other cases a List of either (but I don't know which when I am working at the base class level), I have needed a less specific List of A.

The above example was a watered-down example of the List Of Less Specific problem's latest incarnation. Usually it has presented itself as I think through possible scenarios that limit the amount of code that needs writing and seem a little more elegant than other options. I really wanted a discussion of possibilities and other points of view, which I have more or less got. I am surprised no one has mentioned ConvertAll() so far, as that is another workaround I have used, but a little too verbose for the scenarios at hand.

@Rob Yet Again and Sara

Thanks; however, I do feel I understand generics in all their static-contexted glory, and did understand the issues at play here. The actual design of our system and its usage of generics (and I can say this with only a touch of bias, as I was only one of the players in the design) has been done well.
It is when I have been working with the core API that I have found situations where I was in the wrong scope for doing something simply; instead I had to deal with them a little less elegantly than I'd like (trying either to be clever or perhaps lazy; I'll accept either of those labels). My distaste for what I termed a cludge is largely that we are required to loop through our record set simply to convert the objects to their base type, which may be a performance hit. I guess I was wondering if anyone else had come across this in their coding before, and if anyone had been cleverer, or at least more elegant, than me in dealing with it.

A: In your case MyObjectsA and MyObjectsB don't have a common predecessor. A generic class is a template for different classes, not a common base class. If you want to have common properties in different classes, use interfaces. You can't call ToList in a loop because it has a different signature in different classes. You can create a ToList that returns objects rather than a specific type.

A: Why do you have a collection of MyObjects? Is there a specific reason you don't have a List?

A: You can still probably access the ToList() method, but since you are unsure of the type, won't this work?

foreach (var myObject in myObjectsList)
    foreach (var obj in myObject.ToList())
    {
        // do something
    }

Of course this will only work on C# 3.0. Note that the use of var is merely to remove the requirement of knowing what type the lists contain; as opposed to Frank's comments that I have delusions that var will make typing dynamic.

A: OK, I am confused; the following code works fine for me (curiosity got the better of me!):

// Original code snipped for brevity - see edit history if required.

Or have I missed something?

Update Following Response from OP

OK, now I am really confused. What you are saying is that you want to get a List of typed values from a generic/abstract List? (The child classes therefore become irrelevant.) You cannot return a typed List if the types are children/interface implementors - they do not match! You can of course get a List of items that are of a specific type from the abstract List like so:

public List<OfType> TypedList<OfType>() where OfType : IOtherObjects
{
    List<OfType> rtn = new List<OfType>();
    foreach (IOtherObjects o in _objects)
    {
        Type objType = o.GetType();
        Type reqType = typeof(OfType);
        if (objType == reqType)
            rtn.Add((OfType)o);
    }
    return rtn;
}

If I am still off-base here, can you please reword your question?! (It doesn't seem like I am the only one unsure of what you are driving at.) I am trying to establish if there is a misunderstanding of generics on your part.

Another Update :D

Right, so it looks like you want/need the option to get the typed List, or the base list, yes? This would make your abstract class look like this - you can use ToList to get the concrete type, or ToBaseList() to get a List of the interface type. This should work in any scenarios you have. Does that help?

public abstract class MyObjects<T> where T : IOtherObjects
{
    List<T> _objects = new List<T>();

    public List<T> ToList()
    {
        return _objects;
    }

    public List<IOtherObjects> ToBaseList()
    {
        List<IOtherObjects> rtn = new List<IOtherObjects>();
        foreach (IOtherObjects o in _objects)
        {
            rtn.Add(o);
        }
        return rtn;
    }
}

Update #3

It's not really a "cludgy" workaround (no disrespect taken) - that's the only way to do it.. I think the bigger issue here is a design/grok problem. You said you had a problem, this code solves it.
But if you were expecting to do something like:

public abstract class MyObjects<T> where T : IOtherObjects
{
    List<T> _objects = new List<T>();

    public List<IOtherObjects> Objects
    {
        get { return _objects; }
    }
}
#warning This won't compile, it's for demo's sake.

And be able to pick and choose the types that come out of it, how else could you do it?! I get the feeling you do not really understand what the point of generics is, and you are trying to get them to do something they are not designed for!?

A: If you have

class B : A
class C : A

and you have

List<B> listB;
List<C> listC;

that you wish to treat as a List of the parent type, then you should use:

List<A> listA = listB.Cast<A>().Concat(listC.Cast<A>()).ToList();

A: I have recently found the List<A>.Cast<B>().ToList<B>() pattern. It does exactly what I was looking for.

A: Generics are used for compile-time type checks, not runtime dispatch. Use inheritance/interfaces for runtime dispatch; use generics for compile-time type guarantees.

interface IMyObjects : IEnumerable<IOtherObjects> { }

abstract class MyObjects<T> : IMyObjects where T : IOtherObjects { }

IEnumerable<IMyObjects> objs = ...;
foreach (IMyObjects mo in objs)
{
    foreach (IOtherObjects oo in mo)
    {
        Console.WriteLine(oo);
    }
}

(Obviously, I prefer Enumerables over Lists.)

OR

Just use a proper dynamic language like VB. :-)
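To make the Cast/Concat suggestion concrete, here is a small self-contained sketch (it requires .NET 3.5 for LINQ; the OtherObjectA/OtherObjectB types are invented purely for illustration):

using System;
using System.Collections.Generic;
using System.Linq;

interface IOtherObjects { string Name { get; } }

class OtherObjectA : IOtherObjects { public string Name { get { return "from A"; } } }
class OtherObjectB : IOtherObjects { public string Name { get { return "from B"; } } }

class Program
{
    static void Main()
    {
        List<OtherObjectA> listA = new List<OtherObjectA> { new OtherObjectA() };
        List<OtherObjectB> listB = new List<OtherObjectB> { new OtherObjectB() };

        // Cast each typed list up to the shared interface, then merge them
        // into the single "less specific" list the question asks for.
        List<IOtherObjects> merged = listA.Cast<IOtherObjects>()
                                          .Concat(listB.Cast<IOtherObjects>())
                                          .ToList();

        foreach (IOtherObjects o in merged)
            Console.WriteLine(o.Name);
    }
}

Note that this still loops over the elements under the hood, so it doesn't remove the performance concern raised in the question; it just hides the loop behind a tidy one-liner.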
{ "language": "en", "url": "https://stackoverflow.com/questions/53395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Managed OleDB provider written in C#

An OleDB provider is a binary implementing COM interfaces provided by Microsoft. From that, it seems to be possible to create a provider using C#. Is that correct? Is there a sample demonstrating that? If not, would you discourage me from doing that? I see that there are multiple unmanaged samples, but I can't find any managed ones.

A: The article is good, but doesn't actually answer the question. OLEDB is a set of COM interfaces that could in fact be implemented in .Net via COM Interop, though I've never heard of such an implementation, and it probably isn't advisable. The set of OLEDB interfaces is documented by Microsoft here. OLEDB is a complicated topic, and not all interfaces are required to implement a functional provider. To make things worse, different OLEDB clients differ in the set of interfaces they require to be able to use a provider. For example, here is a list of required interfaces that must be implemented to use a provider from the .Net OLEDB client (System.Data.OleDb.*). Note: I didn't find such a link for the 2.0 Framework or later. Finally, it's worth noting that it was so difficult to implement providers that Microsoft later provided a set of ATL templates (C++) to help implementers do it correctly. To learn more about OLEDB, I'd definitely recommend looking at the Windows Data Access SDK on MSDN.

A: I am not sure I really understand your question?! There already is a managed OleDB provider?!

using System.Data.OleDb;

I would certainly discourage writing a provider that exists and works absolutely fine! :) But in answer to your first question, you can of course create your own. The Data Provider Roadmap may be a good place to start for an overview and links to samples etc.
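For context, the COM Interop plumbing that any such attempt would start from looks roughly like this. Note that this only exposes a managed class to COM (registered with regasm.exe); it is nowhere near an actual OLEDB provider, and the GUIDs are placeholders you would regenerate:

using System;
using System.Runtime.InteropServices;

[ComVisible(true)]
[Guid("A1D4E5F6-7890-4ABC-8DEF-123456789ABC")] // placeholder GUID
public interface IManagedComObject
{
    string Ping();
}

// Exposing a managed class to COM: build, then register with regasm.exe.
[ComVisible(true)]
[Guid("B7E2C3A1-4F6D-4E2A-9C1B-2D8F0A7E5C91")] // placeholder GUID, generate your own
[ClassInterface(ClassInterfaceType.None)]
public class ManagedComObject : IManagedComObject
{
    public string Ping()
    {
        return "pong";
    }
}

A real provider would mean declaring each OLEDB COM interface (IDBInitialize, IRowset, and so on) with its documented IID and implementing it in this style, which is exactly the tedium the ATL templates mentioned above were created to hide.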
{ "language": "en", "url": "https://stackoverflow.com/questions/53404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: wav <> mp3 for flash(as3)

I'm wondering about MP3 decoding/encoding, and I was hoping to pull this off in Flash using AS3. I'm sure it'll be a right pain... I have no idea where to start; can anyone offer any pointers? Reference material?

----much later---

Thank you all very much for your input... It seems I have a long road ahead of me yet!

A: You could also theoretically do this as a PixelBender filter, and should get significantly better performance than using a pure ActionScript 3 implementation. More info on PixelBender here:

http://labs.adobe.com/wiki/index.php/Pixel_Bender_Toolkit

mike chambers
[email protected]

A: This would help: http://labs.adobe.com/technologies/alchemy/

A: See LAME MP3 Encoder. You can check out their source code and their link page. Mpeg.org should have documents too.

A: I've got a project converting WAV files (actually Asterisk voice mails) into MP3s. If I remember correctly, there are some oddities about LAME's license, so I've downloaded and compiled first LAME, then SoX by hand. I have a web process written in PHP to actually convert the files from WAV to MP3 on the web server's local file system (actually PHP is just supervising the command-line sox tool via exec()). Then I attach all the metadata the MP3 needs using the PEAR Mp3_Id package. Then I move the newly constructed MP3 file into a folder Apache is sharing, and point the outstanding SoundManager2 flash-based MP3 player at it. For small transactions this works very well -- converting a minute or two of voice mail does not add any appreciable lag to actually rendering and returning the rest of the page. As I get more users on a single server, it will probably eventually become necessary to write a cron job or something to do the conversion before the user actually asks for the file the first time.

A: It's going to be VERY slow doing this in AS3. You really need a C/C++ implementation if you care at all about how long it will take.

A: Andre Michelle and the Hobnox guys pulled off something similar with their Hobnox AudioTool; they ported a Java Vorbis encoder to AS3. They supposedly ended up with encoding taking twice the time of the audio duration. Don't know what your use case is, but in the Hobnox tool audio is apparently created on the client side, encoded as Vorbis, sent to the server, converted to MP3 and stored in the user's library.
{ "language": "en", "url": "https://stackoverflow.com/questions/53411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: NHibernate or LINQ to SQL

If starting a new project, what would you use for your ORM: NHibernate or LINQ, and why? What are the pros and cons of each?

edit: LINQ to SQL not just LINQ (thanks @Jon Limjap)

A: Starting with NHibernate is a bad idea. It shows good performance only when configured ably. Try EFv4 for large projects and L2S (or maybe 3rd-party products) for small and medium sizes. These products are more convenient and flexible than NHibernate and allow you to start quickly.

A: Not a complete list:

LinqToSQL

Pro:

* better tool support
* good LINQ provider
* easy to start with when db-schema == classes

Con:

* not flexible (i.e. db-schema != classes)
* only supports MS SQL Server
* no cascading (save, update... doesn't cascade to referenced objects)

NHibernate

Pro:

* a lot of RDBMSs supported out of the box
* feature-rich
* very flexible for almost all corner cases
* open source

Con:

* not so easy to start with
* not from MS
* there are many tools, but you have to search for them

Between the 2 ORMs I would choose LinqToSql if:

* db-schema == classes
* you only ever use MS SQL Server
* the shop only allows MS products

I would choose NHibernate if:

* you want a richer object model
* you have a legacy db-schema
* you use a DB other than MS SQL Server, or need to support multiple
* performance is critical (I think NH has more features to optimise performance than LinqToSql)

NOTE: this is my personal view. I deal mostly with (crazy) legacy DBs and complex ETL jobs, where the object model helps a lot over SQL.

A: I have asked myself a very similar question, except that instead of NHibernate I was thinking about WilsonORM, which I consider pretty nice. It seems to me that there are many important differences.

LINQ:

* is not a complete ORM tool (you can get there with some additional libraries like the latest Entity Framework - I personally consider the architecture of this latest technology from MS to be about 10 years old when compared with other ORM frameworks)
* is primarily a querying "language" supporting IntelliSense (the compiler will check the syntax of your query)
* is primarily used with Microsoft SQL Server
* is closed source

NHibernate:

* is an ORM tool
* has a pretty limited querying language without IntelliSense
* can be used with almost any DBMS for which you have a DB provider
* is open source

It really depends. If you develop a rich (Windows) desktop application where you need to construct objects, work with them, and at the end persist their changes, then I would recommend an ORM framework like NHibernate. If you develop a web application that usually just queries data and only occasionally writes some data back to the DB, then I would recommend a good querying language like LINQ. So as always, it depends. :-)

A: Errr... there's LINQ for NHibernate. Perhaps what you mean is which to use:

* LINQ to SQL
* NHibernate

I prefer NHibernate. LINQ to SQL is fairly lightweight, but it's a little bit more tightly coupled to your data structure, as opposed to NHibernate, which is pretty flexible in terms of the types of object definitions that can be mapped to your table structures. Of course that's not to say that LINQ to SQL has no uses: this very website uses it. I believe that it's quite useful to get up and running in small applications where the database schema is not as massive.

A: I don't use (or even know) NHibernate, I just want to give my testimony: I have been using LINQ to SQL for about 2 years with MySQL and PostgreSQL databases (using DbLinq on Windows, using Mono on Linux and Mac OS X). So LINQ to SQL is NOT limited to Microsoft products.
I can confirm that LINQ to SQL is very well suited for small and medium projects, or large projects where you have absolute control of the database structure. As the reviews indicate, LINQ to SQL has some limitations that make it an inappropriate tool when there is no direct mapping between the database tables and the entity classes.

Note: LINQ to SQL doesn't support many-to-many relationships (but this can be easily worked around with a few lines of code).

A: The main drawback of NHibernate is the inability to make use of method calls: they cannot be translated to SQL. To circumvent that, you have to recreate expression trees, which is difficult to do.
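Purely to illustrate the "db-schema == classes" coupling discussed in the lists above, here is a minimal LINQ to SQL sketch using attribute mapping. The Customers table, its columns and the connection string are hypothetical:

using System;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;

[Table(Name = "Customers")] // the class maps 1:1 onto a table
public class Customer
{
    [Column(IsPrimaryKey = true)]
    public int Id { get; set; }

    [Column]
    public string Name { get; set; }
}

class Program
{
    static void Main()
    {
        // Hypothetical connection string.
        using (var db = new DataContext(@"Data Source=.\SQLEXPRESS;Initial Catalog=Shop;Integrated Security=True"))
        {
            var customers = from c in db.GetTable<Customer>()
                            where c.Name.StartsWith("A")
                            select c;

            foreach (var c in customers)
                Console.WriteLine("{0}: {1}", c.Id, c.Name);
        }
    }
}

Changing the table means changing the class, which is exactly the inflexibility the NHibernate camp points at; NHibernate's external mapping files decouple the two, at the cost of more setup.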
{ "language": "en", "url": "https://stackoverflow.com/questions/53417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Memory leak detectors for C?

What memory leak detectors have people had a good experience with? Here is a summary of the answers so far:

Valgrind - Instrumentation framework for building dynamic analysis tools.

Electric Fence - A tool that works with GDB.

Splint - Annotation-Assisted Lightweight Static Checking.

Glow Code - A complete real-time performance and memory profiler for Windows and .NET programmers who develop applications with C++, C#, or any .NET Framework language.

Also see this stackoverflow post.

A: If you have the money: IBM Rational Purify is an extremely powerful industry-strength memory leak and memory corruption detector for C/C++. It exists for Windows, Solaris and Linux. If you're Linux-only and want a cheap solution, go for Valgrind.

A: Mudflap for gcc! It actually compiles the checks into the executable. Just add -fmudflap -lmudflap to your gcc flags.

A: I have had quite some hits with cppcheck, which does static analysis only. It is open source and has a command line interface (I did not use it in any other way).

A: Second the valgrind... and I'll add Electric Fence.

A: lint (a very similar open-source tool is called splint).

A: Painful, but if you had to use one.. I'd recommend the DevPartner BoundsChecker suite.. that's what people at my workplace use for this purpose. Paid and proprietary.. not freeware.

A: Also worth using if you're on Linux using glibc is the built-in debug heap code. To use it, link with -lmcheck or define (and export) the MALLOC_CHECK_ environment variable with the value 1, 2, or 3. The glibc manual provides more information. This mode is most useful for detecting double-frees, and it often finds writes outside the allocated memory area when doing a free. I don't think it reports leaked memory.

A: Valgrind under Linux is fairly good; I have no experience under Windows with this.

A: I've had minimal love for any memory leak detectors. Typically there are far too many false positives for them to be of any use. I would recommend these two as being the least intrusive:

GlowCode

Debug heap

A: For Win32 debugging of memory leaks I have had very good experiences with the plain old CRT Debug Heap, which comes as a lib with Visual C. In a Debug build, malloc (et al.) gets redefined as _malloc_dbg (et al.), and there are other calls to retrieve results, which are all undefined if _DEBUG is not set. It sets up all sorts of boundary guards on the heap, and allows you to display the results at any time. I had a few false positives when I was writing some time routines that messed with the library runtime allocations, until I discovered _CRT_BLOCK. I had to produce first DOS, then Win32 console and services that would run forever. As far as I know there are no memory leaks, and in at least one place the code ran for two years unattended before the monitor on the PC failed (though the PC was fine!).

A: On Windows, I have used Visual Leak Detector. It integrates with VC++, is easy to use (just include a header and set LIB to find the lib), open source, free to use FTW.

A: At university, when I was doing most things under Unix Solaris, I used gdb. However, I would go with valgrind under Linux.

A: The granddaddy of these tools is the commercial, closed-source Purify tool, which was sold to IBM and then to UNICOM. Parasoft's Insure++ (source code instrumentation) and valgrind (open source) are the two other real competitors.

Trivia: the original author of Purify, Reed Hastings, went on to found Netflix.

A: No one has mentioned clang's MSan, which is quite powerful.
It is officially supported on Linux only, though.

A: This question may be old, but I'll answer it anyway - maybe my answer will help someone find their memory leaks.

This is my own project, which I've released as open source code: https://sourceforge.net/projects/diagnostic/

Windows 32 & 64-bit platforms are supported; native and mixed-mode call stacks are supported. .NET garbage collection is not supported (C++/CLI's gcnew or C#'s new). It's a high-performance tool and does not require any integration (unless you really want to integrate it). The complete manual can be found here: http://diagnostic.sourceforge.net/index.html

Don't be put off by how many leaks it actually detects in your process; it catches memory leaks from the whole process. Analyze only the biggest leaks, not all of them.

A: I'll second valgrind as an external tool for memory leaks. But, for most of the problems I've had to solve, I've always used internally built tools. Sometimes the external tools have too much overhead or are too complicated to set up. Why use already-written code when you can write your own :) I joke, but sometimes you need something simple and it's faster to write it yourself. Usually I just replace calls to malloc() and free() with functions that keep better track of who allocates what. Most of my problems seem to be that someone forgot to free, and this helps to solve that problem. It really depends on where the leak is, and if you knew that, then you would not need any tools. But if you have some insight into where you think it's leaking, then put in your own instrumentation and see if it helps you.

A: Our CheckPointer tool can do this for GNU C 3/4, MS dialects of C, and GreenHills C. It can find memory management problems that Valgrind cannot. If your code simply leaks, on exit CheckPointer will tell you where all the unfreed memory was allocated.
{ "language": "en", "url": "https://stackoverflow.com/questions/53426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: What are some good Python ORM solutions?

I'm evaluating and looking at using CherryPy for a project that's basically a JavaScript front-end from the client-side (browser) that talks to a Python web service on the back-end. So, I really need something fast and lightweight on the back-end that I can implement using Python, which then speaks to the PostgreSQL DB via an ORM (JSON to the browser).

I'm also looking at Django, which I like, since its ORM is built-in. However, I think Django might be a little more than I really need (i.e. more features than I really need == slower?).

Anyone have any experience with different Python ORM solutions that can compare and contrast their features and functionality, speed, efficiency, etc.?

A: Storm has arguably the simplest API:

from storm.locals import *

class Foo:
    __storm_table__ = 'foos'
    id = Int(primary=True)

class Thing:
    __storm_table__ = 'things'
    id = Int(primary=True)
    name = Unicode()
    description = Unicode()
    foo_id = Int()
    foo = Reference(foo_id, Foo.id)

db = create_database('sqlite:')
store = Store(db)

foo = Foo()
store.add(foo)

thing = Thing()
thing.foo = foo
store.add(thing)
store.commit()

And it makes it painless to drop down into raw SQL when you need to:

store.execute('UPDATE bars SET bar_name=? WHERE bar_id like ?', [])
store.commit()

A: This seems to be the canonical reference point for high-level database interaction in Python: http://wiki.python.org/moin/HigherLevelDatabaseProgramming

From there, it looks like Dejavu implements Martin Fowler's DataMapper pattern fairly abstractly in Python.

A: I usually use SQLAlchemy. It's pretty powerful and is probably the most mature Python ORM. If you're planning on using CherryPy, you might also look into Dejavu, as it's by Robert Brewer (the guy that is the current CruiseControl... sorry, the current CherryPy project leader). I personally haven't used it, but I do know some people that love it. SQLObject is a little bit easier to use than SQLAlchemy, but it's not quite as powerful. Personally, I wouldn't use the Django ORM unless I was planning on writing the entire project in Django, but that's just me.

A: SQLAlchemy's declarative extension, which is becoming standard in 0.5, provides an all-in-one interface very much like that of Django or Storm. It also integrates seamlessly with classes/tables configured using the datamapper style:

Base = declarative_base()

class Foo(Base):
    __tablename__ = 'foos'
    id = Column(Integer, primary_key=True)

class Thing(Base):
    __tablename__ = 'things'
    id = Column(Integer, primary_key=True)
    name = Column(Unicode)
    description = Column(Unicode)
    foo_id = Column(Integer, ForeignKey('foos.id'))
    foo = relation(Foo)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)  # issues DDL to create tables

session = sessionmaker(bind=engine)()

foo = Foo()
session.add(foo)

thing = Thing(name='thing1', description='some thing')
thing.foo = foo  # also adds Thing to session

session.commit()

A: If you're looking for lightweight and are already familiar with django-style declarative models, check out peewee: https://github.com/coleifer/peewee

Example:

import datetime
from peewee import *

class Blog(Model):
    name = CharField()

class Entry(Model):
    blog = ForeignKeyField(Blog)
    title = CharField()
    body = TextField()
    pub_date = DateTimeField(default=datetime.datetime.now)

# query it like django
Entry.filter(blog__name='Some great blog')

# or programmatically for finer-grained control
Entry.select().join(Blog).where(Blog.name == 'Some awesome blog')

Check the docs for more examples.
A: SQLAlchemy is more full-featured and powerful (uses the DataMapper pattern). Django ORM has a cleaner syntax and is easier to write for (ActiveRecord pattern). I don't know about performance differences. SQLAlchemy also has a declarative layer that hides some complexity and gives it an ActiveRecord-style syntax more similar to the Django ORM. I wouldn't worry about Django being "too heavy." It's decoupled enough that you can use the ORM if you want without having to import the rest. That said, if I were already using CherryPy for the web layer and just needed an ORM, I'd probably opt for SQLAlchemy.

A: We use Elixir alongside SQLAlchemy and have liked it so far. Elixir puts a layer on top of SQLAlchemy that makes it look more like its "ActiveRecord pattern" counterparts.

A: I think you might look at:

Autumn

Storm

A: There is no conceivable way that the unused features in Django will give a performance penalty. They might just come in handy if you ever decide to upscale the project.

A: I used Storm + SQLite for a small project, and was pretty happy with it until I added multiprocessing. Trying to use the database from multiple processes resulted in a "Database is locked" exception. I switched to SQLAlchemy, and the same code worked with no problems.

A: I'd check out SQLAlchemy. It's really easy to use and the models you work with aren't bad at all. Django uses SQLAlchemy for its ORM but using it by itself lets you use its full power.

Here's a small example of creating and selecting ORM objects:

>>> ed_user = User('ed', 'Ed Jones', 'edspassword')
>>> session.add(ed_user)
>>> our_user = session.query(User).filter_by(name='ed').first()
>>> our_user
<User('ed','Ed Jones', 'edspassword')>

A: SQLAlchemy is very, very powerful. However, it is not thread safe; make sure you keep that in mind when working with CherryPy in thread-pool mode.
{ "language": "en", "url": "https://stackoverflow.com/questions/53428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "244" }
Q: Getting IIS Worker Process Crash dumps

I'm doing something bad in my ASP.NET app. It could be any number of the CTP libraries I'm using, or I'm just not disposing something properly. But when I redeploy my ASP.NET app to my Vista IIS7 install or my server's IIS6 install, I crash an IIS worker process. I've narrowed the problem down to my HTTP crawler, which is a multithreaded beast that crawls sites for useful information when asked to. After I start a crawler and redeploy the app over the top, rather than gracefully unloading the appDomain and reloading, an IIS worker process will crash (popping up a crash message) and continue reloading the app domain. When this crash happens, where can I find the crash dump for analysis?

A: Download Debugging Tools for Windows: http://www.microsoft.com/whdc/DevTools/Debugging/default.mspx

Debugging Tools for Windows has a script (ADPLUS) that allows you to create dumps when a process CRASHES: http://support.microsoft.com/kb/286350

The command should be something like this (if you are using IIS6):

cscript adplus.vbs -crash -pn w3wp.exe

This command will attach the debugger to the worker process. When the crash occurs, it will generate a dump (a *.DMP file).

You can open it in WinDBG (also included in the Debugging Tools for Windows): File > Open Crash dump...

By default, WinDBG will show you (next to the command line) the thread where the process crashed.

The first thing you need to do in WinDBG is to load the .NET Framework extensions:

.loadby sos mscorwks

Then, display the managed call stack:

!clrstack

If the thread was not running managed code, then you'll need to check the native stack:

kpn 200

This should give you some ideas. To continue troubleshooting, I recommend you read the following article: http://msdn.microsoft.com/en-us/library/ee817663.aspx

A: A quick search found IISState - it relies on the Windows debugging tools and needs to be running when a crash occurs, but given the circumstances you've described, this shouldn't be a problem.
{ "language": "en", "url": "https://stackoverflow.com/questions/53435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Is it possible to unit test a class that makes P/Invoke calls?

I want to wrap a piece of code that uses the Windows Impersonation API into a neat little helper class, and as usual, I'm looking for a way to go test-first. However, while WindowsIdentity is a managed class, the LogonUser call that is required to actually perform the logging in as another user is an unmanaged function in advapi32.dll. I think I can work around this by introducing an interface for my helper class to use and hiding the P/Invoke calls in an implementation, but testing that implementation will still be a problem. And you can imagine actually performing the impersonation in the test can be a bit problematic, given that the user would actually need to exist on the system.

A: Guideline: Don't test code that you haven't written.

You shouldn't be concerned with the WinAPI implementation not working (most probably it works as expected). Your concern should be testing the 'wiring', i.e. whether your code makes the right WinAPI call. In which case, all you need is to mock out the interface and let the mock framework tell you if the call was made with the right params. If yes, you're done.

* Create IWinAPIFacade (with relevant WinAPI methods) and an implementation CWinAPIFacade.
* Write a test which plugs in a mock of IWinAPIFacade and verifies that the appropriate call is made.
* Write a test to ensure that CWinAPIFacade is created and plugged in as a default (in normal functioning).
* Implement CWinAPIFacade, which simply blind-delegates to Platform Invoke calls - no need to auto-test this layer. Just do a manual verification. Hopefully this won't change that often and nothing breaks. If you find that it does in the future, barricade it with some tests.

A: I am not sure if I follow you.. You don't want to test the P/Invoke yourself (you didn't write it), so you want to test that the wrapper class is performing as expected, right? So, just create your interface in the wrapper class and test against that? In terms of needing to set up users etc., I think that would be a bullet you need to bite. It would seem odd to mock a wrapper P/Invoke call, since you would simply just confirm an interface exists :)
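As a rough sketch of the facade idea from the first answer (every type name here is invented; the DllImport declaration is the commonly published signature for advapi32's LogonUser):

using System;
using System.Runtime.InteropServices;

public interface IWinApiFacade
{
    bool LogonUser(string user, string domain, string password, out IntPtr token);
}

public class WinApiFacade : IWinApiFacade
{
    // Commonly used P/Invoke signature for advapi32!LogonUser.
    [DllImport("advapi32.dll", SetLastError = true)]
    private static extern bool LogonUser(
        string lpszUsername, string lpszDomain, string lpszPassword,
        int dwLogonType, int dwLogonProvider, out IntPtr phToken);

    private const int LOGON32_LOGON_INTERACTIVE = 2;
    private const int LOGON32_PROVIDER_DEFAULT = 0;

    public bool LogonUser(string user, string domain, string password, out IntPtr token)
    {
        // Blind delegation; nothing here worth unit testing.
        return LogonUser(user, domain, password,
            LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token);
    }
}

// Hand-rolled test double: records the call instead of touching advapi32.dll.
public class FakeWinApiFacade : IWinApiFacade
{
    public string LastUser;
    public bool Result = true;

    public bool LogonUser(string user, string domain, string password, out IntPtr token)
    {
        LastUser = user;
        token = new IntPtr(42); // dummy handle
        return Result;
    }
}

A unit test hands the helper class a FakeWinApiFacade and asserts on LastUser (or flips Result to exercise the failure path), while a sparingly-run integration test with a real account exercises WinApiFacade itself.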
{ "language": "en", "url": "https://stackoverflow.com/questions/53439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: When building a Handler, should it be .ashx or .axd?

Say I'm building an ASP.Net class that inherits from IHttpHandler; should I wire this up to a URL ending in .ashx, or should I use the .axd extension? Does it matter as long as there's no naming conflict?

A: Ahh.. ScottGu says it doesn't matter, but .ashx is slightly better because there's less chance of a conflict with things like trace.axd and others. That's why the flag went up in my head that .ashx might be better. http://forums.asp.net/t/964074.aspx

A: Out in "the wild", .ashx is definitely the most popular extension.
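For reference, a minimal .ashx handler can be a single file like the following (the file and class names are arbitrary); the WebHandler directive is what wires the extension to the class:

<%@ WebHandler Language="C#" Class="HelloHandler" %>

using System.Web;

public class HelloHandler : IHttpHandler
{
    // A handler instance can be reused across requests when it holds no per-request state.
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello from an .ashx handler");
    }
}

Dropping this into a site as Hello.ashx makes it requestable at that URL with no web.config changes, which is the main convenience of .ashx over mapping a custom extension.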
{ "language": "en", "url": "https://stackoverflow.com/questions/53450", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to stop IIS asking authentication for default website on localhost

I have IIS 5.1 installed on Windows XP Pro SP2. I have also installed VS 2008 Express with .NET 3.5, so obviously IIS is configured for ASP.NET automatically for .NET 3.5.

The problem is that whenever I access http://localhost, IE and Firefox both present an authentication box. Even if I enter the Administrator user and its password, the authentication fails. I have already checked the anonymous user access (with the IUSR_ user and the password controlled by IIS) in the Directory Security options of the default website. However, other deployed web apps work fine (they do not ask for any authentication). In IE this authentication process stops if I add http://localhost to the Intranet sites option.

Please note that the file system is FAT32 where IIS is installed.

Regards, Jatan

A: It could be because of a couple of browser settings. Try with these options checked:

Tools > Internet Options > Advanced > Enable Integrated Windows Authentication (works with Integrated Windows Authentication set on IIS)

Tools > Internet Options > Security > Local Intranet > Custom Level > Automatic Logon

Worst case, try adding localhost to the Trusted sites. If you are in a network, you can also try debugging by getting a network trace; it could be some proxy trying to authenticate.

A: IIS uses Integrated Authentication, and by default IE has the ability to use your Windows user account... but don't worry, so does Firefox, though you'll have to make a quick configuration change:

1) Open up Firefox and type in about:config as the URL.
2) In the Filter, type in ntlm.
3) Double click "network.automatic-ntlm-auth.trusted-uris", type in localhost and hit enter.
4) Write Thank You To Blogger.

As always, hope this helped you out. This was copied from link text.

A: It is easier to remove the "Default Web Site" and create a new one if you do not have any limitations. I did it and my problem was solved.

A: If you want authentication, try domainname\administrator as the username. If you don't want authentication, then remove all the tickboxes in the authenticated access section of the Directory Security > Edit window.

A: 1. Add an Admin user with a password.
2. Go to wwwroot props.
3. Give this user full access to this folder and its children.
4. Change the user of the AppPool to the added user using this article: http://technet.microsoft.com/en-us/library/cc771170(v=ws.10).aspx
5. Change the user of the website using this article: http://techblog.sunsetsurf.co.uk/2010/07/changing-the-user-iis-runs-as-windows-2008-iis-7-5/

Put the same username and password you created at step (1). It is working now, congrats.

A: What worked for me is:

Click Start > Control Panel > Administrative Tools > Internet Information Services.
Expand the left tree, right-click your website > Properties.
Click on Directory Security, then in "Anonymous access and authentication control" click on Edit.
Enable Anonymous access > Browse > enter the credentials of the admin (like Administrator) (check names) > click OK.
Apply the settings and it should work fine.
{ "language": "en", "url": "https://stackoverflow.com/questions/53464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: What is the best way to convert a Ruby string range to a Range object

I have some Ruby code which takes dates on the command line in the format:

-d 20080101,20080201..20080229,20080301

I want to run for all dates between 20080201 and 20080229 inclusive, plus the other dates present in the list. I can get the string 20080201..20080229, so what is the best way to convert this to a Range instance? Currently, I am using eval, but it feels like there should be a better way.

@Purfideas I was kind of looking for a more general answer for converting any string of type int..int to a Range, I guess.

A: Inject with no args works well for two-element arrays:

rng = '20080201..20080229'.split('..').inject { |s,e| s.to_i..e.to_i }

Of course, this can be made generic:

class Range
  def self.from_ary(a)
    a.inject { |s,e| s..e }
  end
end

rng = Range.from_ary('20080201..20080229'.split('..').map { |s| s.to_i })
rng.class # => Range

A: Range.new(*self.split("..").map(&:to_i))

A: Assuming you want the range to iterate properly through months etc., try:

require 'date'

ends = '20080201..20080229'.split('..').map { |d| Date.parse(d) }
(ends[0]..ends[1]).each do |d|
  p d.day
end

A: But then just do

ends = '20080201..20080229'.split('..').map { |d| Integer(d) }
ends[0]..ends[1]

Anyway, I don't recommend eval, for security reasons.

A: Ranger uses regex to validate strings with no SQL injection fear, and then eval.

A: Combining @Purfideas' answer with another answer somewhere on StackOverflow, I solved this by also surrounding the code with an input check, so the only thing used is a valid enumerable:

if !value[/^[0-9]+\.\.[0-9]+$/].nil?
  ends = value.split('..').map { |d| Integer(d) }
  value = ends[0]..ends[1]
end

It essentially rewrites your string value to an enumerable value. This comes in handy if you add an enumerable field in a YAML config file. If you need it for your application, you could extend the regex with an optional third literal dot.

A: If we do it like

v = "20140101..20150101"
raise "Error: invalid format: #{v}" if /\d{8}\.\.\d{8}/ !~ v
r = eval(v)

and the attacker has a way of bypassing the raise check (simply by means of manipulating the runtime to disable exceptions), then we can get a dangerous eval which will potentially destroy the universe. So, for the sake of reducing attack vectors, we check the format, then do the parsing manually, then check the results:

v = "20140101..20150101"
raise "Error: invalid format: #{v}" if /\d{8}\.\.\d{8}/ !~ v
r = Range.new(*v.split(/\.\./).map(&:to_i))
raise "Error: invalid range: #{v}" if r.first > r.last

A: Here, suppose you want to store the hash as a system constant value and fetch it in any model. The hash key will be a range value:

hash_1 = { 1..5 => 'a', 6..12 => 'b', 13..67 => 'c', 68..9999999 => 'd' }

Then create the system constant with the value hash_1.to_json (.to_json will convert your hash object to a JSON object). Now, inside the code, create a new hash hash_2:

hash_2 = {}
JSON.parse(SystemConstant.get('Constant_name')).each do |key, val|
  temp_k = key.split('..').map { |d| Integer(d) }
  hash_2[temp_k[0]..temp_k[1]] = val
end

The new hash_2 will be the required hash_1.

A: I had a similar requirement, although in my case the strings were in two possible formats: occasionally they were single-number strings such as "7", other times they were ranges such as "10-14". Either way, I wanted to turn the string into an Enumerable collection of numbers.
My approach (inspired by the highest voted answer) was:

def numbers(from_string:)
  if from_string.include?('-')
    return Range.new(*from_string.split('-').map(&:to_i))
  else
    return [from_string.to_i] # put number in an array so we can enumerate over it
  end
end

It can also be done as a (long) one-liner if you think that's more readable:

from_string.include?('-') ? Range.new(*from_string.split('-').map(&:to_i)) : [from_string.to_i]

I was processing a long list of known strings, not dealing with arbitrary user input, so this doesn't guard against malicious input.
{ "language": "en", "url": "https://stackoverflow.com/questions/53472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Javascript - Applying class to an HTML tag given an attribute/value

I am trying to apply styles to HTML tags dynamically by reading in the value of certain HTML attributes and applying a class name based on their values. For instance, if I have:

<p height="30">

I want to apply a class="h30" to that paragraph so that I can style it in my style sheet. I can't find any information on getting the value of an attribute that is not an id or class. Help?

A: I would highly recommend using something like jQuery, where adding classes is trivial:

$("#someId").addClass("newClass");

so in your case:

$("p[height='30']").addClass("h30");

This selects all paragraph tags where the height attribute is 30 and adds the class h30 to them.

A: See: getAttribute(). The parameter is the name of the attribute (case insensitive). The return value is the value of the attribute (a string). Be sure to see the Remarks in MSDN before dealing with IE...

A: It's better to separate layout and presentation. Despite using CSS, you're tying these two together. Use better class names (why does it have to have 30px height? Is it a menubar? footer? banner?)

A: Attributes are just properties (usually). So just try:

for (e in ...) {
  if (e.height == 30) {
    e.className = "h30";
  }
}

Or use something like jQuery to simplify this kind of stuff.
{ "language": "en", "url": "https://stackoverflow.com/questions/53473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: ASP.NET MVC vs. Web client software factory (WCSF)

I have recently been doing a bit of investigation into the different types of Model View architectures, and need to decide which one to pursue for future in-house development. As I'm currently working in a Microsoft shop that has ASP.NET skills, it seems my options are between ASP.NET MVC and WCSF (MonoRail is probably out of the question as it wouldn't be supported by Microsoft).

After reading about the ASP.NET MVC framework, using the WCSF as a yardstick, I picked up the following points:

* ASP.NET MVC cannot use web controls that rely on postbacks, whereas WCSF can.
* You have more control over the URLs in an ASP.NET MVC site as opposed to a WCSF site.
* An ASP.NET MVC site will probably be easier to test than an equivalent WCSF version.
* It seems that the WCSF still uses the code-behind to control UI events under some circumstances, but ASP.NET MVC doesn't allow this.

What are some of the other considerations? What have I misunderstood? Is there anybody out there who has used both frameworks and has advice either way?

A: Not to start a flame war, but I found the WCSF to be quite convoluted. The elegance and simplicity of MVC blow away MVP, which feels like a pattern that is just grafted onto WebForms.

A: We opted for the WCSF after doing exactly the same evaluation. We felt that the MVP pattern gave us more options, i.e. the ability to use server controls. Our development team is mostly made up of programmers from a myriad of disciplines (C++, BizTalk, web, etc.), but all had focused mostly on MS-type development, so the learning curve in adopting the patterns was not too steep for our team. We are more than happy with our choice.

A: "ASP.NET MVC cannot use web controls that rely on postbacks, whereas WCSF can."

You should think of WCSF as guidance about how to use the existing WebForms infrastructure, especially introducing Model-View-Presenter to help enforce separation of concerns. It also increases the testability of the resulting code.

"You have more control over the urls in an ASP.NET MVC site as opposed to a WCSF site."

If you can target 3.5 SP1, you can use the new Routing system with a traditional WebForms site. Routing is not limited to MVC. For example, take a look at Dynamic Data (which also ships in 3.5 SP1).

"An ASP.NET MVC site will probably be easier to test than an equivalent WCSF version."

This is true because it uses the new abstraction classes for HttpContext, HttpRequest, HttpResponse, etc. There's nothing inherently more testable about the MVC pattern than the MVP pattern. They're both instances of "Separated Presentation", and both increase testability.

"It seems that the WCSF still uses the code behind to control UI events under some circumstances, but ASP.NET doesn't allow this."

In Model-View-Presenter, since the outside world interacts with views (i.e., the URL points to the view), the views will naturally be responding to these events. They should be as simple as possible, either by calling the presenter or by offering events that the presenter can subscribe to. Model-View-Controller overcomes this limitation by having the outside world interact with controllers. This means your views can be a lot "dumber" about non-presentation things.

As for which you should use, I think the answer comes down to which one best suits your project goals. Sometimes WebForms and the rich third-party control vendor availability will be preferable, and in some cases raw simplicity and fine-grained HTML control will favor MVC.
A: You might also consider the backgrounds of your developers (if any have already been identified). If they come from a strict ASP.NET background, they will be more comfortable around WCSF (although in my experience, it still took them a few weeks to really be comfortable around MVP). If they come from a Java/Rails background, or have used other MVC architectures before, then obviously they'll be happier there (and in my experience get very snooty about anything other than MVC).

A: MVC is a much simpler paradigm and is more similar to how all other frameworks do web development. WebForms is simply way too much jumping through hoops and too many layers of abstraction to try and achieve simplicity. IMHO, MVC will be the default ASP.NET architecture within a few years, as more and more people realize the simplicity and ease of development and testing that it brings. I have been doing MVC development for a year and a half and would never even think of going back to WebForms on a new project.

A: "An ASP.NET MVC site will probably be easier to test than an equivalent WCSF version. This is true because it uses the new abstraction classes for HttpContext, HttpRequest, HttpResponse, etc. There's nothing inherently more testable about the MVC pattern than the MVP pattern. They're both instances of 'Separated Presentation', and both increase testability."

This is probably debatable, but there is literature to suggest that an MVP design model is easier to unit test than an MVC design model if you have Views that are packed with logic. To summarize, in the MVP design model the Presenter handles the work that might be handled by the View in the MVC design model. The logic that might be contained in the MVC View does not facilitate unit testing.

Here are some references to literature that I have read covering this concept and why keeping your View light is better for many reasons, including facilitating unit testing:

http://martinfowler.com/eaaDev/uiArchs.html
http://martinfowler.com/eaaDev/SupervisingPresenter.html
http://martinfowler.com/eaaDev/PassiveScreen.html

A: Why not attach both to Northwind and see which fits best for you and your situation?
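To make the MVP-testability point above concrete, here is a bare-bones passive-view sketch; all of the names are invented, and the point is only that the presenter's logic is reachable from a plain unit test with no page life cycle:

using System;

// The view exposes only dumb properties; in WCSF, the WebForms code-behind would implement this.
public interface ICustomerView
{
    string CustomerName { get; set; }
    string ErrorMessage { set; }
}

public class CustomerPresenter
{
    private readonly ICustomerView _view;

    public CustomerPresenter(ICustomerView view)
    {
        _view = view;
    }

    // Presentation logic lives here, where a plain unit test can reach it.
    public void Save()
    {
        if (string.IsNullOrEmpty(_view.CustomerName))
            _view.ErrorMessage = "Name is required";
        // ... otherwise persist via a service/model, omitted here.
    }
}

// Hand-rolled fake view: no HttpContext, no postbacks, no web server.
public class FakeCustomerView : ICustomerView
{
    public string CustomerName { get; set; }
    public string ErrorMessage { get; set; }
}

public class Program
{
    public static void Main()
    {
        FakeCustomerView view = new FakeCustomerView();
        new CustomerPresenter(view).Save();
        Console.WriteLine(view.ErrorMessage); // prints "Name is required"
    }
}

ASP.NET MVC gets the equivalent testability by putting this logic in a controller; either way, the win comes from keeping the view itself as dumb as possible.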
{ "language": "en", "url": "https://stackoverflow.com/questions/53479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Fuzzy text (sentences/titles) matching in C#

Hey, I'm using Levenshtein's algorithm to get the distance between source and target strings. I also have a method which returns a value from 0 to 1:

/// <summary>
/// Gets the similarity between two strings.
/// All relation scores are in the [0, 1] range,
/// which means that if the score gets a maximum value (equal to 1)
/// then the two strings are absolutely similar
/// </summary>
/// <param name="s1">The string1.</param>
/// <param name="s2">The string2.</param>
/// <returns></returns>
public static float CalculateSimilarity(String s1, String s2)
{
    if ((s1 == null) || (s2 == null))
        return 0.0f;

    float dis = LevenshteinDistance.Compute(s1, s2);
    float maxLen = s1.Length;
    if (maxLen < s2.Length)
        maxLen = s2.Length;

    if (maxLen == 0.0F)
        return 1.0F;
    else
        return 1.0F - dis / maxLen;
}

But this for me is not enough, because I need a more complex way to match two sentences. For example, I want to automatically tag some music. I have the original song names, and I have songs with trash, like super, quality, years like 2007, 2008, etc. Also some files have just http://trash..thash..song_name_mp3.mp3, others are normal. I want to create an algorithm which will work better than mine does now... Maybe anyone can help me? Here is my current algorithm:

/// <summary>
/// if we need to ignore this target.
/// </summary>
/// <param name="targetString">The target string.</param>
/// <returns></returns>
private bool doIgnore(String targetString)
{
    if ((targetString != null) && (targetString != String.Empty))
    {
        for (int i = 0; i < ignoreWordsList.Length; ++i)
        {
            //* if we found ignore word or target string matching some special cases like years (Regex).
            if (targetString == ignoreWordsList[i] || (isMatchInSpecialCases(targetString)))
                return true;
        }
    }
    return false;
}

/// <summary>
/// Removes the duplicates.
/// </summary>
/// <param name="list">The list.</param>
private void removeDuplicates(List<String> list)
{
    if ((list != null) && (list.Count > 0))
    {
        for (int i = 0; i < list.Count - 1; ++i)
        {
            if (list[i] == list[i + 1])
            {
                list.RemoveAt(i);
                --i;
            }
        }
    }
}

/// <summary>
/// Does the fuzzy match.
/// </summary>
/// <param name="targetTitle">The target title.</param>
/// <returns></returns>
private TitleMatchResult doFuzzyMatch(String targetTitle)
{
    TitleMatchResult matchResult = null;

    if (targetTitle != null && targetTitle != String.Empty)
    {
        try
        {
            //* change target title (string) to lower case.
            targetTitle = targetTitle.ToLower();

            //* scores, we will select higher score at the end.
            Dictionary<Title, float> scores = new Dictionary<Title, float>();

            //* do split special chars: '-', ' ', '.', ',', '?', '/', ':', ';', '%', '(', ')', '#', '\"', '\'', '!', '|', '^', '*', '[', ']', '{', '}', '=', '!', '+', '_'
            List<String> targetKeywords = new List<string>(targetTitle.Split(ignoreCharsList, StringSplitOptions.RemoveEmptyEntries));

            //* remove all trash from keywords, like super, quality, etc..
            targetKeywords.RemoveAll(delegate(String x) { return doIgnore(x); });
            //* sort keywords.
            targetKeywords.Sort();
            //* remove some duplicates.
            removeDuplicates(targetKeywords);

            //* go through all original titles.
            foreach (Title sourceTitle in titles)
            {
                float tempScore = 0f;
                //* split orig. title to keywords list.
                List<String> sourceKeywords = new List<string>(sourceTitle.Name.Split(ignoreCharsList, StringSplitOptions.RemoveEmptyEntries));
                sourceKeywords.Sort();
                removeDuplicates(sourceKeywords);

                //* go through all source title keywords.
                foreach (String keyw1 in sourceKeywords)
                {
                    float max = float.MinValue;
                    foreach (String keyw2 in targetKeywords)
                    {
                        float currentScore = StringMatching.StringMatching.CalculateSimilarity(keyw1.ToLower(), keyw2);
                        if (currentScore > max)
                        {
                            max = currentScore;
                        }
                    }
                    tempScore += max;
                }

                //* calculate average score.
                float averageScore = (tempScore / Math.Max(targetKeywords.Count, sourceKeywords.Count));

                //* if average score is bigger than minimal score and target title is not in this source title ignore list.
                if (averageScore >= minimalScore && !sourceTitle.doIgnore(targetTitle))
                {
                    //* add score.
                    scores.Add(sourceTitle, averageScore);
                }
            }

            //* choose biggest score.
            float maxi = float.MinValue;
            foreach (KeyValuePair<Title, float> kvp in scores)
            {
                if (kvp.Value > maxi)
                {
                    maxi = kvp.Value;
                    matchResult = new TitleMatchResult(maxi, kvp.Key, MatchTechnique.FuzzyLogic);
                }
            }
        }
        catch
        {
        }
    }
    //* return result.
    return matchResult;
}

This works normally, but just in some cases a lot of titles which should match do not match... I think I need some kind of formula to play with weights etc., but I can't think of one.. Ideas? Suggestions? Algorithms?

By the way, I already know this topic (my colleague already posted it, but we cannot come up with a proper solution for this problem): Approximate string matching algorithms

A: It sounds like what you want may be a longest substring match. That is, in your example, two files like trash..thash..song_name_mp3.mp3 and garbage..spotch..song_name_mp3.mp3 would end up looking the same. You'd need some heuristics there, of course. One thing you might try is putting the string through a soundex converter. Soundex is the "codec" used to see if things "sound" the same (as you might tell a telephone operator). It's more or less a rough phonetic and mispronunciation semi-proof transliteration. It is definitely poorer than edit distance, but much, much cheaper. (The official use is for names, and only uses three characters. There's no reason to stop there, though; just use the mapping for every character in the string. See wikipedia for details.) So my suggestion would be to soundex your strings, chop each one into a few length tranches (say 5, 10, 20) and then just look at clusters. Within clusters you can use something more expensive like edit distance or max substring.

A: Your problem here may be distinguishing between noise words and useful data:

* Rolling_Stones.Best_of_2003.Wild_Horses.mp3
* Super.Quality.Wild_Horses.mp3
* Tori_Amos.Wild_Horses.mp3

You may need to produce a dictionary of noise words to ignore. That seems clunky, but I'm not sure there's an algorithm that can distinguish between band/album names and noise.

A: There's a lot of work done on the somewhat related problem of DNA sequence alignment (search for "local sequence alignment") - the classic algorithm is "Needleman-Wunsch", and more complex modern ones are also easy to find. The idea, similar to Greg's answer, is that instead of identifying and comparing keywords, you try to find the longest loosely matching substrings within the long strings. That being said, if the only goal is sorting music, a number of regular expressions covering the possible naming schemes would probably work better than any generic algorithm.

A: Kind of old, but it might be useful to future visitors.
If you're already using the Levenshtein algorithm and you need to go a little better, I describe some very effective heuristics in this solution: Getting the closest string match The key is that you come up with 3 or 4 (or more) methods of gauging the similarity between your phrases (Levenshtein distance is just one method) - and then using real examples of strings you want to match as similar, you adjust the weightings and combinations of those heuristics until you get something that maximizes the number of positive matches. Then you use that formula for all future matches and you should see great results. If a user is involved in the process, it's also best if you provide an interface which allows the user to see additional matches that rank highly in similarity in case they disagree with the first choice. Here's an excerpt from the linked answer. If you end up wanting to use any of this code as is, I apologize in advance for having to convert VBA into C#. Simple, speedy, and a very useful metric. Using this, I created two separate metrics for evaluating the similarity of two strings. One I call "valuePhrase" and one I call "valueWords". valuePhrase is just the Levenshtein distance between the two phrases, and valueWords splits the string into individual words, based on delimiters such as spaces, dashes, and anything else you'd like, and compares each word to each other word, summing up the shortest Levenshtein distance connecting any two words. Essentially, it measures whether the information in one 'phrase' is really contained in another, just as a word-wise permutation. I spent a few days as a side project coming up with the most efficient way possible of splitting a string based on delimiters. valueWords, valuePhrase, and Split function: Public Function valuePhrase#(ByRef S1$, ByRef S2$) valuePhrase = LevenshteinDistance(S1, S2) End Function Public Function valueWords#(ByRef S1$, ByRef S2$) Dim wordsS1$(), wordsS2$() wordsS1 = SplitMultiDelims(S1, " _-") wordsS2 = SplitMultiDelims(S2, " _-") Dim word1%, word2%, thisD#, wordbest# Dim wordsTotal# For word1 = LBound(wordsS1) To UBound(wordsS1) wordbest = Len(S2) For word2 = LBound(wordsS2) To UBound(wordsS2) thisD = LevenshteinDistance(wordsS1(word1), wordsS2(word2)) If thisD < wordbest Then wordbest = thisD If thisD = 0 Then GoTo foundbest Next word2 foundbest: wordsTotal = wordsTotal + wordbest Next word1 valueWords = wordsTotal End Function '''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''' ' SplitMultiDelims ' This function splits Text into an array of substrings, each substring ' delimited by any character in DelimChars. Only a single character ' may be a delimiter between two substrings, but DelimChars may ' contain any number of delimiter characters. It returns a single element ' array containing all of text if DelimChars is empty, or a 1 or greater ' element array if the Text is successfully split into substrings. ' If IgnoreConsecutiveDelimiters is true, empty array elements will not occur. ' If Limit greater than 0, the function will only split Text into 'Limit' ' array elements or less. The last element will contain the rest of Text. 
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
Function SplitMultiDelims(ByRef Text As String, ByRef DelimChars As String, _
        Optional ByVal IgnoreConsecutiveDelimiters As Boolean = False, _
        Optional ByVal Limit As Long = -1) As String()
    Dim ElemStart As Long, N As Long, M As Long, Elements As Long
    Dim lDelims As Long, lText As Long
    Dim Arr() As String

    lText = Len(Text)
    lDelims = Len(DelimChars)

    If lDelims = 0 Or lText = 0 Or Limit = 1 Then
        ReDim Arr(0 To 0)
        Arr(0) = Text
        SplitMultiDelims = Arr
        Exit Function
    End If

    ReDim Arr(0 To IIf(Limit = -1, lText - 1, Limit))

    Elements = 0: ElemStart = 1
    For N = 1 To lText
        If InStr(DelimChars, Mid(Text, N, 1)) Then
            Arr(Elements) = Mid(Text, ElemStart, N - ElemStart)
            If IgnoreConsecutiveDelimiters Then
                If Len(Arr(Elements)) > 0 Then Elements = Elements + 1
            Else
                Elements = Elements + 1
            End If
            ElemStart = N + 1
            If Elements + 1 = Limit Then Exit For
        End If
    Next N

    'Get the last token terminated by the end of the string into the array
    If ElemStart <= lText Then Arr(Elements) = Mid(Text, ElemStart)

    'Since the end of string counts as the terminating delimiter, if the last character
    'was also a delimiter, we treat the two as consecutive, and so ignore the last element
    If IgnoreConsecutiveDelimiters Then If Len(Arr(Elements)) = 0 Then Elements = Elements - 1

    ReDim Preserve Arr(0 To Elements) 'Chop off unused array elements
    SplitMultiDelims = Arr
End Function

Measures of Similarity
Using these two metrics, and a third which simply computes the distance between two strings, I have a series of variables over which I can run an optimization algorithm to achieve the greatest number of matches. Fuzzy string matching is, itself, a fuzzy science, and so by creating linearly independent metrics for measuring string similarity, and having a known set of strings we wish to match to each other, we can find the parameters that, for our specific styles of strings, give the best fuzzy match results.
Initially, the goal of the metric was to have a low search value for an exact match, and increasing search values for increasingly permuted measures. In an impractical case, this was fairly easy to define using a set of well defined permutations, and engineering the final formula such that they had increasing search value results as desired. As you can see, the last two metrics, which are fuzzy string matching metrics, already have a natural tendency to give low scores to strings that are meant to match (down the diagonal). This is very good.
Application
To allow the optimization of fuzzy matching, I weight each metric. As such, every application of fuzzy string match can weight the parameters differently. The formula that defines the final score is a simple combination of the metrics and their weights:
value = Min(phraseWeight*phraseValue, wordsWeight*wordsValue)*minWeight
      + Max(phraseWeight*phraseValue, wordsWeight*wordsValue)*maxWeight
      + lengthWeight*lengthValue
Using an optimization algorithm (a neural network is best here because it is a discrete, multi-dimensional problem), the goal is now to maximize the number of matches. I created a function that detects the number of correct matches of each set to each other, as can be seen in this final screenshot. A column or row gets a point if the lowest score is assigned to the string that was meant to be matched, and partial points are given if there is a tie for the lowest score and the correct match is among the tied matched strings. I then optimized it.
You can see that a green cell is the column that best matches the current row, and a blue square around the cell is the row that best matches the current column. The score in the bottom corner is roughly the number of successful matches and this is what we tell our optimization problem to maximize. A: There is a GitHub repo implementing several methods.
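To make the weighted-combination idea above concrete, here is a minimal C# sketch of the same scheme. It is an illustration, not the answerer's actual code: the Levenshtein helper, the delimiter set, and the default weights are all assumptions to be tuned against your own known-good matches (lower scores mean better matches, since these are distances):

using System;
using System.Linq;

static class FuzzyScore
{
    // Plain dynamic-programming Levenshtein (edit) distance.
    static int Levenshtein(string a, string b)
    {
        var d = new int[a.Length + 1, b.Length + 1];
        for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
        for (int j = 0; j <= b.Length; j++) d[0, j] = j;
        for (int i = 1; i <= a.Length; i++)
            for (int j = 1; j <= b.Length; j++)
            {
                int cost = a[i - 1] == b[j - 1] ? 0 : 1;
                d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1),
                                   d[i - 1, j - 1] + cost);
            }
        return d[a.Length, b.Length];
    }

    // "valuePhrase": distance between the phrases taken whole.
    static int ValuePhrase(string s1, string s2) => Levenshtein(s1, s2);

    // "valueWords": for each word of s1, the distance to its best-matching word in s2, summed.
    static int ValueWords(string s1, string s2)
    {
        char[] delims = { ' ', '_', '-', '.' };  // assumed delimiter set
        var words1 = s1.Split(delims, StringSplitOptions.RemoveEmptyEntries);
        var words2 = s2.Split(delims, StringSplitOptions.RemoveEmptyEntries);
        if (words1.Length == 0 || words2.Length == 0) return Math.Max(s1.Length, s2.Length);
        return words1.Sum(w1 => words2.Min(w2 => Levenshtein(w1, w2)));
    }

    // Combined score, mirroring the answer's formula:
    // Min(p, w) * minWeight + Max(p, w) * maxWeight + lengthWeight * lengthValue
    // The default weights below are placeholders, not tuned values.
    public static double Score(string s1, string s2, double minWeight = 1.0,
                               double maxWeight = 0.5, double lengthWeight = 0.1)
    {
        double p = ValuePhrase(s1.ToLower(), s2.ToLower());
        double w = ValueWords(s1.ToLower(), s2.ToLower());
        double lengthValue = Math.Abs(s1.Length - s2.Length);
        return Math.Min(p, w) * minWeight + Math.Max(p, w) * maxWeight
             + lengthWeight * lengthValue;
    }
}

With that in place, picking the best candidate is just taking the minimum of Score(source, candidate) over all candidates, and the three weights are exactly the knobs the answer suggests optimizing against a labeled set of known matches.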
{ "language": "en", "url": "https://stackoverflow.com/questions/53480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: How do I change the password of the root user in MySQL? I have long since forgotten the password for the root user on one of my boxes. Is there a way I can change it without having to log in to the instance, or will I have to reinstall?
A: A quick Google resulted in this answer. In the root shell type:
mysqladmin -u root password <password>
A: Step 1 Stop the database:
shell> /etc/init.d/mysql stop
Step 2 Restart the database
* without password authentication
* without connection to the network
Access to the database is then only possible through its socket file '/var/lib/mysql/mysql.sock'.
shell> mysqld --user=mysql --pid-file=/var/lib/mysql/mysqld.pid \ --socket=/var/lib/mysql/mysql.sock --datadir=/var/lib/mysql \ --skip-grant-tables --skip-networking &
Step 3 Connect to the database and change the password:
shell> mysql --database mysql --socket=/var/lib/mysql/mysql.sock
If you want to, show all users:
mysql> select User, password from user;
Set the new password:
mysql> update user set password=password('NEW PASS') WHERE User='USERNAME';
Leave the database connection:
mysql> exit
Step 4 Restart the database server "normally".
shell> kill `cat /var/lib/mysql/mysqld.pid`
shell> /etc/init.d/mysql start
A: If you are running an Ubuntu server (possibly also Debian?) you can easily reset it. If you are on 12.04:
sudo dpkg-reconfigure mysql-server-5.5
If you are on 10.04:
sudo dpkg-reconfigure mysql-server-5.1
If you are not sure which mysql-server version is installed you can try:
dpkg --get-selections | grep mysql-server
See for more info: https://help.ubuntu.com/12.04/serverguide/mysql.html https://help.ubuntu.com/10.04/serverguide/mysql.html
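A note for anyone on a newer server: the update user set password=password(...) statement above only works on older versions. From MySQL 5.7 the password column was renamed to authentication_string, and the supported way to set a password is ALTER USER. A minimal sketch (the user and host shown are the usual defaults; adjust as needed):
mysql> -- If the server was started with --skip-grant-tables, load the grant tables first:
mysql> FLUSH PRIVILEGES;
mysql> -- MySQL 5.7.6+ / 8.0 syntax:
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'NEW_PASS';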
{ "language": "en", "url": "https://stackoverflow.com/questions/53482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do I enable external access to MySQL Server? How do I enable external access to MySQL Server? I can connect locally but I cannot connect from another box on the network. I just tried
grant all privileges on *.* to root@'%' identified by '*****' with grant option;
And restarted MySQL Server with no success.
A: You probably have to edit the configuration file (usually my.cnf) to listen on the external interface instead of on localhost only. Change the bind-address parameter to your machine's IP address. If this is an old MySQL installation, you should comment out the skip-networking parameter. Afterwards, restart MySQL and you'll be set.
A: The command and syntax look fine. Have you checked the server is listening on an interface other than 127.0.0.1? By default I'm pretty sure it only listens on the localhost address (127.0.0.1).
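Putting both answers together, a minimal sketch of the my.cnf change (the file location and the example IP are placeholders; they vary by distribution and network):
# [mysqld] section of my.cnf (often /etc/mysql/my.cnf or /etc/my.cnf)
[mysqld]
# bind-address = 127.0.0.1    # default: listen on localhost only
bind-address = 192.168.1.10   # this machine's LAN address (or 0.0.0.0 for all interfaces)
# skip-networking             # comment this out on old installations
Then restart the server and re-check the grant:
shell> /etc/init.d/mysql restart
mysql> grant all privileges on *.* to 'someuser'@'%' identified by '*****';
mysql> flush privileges;
If it still fails after that, check that port 3306 isn't blocked by a firewall between the two boxes.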
{ "language": "en", "url": "https://stackoverflow.com/questions/53491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Regular expression that matches valid IPv6 addresses I'm having trouble writing a regular expression that matches valid IPv6 addresses, including those in their compressed form (with :: or leading zeros omitted from each byte pair). Can someone suggest a regular expression that would fulfill the requirement? I'm considering expanding each byte pair and matching the result with a simpler regex.
A: This regular expression will match valid IPv6 and IPv4 addresses in accordance with the GNU C++ implementation of regex, with REGULAR EXTENDED mode used:
"^\s*((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])(\.(25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])){3}))|:)))(%.+)?\s*$"
A: I'm not an IPv6 expert, but I think you can get a pretty good result more easily with this one:
^([0-9A-Fa-f]{0,4}:){2,7}([0-9A-Fa-f]{1,4}$|((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\.|$)){4})$
To answer "is this a valid IPv6 address", it looks OK to me. To break it down in parts... forget it. I've omitted the unspecified one (::) since there is no use in having an "unspecified address" in my database.
The beginning: ^([0-9A-Fa-f]{0,4}:){2,7} <-- matches the compressible part; we can translate this as: between 2 and 7 colons which may have hexadecimal numbers between them.
Followed by: [0-9A-Fa-f]{1,4}$ <-- a hexadecimal number (leading zeros omitted) OR ((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)(\.|$)){4} <-- an IPv4 address
A: The following will validate IPv4, IPv6 (full and compressed), and IPv6v4 (full and compressed) addresses:
'/^(?>(?>([a-f0-9]{1,4})(?>:(?1)){7}|(?!(?:.*[a-f0-9](?>:|$)){8,})((?1)(?>:(?1)){0,6})?::(?2)?)|(?>(?>(?1)(?>:(?1)){5}:|(?!(?:.*[a-f0-9]:){6,})(?3)?::(?>((?1)(?>:(?1)){0,4}):)?)?(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])(?>\.(?4)){3}))$/iD'
A: If you use Perl try Net::IPv6Addr
use Net::IPv6Addr;
if( defined Net::IPv6Addr::is_ipv6($ip_address) ){
    print "Looks like an ipv6 address\n";
}
NetAddr::IP
use NetAddr::IP;
my $obj = NetAddr::IP->new6($ip_address);
Validate::IP
use Validate::IP qw'is_ipv6';
if( is_ipv6($ip_address) ){
    print "Looks like an ipv6 address\n";
}
A: A simple regex that will match, but which I wouldn't recommend for validation of any sort, is this:
([A-Fa-f0-9]{1,4}::?){1,7}[A-Fa-f0-9]{1,4}
Note this matches compression anywhere in the address, though it won't match the loopback address ::1. I find this a reasonable compromise in order to keep the regex simple.
I successfully use this in iTerm2 smart selection rules to quad-click IPv6 addresses.
A: Beware! In Java, the use of InetAddress and related classes (Inet4Address, Inet6Address, URL) may involve network traffic! E.g. DNS resolving (URL.equals, InetAddress from string!). Such a call may take long and is blocking! For IPv6 I have something like this. This of course does not handle the very subtle details of IPv6, like the fact that zone indices are allowed only on some classes of IPv6 addresses. And this regex is not written for group capturing; it is only a "matches" kind of regexp.
S - IPv6 segment = [0-9a-f]{1,4}
I - IPv4 = (?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9]{1,2})\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9]{1,2})
Schematic (the first part matches IPv6 addresses with an IPv4 suffix, the second part matches IPv6 addresses, the last part the zone index):
(
  (
    ::(S:){0,5}|
    S::(S:){0,4}|
    (S:){2}:(S:){0,3}|
    (S:){3}:(S:){0,2}|
    (S:){4}:(S:)?|
    (S:){5}:|
    (S:){6}
  ) I |
  :(:|(:S){1,7})|
  S:(:|(:S){1,6})|
  (S:){2}(:|(:S){1,5})|
  (S:){3}(:|(:S){1,4})|
  (S:){4}(:|(:S){1,3})|
  (S:){5}(:|(:S){1,2})|
  (S:){6}(:|(:S))|
  (S:){7}:|
  (S:){7}S
)
(?:%[0-9a-z]+)?
And here the mighty regex (case insensitive, surround with whatever is needed like beginning/end of line, etc.):
(?:
  (?:
    ::(?:[0-9a-f]{1,4}:){0,5}|
    [0-9a-f]{1,4}::(?:[0-9a-f]{1,4}:){0,4}|
    (?:[0-9a-f]{1,4}:){2}:(?:[0-9a-f]{1,4}:){0,3}|
    (?:[0-9a-f]{1,4}:){3}:(?:[0-9a-f]{1,4}:){0,2}|
    (?:[0-9a-f]{1,4}:){4}:(?:[0-9a-f]{1,4}:)?|
    (?:[0-9a-f]{1,4}:){5}:|
    (?:[0-9a-f]{1,4}:){6}
  )
  (?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9]{1,2})\.){3}
  (?:25[0-5]|2[0-4][0-9]|[01]?[0-9]{1,2})|
  :(?::|(?::[0-9a-f]{1,4}){1,7})|
  [0-9a-f]{1,4}:(?::|(?::[0-9a-f]{1,4}){1,6})|
  (?:[0-9a-f]{1,4}:){2}(?::|(?::[0-9a-f]{1,4}){1,5})|
  (?:[0-9a-f]{1,4}:){3}(?::|(?::[0-9a-f]{1,4}){1,4})|
  (?:[0-9a-f]{1,4}:){4}(?::|(?::[0-9a-f]{1,4}){1,3})|
  (?:[0-9a-f]{1,4}:){5}(?::|(?::[0-9a-f]{1,4}){1,2})|
  (?:[0-9a-f]{1,4}:){6}(?::|(?::[0-9a-f]{1,4}))|
  (?:[0-9a-f]{1,4}:){7}:|
  (?:[0-9a-f]{1,4}:){7}[0-9a-f]{1,4}
)
(?:%[0-9a-z]+)?
A: The following regex is for IPv6 only. Group 1 matches the IP.
(([0-9a-fA-F]{0,4}:){1,7}[0-9a-fA-F]{0,4})
A: Regexes for ipv6 can get really tricky when you consider addresses with embedded ipv4 and addresses that are compressed, as you can see from some of these answers. The open-source IPAddress Java library will validate all standard representations of IPv6 and IPv4 and also supports prefix-length (and validation of such). Disclaimer: I am the project manager of that library. Code example:
try {
    IPAddressString str = new IPAddressString("::1");
    IPAddress addr = str.toAddress();
    if(addr.isIPv6() || addr.isIPv6Convertible()) {
        IPv6Address ipv6Addr = addr.toIPv6();
    }
    //use address
} catch(AddressStringException e) {
    //e.getMessage has validation error
}
A: I was unable to get @Factor Mystic's answer to work with POSIX regular expressions, so I wrote one that works with POSIX regular expressions and PERL regular expressions.
It should match: * *IPv6 addresses *zero compressed IPv6 addresses (section 2.2 of rfc5952) *link-local IPv6 addresses with zone index (section 11 of rfc4007) *IPv4-Embedded IPv6 Address (section 2 of rfc6052) *IPv4-mapped IPv6 addresses (section 2.1 of rfc2765) *IPv4-translated addresses (section 2.1 of rfc2765) IPv6 Regular Expression: (([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])) For ease of reading, the following is the above regular expression split at major OR points into separate lines: # IPv6 RegEx ( ([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}| # 1:2:3:4:5:6:7:8 ([0-9a-fA-F]{1,4}:){1,7}:| # 1:: 1:2:3:4:5:6:7:: ([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}| # 1::8 1:2:3:4:5:6::8 1:2:3:4:5:6::8 ([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}| # 1::7:8 1:2:3:4:5::7:8 1:2:3:4:5::8 ([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}| # 1::6:7:8 1:2:3:4::6:7:8 1:2:3:4::8 ([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}| # 1::5:6:7:8 1:2:3::5:6:7:8 1:2:3::8 ([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}| # 1::4:5:6:7:8 1:2::4:5:6:7:8 1:2::8 [0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})| # 1::3:4:5:6:7:8 1::3:4:5:6:7:8 1::8 :((:[0-9a-fA-F]{1,4}){1,7}|:)| # ::2:3:4:5:6:7:8 ::2:3:4:5:6:7:8 ::8 :: fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}| # fe80::7:8%eth0 fe80::7:8%1 (link-local IPv6 addresses with zone index) ::(ffff(:0{1,4}){0,1}:){0,1} ((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3} (25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])| # ::255.255.255.255 ::ffff:255.255.255.255 ::ffff:0:255.255.255.255 (IPv4-mapped IPv6 addresses and IPv4-translated addresses) ([0-9a-fA-F]{1,4}:){1,4}: ((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3} (25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]) # 2001:db8:3:4::192.0.2.33 64:ff9b::192.0.2.33 (IPv4-Embedded IPv6 Address) ) # IPv4 RegEx ((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]) To make the above easier to understand, the following "pseudo" code replicates the above: IPV4SEG = (25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]) IPV4ADDR = (IPV4SEG\.){3,3}IPV4SEG IPV6SEG = [0-9a-fA-F]{1,4} IPV6ADDR = ( (IPV6SEG:){7,7}IPV6SEG| # 1:2:3:4:5:6:7:8 (IPV6SEG:){1,7}:| # 1:: 1:2:3:4:5:6:7:: (IPV6SEG:){1,6}:IPV6SEG| # 1::8 1:2:3:4:5:6::8 1:2:3:4:5:6::8 (IPV6SEG:){1,5}(:IPV6SEG){1,2}| # 1::7:8 1:2:3:4:5::7:8 1:2:3:4:5::8 (IPV6SEG:){1,4}(:IPV6SEG){1,3}| # 1::6:7:8 1:2:3:4::6:7:8 1:2:3:4::8 (IPV6SEG:){1,3}(:IPV6SEG){1,4}| # 1::5:6:7:8 1:2:3::5:6:7:8 1:2:3::8 (IPV6SEG:){1,2}(:IPV6SEG){1,5}| # 1::4:5:6:7:8 1:2::4:5:6:7:8 1:2::8 IPV6SEG:((:IPV6SEG){1,6})| # 1::3:4:5:6:7:8 1::3:4:5:6:7:8 1::8 :((:IPV6SEG){1,7}|:)| # ::2:3:4:5:6:7:8 ::2:3:4:5:6:7:8 ::8 :: fe80:(:IPV6SEG){0,4}%[0-9a-zA-Z]{1,}| # fe80::7:8%eth0 fe80::7:8%1 (link-local IPv6 addresses with zone index) ::(ffff(:0{1,4}){0,1}:){0,1}IPV4ADDR| # ::255.255.255.255 ::ffff:255.255.255.255 ::ffff:0:255.255.255.255 (IPv4-mapped IPv6 addresses and IPv4-translated addresses) (IPV6SEG:){1,4}:IPV4ADDR # 
2001:db8:3:4::192.0.2.33 64:ff9b::192.0.2.33 (IPv4-Embedded IPv6 Address) ) I posted a script on GitHub which tests the regular expression: https://gist.github.com/syzdek/6086792 A: From "IPv6 regex": (\A([0-9a-f]{1,4}:){1,1}(:[0-9a-f]{1,4}){1,6}\Z)| (\A([0-9a-f]{1,4}:){1,2}(:[0-9a-f]{1,4}){1,5}\Z)| (\A([0-9a-f]{1,4}:){1,3}(:[0-9a-f]{1,4}){1,4}\Z)| (\A([0-9a-f]{1,4}:){1,4}(:[0-9a-f]{1,4}){1,3}\Z)| (\A([0-9a-f]{1,4}:){1,5}(:[0-9a-f]{1,4}){1,2}\Z)| (\A([0-9a-f]{1,4}:){1,6}(:[0-9a-f]{1,4}){1,1}\Z)| (\A(([0-9a-f]{1,4}:){1,7}|:):\Z)| (\A:(:[0-9a-f]{1,4}){1,7}\Z)| (\A((([0-9a-f]{1,4}:){6})(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3})\Z)| (\A(([0-9a-f]{1,4}:){5}[0-9a-f]{1,4}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3})\Z)| (\A([0-9a-f]{1,4}:){5}:[0-9a-f]{1,4}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z)| (\A([0-9a-f]{1,4}:){1,1}(:[0-9a-f]{1,4}){1,4}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z)| (\A([0-9a-f]{1,4}:){1,2}(:[0-9a-f]{1,4}){1,3}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z)| (\A([0-9a-f]{1,4}:){1,3}(:[0-9a-f]{1,4}){1,2}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z)| (\A([0-9a-f]{1,4}:){1,4}(:[0-9a-f]{1,4}){1,1}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z)| (\A(([0-9a-f]{1,4}:){1,5}|:):(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z)| (\A:(:[0-9a-f]{1,4}){1,5}:(25[0-5]|2[0-4]\d|[0-1]?\d?\d)(\.(25[0-5]|2[0-4]\d|[0-1]?\d?\d)){3}\Z) A: It sounds like you may be using Python. If so, you can use something like this: import socket def check_ipv6(n): try: socket.inet_pton(socket.AF_INET6, n) return True except socket.error: return False print check_ipv6('::1') # True print check_ipv6('foo') # False print check_ipv6(5) # TypeError exception print check_ipv6(None) # TypeError exception I don't think you have to have IPv6 compiled in to Python to get inet_pton, which can also parse IPv4 addresses if you pass in socket.AF_INET as the first parameter. Note: this may not work on non-Unix systems. A: Looking at the patterns included in the other answers there are a number of good patterns that can be improved by referencing groups and utilizing lookaheads. Here is an example of a pattern that is self referencing that I would utilize in PHP if I had to: ^(?<hgroup>(?<hex>[[:xdigit:]]{0,4}) # grab a sequence of up to 4 hex digits # and name this pattern for usage later (?<!:::):{1,2}) # match 1 or 2 ':' characters # as long as we can't match 3 (?&hgroup){1,6} # match our hex group 1 to 6 more times (?:(?: # match an ipv4 address or (?<dgroup>2[0-5]|(?:2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3}(?&dgroup) # match our hex group one last time |(?&hex))$ Note: PHP has a built in filter for this which would be a better solution than this pattern. Regex101 Analysis A: In Scala use the well known Apache Commons validators. http://mvnrepository.com/artifact/commons-validator/commons-validator/1.4.1 libraryDependencies += "commons-validator" % "commons-validator" % "1.4.1" import org.apache.commons.validator.routines._ /** * Validates if the passed ip is a valid IPv4 or IPv6 address. * * @param ip The IP address to validate. * @return True if the passed IP address is valid, false otherwise. 
*/
def ip(ip: String) = InetAddressValidator.getInstance().isValid(ip)
Following are the tests of the method ip(ip: String):
"The `ip` validator" should {
    "return false if the IPv4 is invalid" in {
        ip("123") must beFalse
        ip("255.255.255.256") must beFalse
        ip("127.1") must beFalse
        ip("30.168.1.255.1") must beFalse
        ip("-1.2.3.4") must beFalse
    }
    "return true if the IPv4 is valid" in {
        ip("255.255.255.255") must beTrue
        ip("127.0.0.1") must beTrue
        ip("0.0.0.0") must beTrue
    }
    //IPv6
    //@see: http://www.ronnutter.com/ipv6-cheatsheet-on-identifying-valid-ipv6-addresses/
    "return false if the IPv6 is invalid" in {
        ip("1200::AB00:1234::2552:7777:1313") must beFalse
    }
    "return true if the IPv6 is valid" in {
        ip("1200:0000:AB00:1234:0000:2552:7777:1313") must beTrue
        ip("21DA:D3:0:2F3B:2AA:FF:FE28:9C5A") must beTrue
    }
}
A: Depending on your needs, an approximation like: [0-9a-f:]+ may be enough (as with simple log file grepping, for example).
A: I generated the following using Python, and it works with the re module. The look-ahead assertions ensure that the correct number of dots or colons appear in the address. It does not support IPv4 in IPv6 notation.
pattern = '^(?=\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$)(?:(?:25[0-5]|[12][0-4][0-9]|1[5-9][0-9]|[1-9]?[0-9])\.?){4}$|(?=^(?:[0-9a-f]{0,4}:){2,7}[0-9a-f]{0,4}$)(?![^:]*::.+::[^:]*$)(?:(?=.*::.*)|(?=\w+:\w+:\w+:\w+:\w+:\w+:\w+:\w+))(?:(?:^|:)(?:[0-9a-f]{4}|[1-9a-f][0-9a-f]{0,3})){0,8}(?:::(?:[0-9a-f]{1,4}(?:$|:)){0,6})?$'
result = re.match(pattern, ip)
if result: result.group(0)
A: This catches the loopback (::1) as well as ordinary IPv6 addresses. I changed {} to + and put : inside the first square bracket.
([a-f0-9:]+:+)+[a-f0-9]+
Tested against ifconfig -a output, on http://regexr.com/ and in a Unix or Mac OS X terminal. The -o option returns only the matching output (the IPv6 addresses), including ::1:
ifconfig -a | egrep -o '([a-f0-9:]+:+)+[a-f0-9]+'
Get all IP addresses (IPv4 or IPv6) and print the matches in a Unix or Mac OS X terminal:
ifconfig -a | egrep -o '([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}) | (([a-f0-9:]+:+)+[a-f0-9]+)'
A: I'd have to strongly second the answer from Frank Krueger. Whilst you say you need a regular expression to match an IPv6 address, I'm assuming what you really need is to be able to check if a given string is a valid IPv6 address. There is a subtle but important distinction here. There is more than one way to check if a given string is a valid IPv6 address and regular expression matching is only one solution. Use an existing library if you can. The library will have fewer bugs and its use will result in less code for you to maintain. The regular expression suggested by Factor Mystic is long and complex. It most likely works, but you should also consider how you'd cope if it unexpectedly fails. The point I'm trying to make here is that if you can't form a required regular expression yourself you won't be able to easily debug it. If you have no suitable library it may be better to write your own IPv6 validation routine that doesn't depend on regular expressions. If you write it you understand it and if you understand it you can add comments to explain it so that others can also understand and subsequently maintain it. Act with caution when using a regular expression whose functionality you can't explain to someone else.
A: In Java, you can use the library class sun.net.util.IPAddressUtil:
IPAddressUtil.isIPv6LiteralAddress(iPaddress);
A: Using Ruby?
Try this: /^(((?=.*(::))(?!.*\3.+\3))\3?|[\dA-F]{1,4}:)([\dA-F]{1,4}(\3|:\b)|\2){5}(([\dA-F]{1,4}(\3|:\b|$)|\2){2}|(((2[0-4]|1\d|[1-9])?\d|25[0-5])\.?\b){4})\z/i A: It is difficult to find a regular expression which works for all IPv6 cases. They are usually hard to maintain, not easily readable and may cause performance problems. Hence, I want to share an alternative solution which I have developed: Regular Expression (RegEx) for IPv6 Separate from IPv4 Now you may ask that "This method only finds IPv6, how can I find IPv6 in a text or file?" Here are methods for this issue too. Note: If you do not want to use IPAddress class in .NET, you can also replace it with my method. It also covers mapped IPv4 and special cases too, while IPAddress does not cover. class IPv6 { public List<string> FindIPv6InFile(string filePath) { Char ch; StringBuilder sbIPv6 = new StringBuilder(); List<string> listIPv6 = new List<string>(); StreamReader reader = new StreamReader(filePath); do { bool hasColon = false; int length = 0; do { ch = (char)reader.Read(); if (IsEscapeChar(ch)) break; //Check the first 5 chars, if it has colon, then continue appending to stringbuilder if (!hasColon && length < 5) { if (ch == ':') { hasColon = true; } sbIPv6.Append(ch.ToString()); } else if (hasColon) //if no colon in first 5 chars, then dont append to stringbuilder { sbIPv6.Append(ch.ToString()); } length++; } while (!reader.EndOfStream); if (hasColon && !listIPv6.Contains(sbIPv6.ToString()) && IsIPv6(sbIPv6.ToString())) { listIPv6.Add(sbIPv6.ToString()); } sbIPv6.Clear(); } while (!reader.EndOfStream); reader.Close(); reader.Dispose(); return listIPv6; } public List<string> FindIPv6InText(string text) { StringBuilder sbIPv6 = new StringBuilder(); List<string> listIPv6 = new List<string>(); for (int i = 0; i < text.Length; i++) { bool hasColon = false; int length = 0; do { if (IsEscapeChar(text[length + i])) break; //Check the first 5 chars, if it has colon, then continue appending to stringbuilder if (!hasColon && length < 5) { if (text[length + i] == ':') { hasColon = true; } sbIPv6.Append(text[length + i].ToString()); } else if (hasColon) //if no colon in first 5 chars, then dont append to stringbuilder { sbIPv6.Append(text[length + i].ToString()); } length++; } while (i + length != text.Length); if (hasColon && !listIPv6.Contains(sbIPv6.ToString()) && IsIPv6(sbIPv6.ToString())) { listIPv6.Add(sbIPv6.ToString()); } i += length; sbIPv6.Clear(); } return listIPv6; } bool IsEscapeChar(char ch) { if (ch != ' ' && ch != '\r' && ch != '\n' && ch!='\t') { return false; } return true; } bool IsIPv6(string maybeIPv6) { IPAddress ip; if (IPAddress.TryParse(maybeIPv6, out ip)) { return ip.AddressFamily == AddressFamily.InterNetworkV6; } else { return false; } } } A: InetAddressUtils has all the patterns defined. I ended-up using their pattern directly, and am pasting it here for reference: private static final String IPV4_BASIC_PATTERN_STRING = "(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}" + // initial 3 fields, 0-255 followed by . 
"([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])"; // final field, 0-255 private static final Pattern IPV4_PATTERN = Pattern.compile("^" + IPV4_BASIC_PATTERN_STRING + "$"); private static final Pattern IPV4_MAPPED_IPV6_PATTERN = // TODO does not allow for redundant leading zeros Pattern.compile("^::[fF]{4}:" + IPV4_BASIC_PATTERN_STRING + "$"); private static final Pattern IPV6_STD_PATTERN = Pattern.compile( "^[0-9a-fA-F]{1,4}(:[0-9a-fA-F]{1,4}){7}$"); private static final Pattern IPV6_HEX_COMPRESSED_PATTERN = Pattern.compile( "^(([0-9A-Fa-f]{1,4}(:[0-9A-Fa-f]{1,4}){0,5})?)" + // 0-6 hex fields "::" + "(([0-9A-Fa-f]{1,4}(:[0-9A-Fa-f]{1,4}){0,5})?)$"); // 0-6 hex fields A: For PHP 5.2+ users filter_var works great. I know this doesn't answer the original question (specifically a regex solution), but I post this in the hope it may help someone else in the future. $is_ip4address = (filter_var($ip, FILTER_VALIDATE_IP, FILTER_FLAG_IPV4) !== FALSE); $is_ip6address = (filter_var($ip, FILTER_VALIDATE_IP, FILTER_FLAG_IPV6) !== FALSE); A: Here's what I came up with, using a bit of lookahead and named groups. This is of course just IPv6, but it shouldn't interfere with additional patterns if you want to add IPv4: (?=([0-9a-f]+(:[0-9a-f])*)?(?P<wild>::)(?!([0-9a-f]+:)*:))(::)?([0-9a-f]{1,4}:{1,2}){0,6}(?(wild)[0-9a-f]{0,4}|[0-9a-f]{1,4}:[0-9a-f]{1,4}) A: Just matching local ones from an origin with square brackets included. I know it's not as comprehensive but in javascript the other ones had difficult to trace issues primarily that of not working, so this seems to get me what I needed for now. extra capitals A-F aren't needed either. ^\[([0-9a-fA-F]{1,4})(\:{1,2})([0-9a-fA-F]{1,4})(\:{1,2})([0-9a-fA-F]{1,4})(\:{1,2})([0-9a-fA-F]{1,4})(\:{1,2})([0-9a-fA-F]{1,4})\] Jinnko's version is simplified and better I see. A: As stated above, another way to get an IPv6 textual representation validating parser is to use programming. Here is one that is fully compliant with RFC-4291 and RFC-5952. I've written this code in ANSI C (works with GCC, passed tests on Linux - works with clang, passed tests on FreeBSD). Thus, it does only rely on the ANSI C standard library, so it can be compiled everywhere (I've used it for IPv6 parsing inside a kernel module with FreeBSD). // IPv6 textual representation validating parser fully compliant with RFC-4291 and RFC-5952 // BSD-licensed / Copyright 2015-2017 Alexandre Fenyo #include <string.h> #include <netinet/in.h> #include <stdlib.h> #include <stdio.h> #include <ctype.h> typedef enum { false, true } bool; static const char hexdigits[] = "0123456789abcdef"; static int digit2int(const char digit) { return strchr(hexdigits, digit) - hexdigits; } // This IPv6 address parser handles any valid textual representation according to RFC-4291 and RFC-5952. // Other representations will return -1. 
// // note that str input parameter has been modified when the function call returns // // parse_ipv6(char *str, struct in6_addr *retaddr) // parse textual representation of IPv6 addresses // str: input arg // retaddr: output arg int parse_ipv6(char *str, struct in6_addr *retaddr) { bool compressed_field_found = false; unsigned char *_retaddr = (unsigned char *) retaddr; char *_str = str; char *delim; bzero((void *) retaddr, sizeof(struct in6_addr)); if (!strlen(str) || strchr(str, ':') == NULL || (str[0] == ':' && str[1] != ':') || (strlen(str) >= 2 && str[strlen(str) - 1] == ':' && str[strlen(str) - 2] != ':')) return -1; // convert transitional to standard textual representation if (strchr(str, '.')) { int ipv4bytes[4]; char *curp = strrchr(str, ':'); if (curp == NULL) return -1; char *_curp = ++curp; int i; for (i = 0; i < 4; i++) { char *nextsep = strchr(_curp, '.'); if (_curp[0] == '0' || (i < 3 && nextsep == NULL) || (i == 3 && nextsep != NULL)) return -1; if (nextsep != NULL) *nextsep = 0; int j; for (j = 0; j < strlen(_curp); j++) if (_curp[j] < '0' || _curp[j] > '9') return -1; if (strlen(_curp) > 3) return -1; const long val = strtol(_curp, NULL, 10); if (val < 0 || val > 255) return -1; ipv4bytes[i] = val; _curp = nextsep + 1; } sprintf(curp, "%x%02x:%x%02x", ipv4bytes[0], ipv4bytes[1], ipv4bytes[2], ipv4bytes[3]); } // parse standard textual representation do { if ((delim = strchr(_str, ':')) == _str || (delim == NULL && !strlen(_str))) { if (delim == str) _str++; else if (delim == NULL) return 0; else { if (compressed_field_found == true) return -1; if (delim == str + strlen(str) - 1 && _retaddr != (unsigned char *) (retaddr + 1)) return 0; compressed_field_found = true; _str++; int cnt = 0; char *__str; for (__str = _str; *__str; ) if (*(__str++) == ':') cnt++; unsigned char *__retaddr = - 2 * ++cnt + (unsigned char *) (retaddr + 1); if (__retaddr <= _retaddr) return -1; _retaddr = __retaddr; } } else { char hexnum[4] = "0000"; if (delim == NULL) delim = str + strlen(str); if (delim - _str > 4) return -1; int i; for (i = 0; i < delim - _str; i++) if (!isxdigit(_str[i])) return -1; else hexnum[4 - (delim - _str) + i] = tolower(_str[i]); _str = delim + 1; *(_retaddr++) = (digit2int(hexnum[0]) << 4) + digit2int(hexnum[1]); *(_retaddr++) = (digit2int(hexnum[2]) << 4) + digit2int(hexnum[3]); } } while (_str < str + strlen(str)); return 0; } A: If you want only normal IP-s (no slashes), here: ^(?:[0-9a-f]{1,4}(?:::)?){0,7}::[0-9a-f]+$ I use it for my syntax highlighter in hosts file editor application. Works as charm. A: The regex allows the use of leading zeros in the IPv4 parts. Some Unix and Mac distros convert those segments into octals. I suggest using 25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d as an IPv4 segment. A: This will work for IPv4 and IPv6: ^(([0-9a-f]{0,4}:){1,7}[0-9a-f]{1,4}|([0-9]{1,3}\.){3}[0-9]{1,3})$ A: You can use the ipextract shell tools I made for this purpose. They are based on regexp and grep. Usage: $ ifconfig | ipextract6 fe80::1%lo0 ::1 fe80::7ed1:c3ff:feec:dee1%en0 A: Try this small one-liner. It should only match valid uncompressed/compressed IPv6 addresses (no IPv4 hybrids) /(?!.*::.*::)(?!.*:::.*)(?!:[a-f0-9])((([a-f0-9]{1,4})?[:](?!:)){7}|(?=(.*:[:a-f0-9]{1,4}::|^([:a-f0-9]{1,4})?::))(([a-f0-9]{1,4})?[:]{1,2}){1,6})[a-f0-9]{1,4}/
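Echoing the "use a library if you can" advice given in several answers above: in Python 3.3+ the standard library ipaddress module validates every standard textual form, so a regex is often unnecessary. A minimal sketch (note that zone-index forms such as fe80::1%eth0 are only accepted from Python 3.9 onwards):

import ipaddress

def is_ipv6(s):
    """Return True if s is a valid IPv6 address, compressed forms included."""
    try:
        return isinstance(ipaddress.ip_address(s), ipaddress.IPv6Address)
    except ValueError:
        return False

print(is_ipv6('::1'))                    # True
print(is_ipv6('::ffff:192.0.2.33'))      # True (IPv4-mapped form)
print(is_ipv6('1200::AB00:1234::2552'))  # False (two '::' groups)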
{ "language": "en", "url": "https://stackoverflow.com/questions/53497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "111" }
Q: Is it possible to determine which process starts my .Net application? I am developing a console application in .NET and I want to change its behavior a little based on whether the application was started from cmd.exe or from explorer.exe. Is it possible?
A:
Process this_process = Process.GetCurrentProcess();
int parent_pid = 0;
using (ManagementObject MgmtObj = new ManagementObject("win32_process.handle='" + this_process.Id.ToString() + "'"))
{
    MgmtObj.Get();
    parent_pid = Convert.ToInt32(MgmtObj["ParentProcessId"]);
}
string parent_process_name = Process.GetProcessById(parent_pid).ProcessName;
A: The CreateToolhelp32Snapshot function has a companion Process32First function that will allow you to read a PROCESSENTRY32 structure. The structure has a property that will get you the information you want: th32ParentProcessID - The identifier of the process that created this process (its parent process). This article will help you get started using the ToolHelpSnapshot function: http://www.codeproject.com/KB/cs/IsApplicationRunning.aspx
A: One issue with the ToolHelp/ManagementObject approaches is that the parent process could already have exited. The GetStartupInfo Win32 function (use PInvoke if there's no .NET equivalent) fills in a structure that includes the window title. For a Win32 console application "app.exe", this title string is "app" when started from cmd and "c:\full\path\to\app.exe" when started from explorer (or the VS debugger). Of course this is a hack (subject to change in other versions, etc.).
#define WIN32_LEAN_AND_MEAN
#include <windows.h>

int main()
{
    STARTUPINFO si;
    GetStartupInfo(&si);
    MessageBox(NULL, si.lpTitle, NULL, MB_OK);
    return 0;
}
{ "language": "en", "url": "https://stackoverflow.com/questions/53501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Plain text passwords in Ruby on Rails using Restful_Authentication If I use restful_authentication in my Ruby on Rails app, are passwords transferred between the browser and the server in plaintext? And if so, how worried should I be about it?
A: Well, you need to be worried if you are not hosting it over HTTPS... because if it is straight HTTP, then yes, it is being passed in plaintext. I don't know Rails specifically, but I'm pretty sure the framework has nothing to do with it; what matters is whether you host it via HTTP or HTTPS. Edit: I found this link which apparently provides an example Rails app over HTTPS. Further Edit: Here is another link discussing HTTPS with Rails.
A: If you are concerned about the privacy of passwords (and you should be), you will also want to add this line to your ApplicationController:
filter_parameter_logging :password
Otherwise the passwords will end up in plain text in your log files.
A: Authentication without sending the plain text password can be done with CHAP-style protocols. Is that possible over HTTP? I'm asking because I think it would need some state on the server to foil replay attacks - and state on the server is something to be eliminated in RESTful architectures, right?
A: You should definitely have a look at the bcrypt gem. It will hash all passwords for you.
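To make the bcrypt suggestion concrete, here is a minimal Ruby sketch using the bcrypt-ruby gem. Note this protects passwords at rest (in your database); it does not replace HTTPS, which is what protects them in transit:

require 'bcrypt'

# On signup: store only the salted hash, never the plain text.
hashed = BCrypt::Password.create('s3cret')  # => "$2a$10$..."

# On login: compare the submitted password against the stored hash.
stored = BCrypt::Password.new(hashed)
puts stored == 's3cret'  # true
puts stored == 'wrong'   # false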
{ "language": "en", "url": "https://stackoverflow.com/questions/53511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How do I check if a list is empty? For example, if passed the following: a = [] How do I check to see if a is empty?
A: Simply use is_empty(), making a function like this:
def is_empty(any_structure):
    if any_structure:
        print('Structure is not empty.')
        return False
    else:
        print('Structure is empty.')
        return True
It can be used for any data structure, such as lists, tuples, and dictionaries. You can call it as often as needed with just is_empty(any_structure).
A: A simple way is to check whether the length equals zero:
if len(a) == 0:
    print("a is empty")
A: len() is an O(1) operation for Python lists, strings, dicts, and sets. Python internally keeps track of the number of elements in these containers. JavaScript has a similar notion of truthy/falsy.
A: From Python 3 onwards you can use a == [] to check if the list is empty. EDIT: This works with Python 2.7 too. I am not sure why there are so many complicated answers; it's pretty clear and straightforward.
A:
if not a:
    print("List is empty")
Using the implicit booleanness of the empty list is quite Pythonic.
A: The truth value of an empty list is False, whereas for a non-empty list it is True.
A: What brought me here is a special use-case: I actually wanted a function to tell me if a list is empty or not. I wanted to avoid writing my own function or using a lambda-expression here (because it seemed like it should be simple enough):
foo = itertools.takewhile(is_not_empty, (f(x) for x in itertools.count(1)))
And, of course, there is a very natural way to do it:
foo = itertools.takewhile(bool, (f(x) for x in itertools.count(1)))
Of course, do not use bool in if (i.e., if bool(L):) because it's implied. But, for the cases when "is not empty" is explicitly needed as a function, bool is the best choice.
A: I had written:
if isinstance(a, (list, some, other, types, i, accept)) and not a:
    do_stuff
which was voted -1. I'm not sure if that's because readers objected to the strategy or thought the answer wasn't helpful as presented. I'll pretend it was the latter, since---whatever counts as "pythonic"---this is the correct strategy. Unless you've already ruled out, or are prepared to handle cases where a is, for example, False, you need a test more restrictive than just if not a:. You could use something like this:
if isinstance(a, numpy.ndarray) and not a.size:
    do_stuff
elif isinstance(a, collections.Sized) and not a:
    do_stuff
The first test is in response to @Mike's answer, above. The third line could also be replaced with:
elif isinstance(a, (list, tuple)) and not a:
if you only want to accept instances of particular types (and their subtypes), or with:
elif isinstance(a, (list, tuple)) and not len(a):
You can get away without the explicit type check, but only if the surrounding context already assures you that a is a value of the types you're prepared to handle, or if you're sure that types you're not prepared to handle are going to raise errors (e.g., a TypeError if you call len on a value for which it's undefined) that you're prepared to handle. In general, the "pythonic" conventions seem to go this last way. Squeeze it like a duck and let it raise a DuckError if it doesn't know how to quack. You still have to think about what type assumptions you're making, though, and whether the cases you're not prepared to handle properly really are going to error out in the right places. The Numpy arrays are a good example where just blindly relying on len or the boolean typecast may not do precisely what you're expecting.
A: This is the first google hit for "python test empty array" and similar queries, and other people are generalizing the question beyond just lists, so here's a caveat for a different type of sequence that a lot of people use. Other methods don't work for NumPy arrays You need to be careful with NumPy arrays, because other methods that work fine for lists or other standard containers fail for NumPy arrays. I explain why below, but in short, the preferred method is to use size. The "pythonic" way doesn't work: Part 1 The "pythonic" way fails with NumPy arrays because NumPy tries to cast the array to an array of bools, and if x tries to evaluate all of those bools at once for some kind of aggregate truth value. But this doesn't make any sense, so you get a ValueError: >>> x = numpy.array([0,1]) >>> if x: print("x") ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() The "pythonic" way doesn't work: Part 2 But at least the case above tells you that it failed. If you happen to have a NumPy array with exactly one element, the if statement will "work", in the sense that you don't get an error. However, if that one element happens to be 0 (or 0.0, or False, ...), the if statement will incorrectly result in False: >>> x = numpy.array([0,]) >>> if x: print("x") ... else: print("No x") No x But clearly x exists and is not empty! This result is not what you wanted. Using len can give unexpected results For example, len( numpy.zeros((1,0)) ) returns 1, even though the array has zero elements. The numpythonic way As explained in the SciPy FAQ, the correct method in all cases where you know you have a NumPy array is to use if x.size: >>> x = numpy.array([0,1]) >>> if x.size: print("x") x >>> x = numpy.array([0,]) >>> if x.size: print("x") ... else: print("No x") x >>> x = numpy.zeros((1,0)) >>> if x.size: print("x") ... else: print("No x") No x If you're not sure whether it might be a list, a NumPy array, or something else, you could combine this approach with the answer @dubiousjim gives to make sure the right test is used for each type. Not very "pythonic", but it turns out that NumPy intentionally broke pythonicity in at least this sense. If you need to do more than just check if the input is empty, and you're using other NumPy features like indexing or math operations, it's probably more efficient (and certainly more common) to force the input to be a NumPy array. There are a few nice functions for doing this quickly — most importantly numpy.asarray. This takes your input, does nothing if it's already an array, or wraps your input into an array if it's a list, tuple, etc., and optionally converts it to your chosen dtype. So it's very quick whenever it can be, and it ensures that you just get to assume the input is a NumPy array. We usually even just use the same name, as the conversion to an array won't make it back outside of the current scope: x = numpy.asarray(x, dtype=numpy.double) This will make the x.size check work in all cases I see on this page. A: I prefer the following: if a == []: print "The list is empty." A: From documentation on truth value testing: All values other than what is listed here are considered True * *None *False *zero of any numeric type, for example, 0, 0.0, 0j. *any empty sequence, for example, '', (), []. *any empty mapping, for example, {}. *instances of user-defined classes, if the class defines a __bool__() or __len__() method, when that method returns the integer zero or bool value False. 
As can be seen, the empty list [] is falsy, so doing what would be done to a boolean value sounds most efficient:
if not a:
    print('"a" is empty!')
A: Here are a few ways you can check if a list is empty:
a = []  # the list
1) The pretty simple pythonic way:
if not a:
    print("a is empty")
In Python, empty containers such as lists, tuples, sets, and dicts are seen as False. One could simply treat the list as a predicate (returning a Boolean value). A True value would indicate that it's non-empty.
2) A more explicit way: using len() to find the length and checking whether it equals 0:
if len(a) == 0:
    print("a is empty")
3) Or comparing it to an anonymous empty list:
if a == []:
    print("a is empty")
4) Yet another (silly) way is to use iter() and an exception:
try:
    next(iter(a))
    # list has elements
except StopIteration:
    print("Error: a is empty")
A: Best way to check if a list is empty For example, if passed the following: a = [] How do I check to see if a is empty? Short Answer: Place the list in a boolean context (for example, with an if or while statement). It will test False if it is empty, and True otherwise. For example:
if not a:  # do this!
    print('a is an empty list')
PEP 8
PEP 8, the official Python style guide for Python code in Python's standard library, asserts: For sequences, (strings, lists, tuples), use the fact that empty sequences are false.
Yes: if not seq:
     if seq:
No:  if len(seq):
     if not len(seq):
We should expect that standard library code should be as performant and correct as possible. But why is that the case, and why do we need this guidance?
Explanation
I frequently see code like this from experienced programmers new to Python:
if len(a) == 0:  # Don't do this!
    print('a is an empty list')
And users of lazy languages may be tempted to do this:
if a == []:  # Don't do this!
    print('a is an empty list')
These are correct in their respective other languages. And this is even semantically correct in Python. But we consider it un-Pythonic because Python supports these semantics directly in the list object's interface via boolean coercion. From the docs (and note specifically the inclusion of the empty list, []): By default, an object is considered true unless its class defines either a __bool__() method that returns False or a __len__() method that returns zero, when called with the object. Here are most of the built-in objects considered false:
* constants defined to be false: None and False.
* zero of any numeric type: 0, 0.0, 0j, Decimal(0), Fraction(0, 1)
* empty sequences and collections: '', (), [], {}, set(), range(0)
And the datamodel documentation:
object.__bool__(self) Called to implement truth value testing and the built-in operation bool(); should return False or True. When this method is not defined, __len__() is called, if it is defined, and the object is considered true if its result is nonzero. If a class defines neither __len__() nor __bool__(), all its instances are considered true.
and
object.__len__(self) Called to implement the built-in function len(). Should return the length of the object, an integer >= 0. Also, an object that doesn't define a __bool__() method and whose __len__() method returns zero is considered to be false in a Boolean context.
So instead of this:
if len(a) == 0:  # Don't do this!
    print('a is an empty list')
or this:
if a == []:  # Don't do this!
    print('a is an empty list')
Do this:
if not a:
    print('a is an empty list')
Doing what's Pythonic usually pays off in performance: Does it pay off?
(Note that less time to perform an equivalent operation is better:) >>> import timeit >>> min(timeit.repeat(lambda: len([]) == 0, repeat=100)) 0.13775854044661884 >>> min(timeit.repeat(lambda: [] == [], repeat=100)) 0.0984637276455409 >>> min(timeit.repeat(lambda: not [], repeat=100)) 0.07878462291455435 For scale, here's the cost of calling the function and constructing and returning an empty list, which you might subtract from the costs of the emptiness checks used above: >>> min(timeit.repeat(lambda: [], repeat=100)) 0.07074015751817342 We see that either checking for length with the builtin function len compared to 0 or checking against an empty list is much less performant than using the builtin syntax of the language as documented. Why? For the len(a) == 0 check: First Python has to check the globals to see if len is shadowed. Then it must call the function, load 0, and do the equality comparison in Python (instead of with C): >>> import dis >>> dis.dis(lambda: len([]) == 0) 1 0 LOAD_GLOBAL 0 (len) 2 BUILD_LIST 0 4 CALL_FUNCTION 1 6 LOAD_CONST 1 (0) 8 COMPARE_OP 2 (==) 10 RETURN_VALUE And for the [] == [] it has to build an unnecessary list and then, again, do the comparison operation in Python's virtual machine (as opposed to C) >>> dis.dis(lambda: [] == []) 1 0 BUILD_LIST 0 2 BUILD_LIST 0 4 COMPARE_OP 2 (==) 6 RETURN_VALUE The "Pythonic" way is a much simpler and faster check since the length of the list is cached in the object instance header: >>> dis.dis(lambda: not []) 1 0 BUILD_LIST 0 2 UNARY_NOT 4 RETURN_VALUE Evidence from the C source and documentation PyVarObject This is an extension of PyObject that adds the ob_size field. This is only used for objects that have some notion of length. This type does not often appear in the Python/C API. It corresponds to the fields defined by the expansion of the PyObject_VAR_HEAD macro. From the c source in Include/listobject.h: typedef struct { PyObject_VAR_HEAD /* Vector of pointers to list elements. list[0] is ob_item[0], etc. */ PyObject **ob_item; /* ob_item contains space for 'allocated' elements. The number * currently in use is ob_size. * Invariants: * 0 <= ob_size <= allocated * len(list) == ob_size Response to comments: I would point out that this is also true for the non-empty case though its pretty ugly as with l=[] then %timeit len(l) != 0 90.6 ns ± 8.3 ns, %timeit l != [] 55.6 ns ± 3.09, %timeit not not l 38.5 ns ± 0.372. But there is no way anyone is going to enjoy not not l despite triple the speed. It looks ridiculous. But the speed wins out I suppose the problem is testing with timeit since just if l: is sufficient but surprisingly %timeit bool(l) yields 101 ns ± 2.64 ns. Interesting there is no way to coerce to bool without this penalty. %timeit l is useless since no conversion would occur. IPython magic, %timeit, is not entirely useless here: In [1]: l = [] In [2]: %timeit l 20 ns ± 0.155 ns per loop (mean ± std. dev. of 7 runs, 100000000 loops each) In [3]: %timeit not l 24.4 ns ± 1.58 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each) In [4]: %timeit not not l 30.1 ns ± 2.16 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each) We can see there's a bit of linear cost for each additional not here. We want to see the costs, ceteris paribus, that is, all else equal - where all else is minimized as far as possible: In [5]: %timeit if l: pass 22.6 ns ± 0.963 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each) In [6]: %timeit if not l: pass 24.4 ns ± 0.796 ns per loop (mean ± std. dev. 
of 7 runs, 10000000 loops each)
In [7]: %timeit if not not l: pass
23.4 ns ± 0.793 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
Now let's look at the case for a non-empty list:
In [8]: l = [1]
In [9]: %timeit if l: pass
23.7 ns ± 1.06 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
In [10]: %timeit if not l: pass
23.6 ns ± 1.64 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
In [11]: %timeit if not not l: pass
26.3 ns ± 1 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
What we can see here is that it makes little difference whether you pass in an actual bool to the condition check or the list itself, and if anything, giving the list, as is, is faster. Python is written in C; it uses its logic at the C level. Anything you write in Python will be slower. And it will likely be orders of magnitude slower unless you're using the mechanisms built into Python directly.
A: Method 1 (preferred):
if not a:
    print("Empty")
Method 2:
if len(a) == 0:
    print("Empty")
Method 3:
if a == []:
    print("Empty")
A: You can even try using bool() like this. Although it is less readable, it is surely a concise way to do it.
a = [1,2,3];
print bool(a);  # it will return True
a = [];
print bool(a);  # it will return False
I love this way of checking whether the list is empty or not. Very handy and useful.
A:
def list_test(L):
    if L is None:
        print('list is None')
    elif not L:
        print('list is empty')
    else:
        print('list has %d elements' % len(L))

list_test(None)
list_test([])
list_test([1,2,3])
It is sometimes good to test for None and for emptiness separately, as those are two different states. The code above produces the following output:
list is None
list is empty
list has 3 elements
Although it's worth noting that None is falsy. So if you don't want a separate test for None-ness, you don't have to do that.
def list_test2(L):
    if not L:
        print('list is empty')
    else:
        print('list has %d elements' % len(L))

list_test2(None)
list_test2([])
list_test2([1,2,3])
produces the expected
list is empty
list is empty
list has 3 elements
A: To check whether a list is empty or not you can use the following two ways. But remember, we should avoid explicitly checking for a type of sequence (it's the less Pythonic way):
def enquiry(list1):
    return len(list1) == 0

list1 = []
if enquiry(list1):
    print("The list is Empty")
else:
    print("The list isn't empty")
# Result: "The list is Empty".
The second way is more Pythonic. This method is an implicit way of checking and much more preferable than the previous one.
def enquiry(list1):
    return not list1

list1 = []
if enquiry(list1):
    print("The list is Empty")
else:
    print("The list isn't empty")
# Result: "The list is Empty"
A: An empty list is itself considered false in true value testing (see python documentation):
a = []
if a:
    print("not empty")
To Daren Thomas's answer: EDIT: Another point against testing the empty list as False: What about polymorphism? You shouldn't depend on a list being a list. It should just quack like a duck - how are you going to get your duckCollection to quack ''False'' when it has no elements? Your duckCollection should implement __nonzero__ or __len__ so the if a: will work without problems.
A: If you want to check if a list is empty:
l = []
if l:
    # do your stuff.
If you want to check that every value in the list is truthy (note this is vacuously True for an empty list):
l = ["", False, 0, '', [], {}, ()]
if all(bool(x) for x in l):
    # do your stuff.
If you want to use both cases together:
def empty_list(lst):
    if len(lst) == 0:
        return False
    else:
        return all(bool(x) for x in lst)
Now you can use:
if empty_list(lst):
    # do your stuff.
A: Many answers have been given, and a lot of them are pretty good. I just wanted to add that the check not a will also pass for None and other types of empty structures. If you truly want to check for an empty list, you can do this:
if isinstance(a, list) and len(a) == 0:
    print("Received an empty list")
A: The Pythonic way to do it is from the PEP 8 style guide. For sequences, (strings, lists, tuples), use the fact that empty sequences are false:
# Correct:
if not seq:
if seq:
# Wrong:
if len(seq):
if not len(seq):
A: print('not empty' if a else 'empty')
a little more practical:
a.pop() if a else None
and the shortest version:
if a: a.pop()
A: Patrick's (accepted) answer is right: if not a: is the right way to do it. Harley Holcombe's answer is right that this is in the PEP 8 style guide. But what none of the answers explain is why it's a good idea to follow the idiom—even if you personally find it's not explicit enough or confusing to Ruby users or whatever. Python code, and the Python community, has very strong idioms. Following those idioms makes your code easier to read for anyone experienced in Python. And when you violate those idioms, that's a strong signal. It's true that if not a: doesn't distinguish empty lists from None, or numeric 0, or empty tuples, or empty user-created collection types, or empty user-created not-quite-collection types, or single-element NumPy array acting as scalars with falsey values, etc. And sometimes it's important to be explicit about that. And in that case, you know what you want to be explicit about, so you can test for exactly that. For example, if not a and a is not None: means "anything falsey except None", while if len(a) != 0: means "only empty sequences—and anything besides a sequence is an error here", and so on. Besides testing for exactly what you want to test, this also signals to the reader that this test is important. But when you don't have anything to be explicit about, anything other than if not a: is misleading the reader. You're signaling something as important when it isn't. (You may also be making the code less flexible, or slower, or whatever, but that's all less important.) And if you habitually mislead the reader like this, then when you do need to make a distinction, it's going to pass unnoticed because you've been "crying wolf" all over your code.
A: Being inspired by dubiousjim's solution, I propose to use an additional general check of whether it is something iterable:
import collections
def is_empty(a):
    return not a and isinstance(a, collections.Iterable)
Note: a string is considered to be iterable—add and not isinstance(a, (str, unicode)) if you want the empty string to be excluded. Test:
>>> is_empty('sss')
False
>>> is_empty(555)
False
>>> is_empty(0)
False
>>> is_empty('')
True
>>> is_empty([3])
False
>>> is_empty([])
True
>>> is_empty({})
True
>>> is_empty(())
True
A: We could use a simple if else:
item_list = []
if len(item_list) == 0:
    print("list is empty")
else:
    print("list is not empty")
A: I prefer it explicitly:
if len(li) == 0:
    print('the list is empty')
This way it's 100% clear that li is a sequence (list) and we want to test its size. My problem with if not li: ... is that it gives the false impression that li is a boolean variable.
A: Why check at all?
No one seems to have questioned your need to test the list in the first place. Because you provided no additional context, I can imagine that you may not need to do this check at all, but are unfamiliar with list processing in Python. I would argue that the most Pythonic way is to not check at all, but rather to just process the list. That way it will do the right thing whether empty or full. a = [] for item in a: # <Do something with item> # <The rest of code> This has the benefit of handling any contents of a, while not requiring a specific check for emptiness. If a is empty, the dependent block will not execute and the interpreter will fall through to the next line. If you do actually need to check the list for emptiness: a = [] if not a: # <React to empty list> # <The rest of code> is sufficient.
{ "language": "en", "url": "https://stackoverflow.com/questions/53513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3229" }
Q: How can you unit test a database in SQL Server? What are some of the ways? What frameworks can you use? A: Well, I presume you mean unit testing your code that hits the database, in which case, there is NDbUnit, which appears to be a DbUnit clone for .NET. I've never used it, but I have used DbUnit, and it is quite useful. Edit: I assumed you were using .NET because you mentioned MSSQL. If however you are actually using Java, there is DbUnit. A: Here's a project which does this: http://tsqlunit.sourceforge.net/ Also, Visual Studio Team System for DBAs has built-in support for unit testing of databases. A: T.S.T. the T-SQL Test Tool TST is a tool that simplifies the task of writing and running test automation for code written in T-SQL. At the heart of the TST tool is the TST database. This database contains a series of stored procedures that are exposed as a test API. Part of this API is similar to those found in unit testing libraries familiar to programmers in C# or Java.
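Whatever framework you choose, most of them build on the same transaction-rollback trick: run your setup, the code under test, and the assertions inside one transaction that is never committed, so the database is left untouched. Here is a minimal sketch of that pattern in C# with plain ADO.NET and NUnit - the connection string and the dbo.Customer table are made-up placeholders, not part of any of the tools above.

using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class CustomerTableTests
{
    // Hypothetical connection string - point it at a dedicated test database.
    private const string ConnStr =
        "Server=localhost;Database=MyAppTest;Integrated Security=SSPI";

    [Test]
    public void Insert_IncreasesRowCount()
    {
        using (var conn = new SqlConnection(ConnStr))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                int before = (int)new SqlCommand(
                    "SELECT COUNT(*) FROM dbo.Customer", conn, tx).ExecuteScalar();

                new SqlCommand(
                    "INSERT INTO dbo.Customer (Name) VALUES ('test')", conn, tx)
                    .ExecuteNonQuery();

                int after = (int)new SqlCommand(
                    "SELECT COUNT(*) FROM dbo.Customer", conn, tx).ExecuteScalar();

                Assert.AreEqual(before + 1, after);

                // Never committed - the rollback leaves the database as it was.
                tx.Rollback();
            }
        }
    }
}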
{ "language": "en", "url": "https://stackoverflow.com/questions/53527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Unit-testing servlets I have a bunch of servlets running under the Tomcat servlet container. I would like to separate test code from production code, so I considered using a test framework. JUnit is nicely integrated into Eclipse, but I failed to make it run servlets using a running Tomcat server. Could you please recommend a unit testing framework that supports testing Tomcat servlets? Eclipse integration is nice but not necessary. A: Check out ServletUnit, which is part of HttpUnit. In a nutshell, ServletUnit provides a library of mocks and utilities you can use in ordinary JUnit tests to mock out a servlet container and other servlet-related objects like request and response objects. The link above contains examples. A: The Spring Framework has nice ready-made mock objects for several classes out of the Servlet API: http://static.springframework.org/spring/docs/2.5.x/api/org/springframework/mock/web/package-summary.html A: Okay. Ignoring the 'tomcat' bit and coding to the servlet, your best bet is to create mocks for the response and request objects, and then tell them what you expect out of them. So for a standard empty doPost, and using EasyMock, you'll have public void testPost() { mockRequest = createMock(HttpServletRequest.class); mockResponse = createMock(HttpServletResponse.class); replay(mockRequest, mockResponse); myServlet.doPost(mockRequest, mockResponse); verify(mockRequest, mockResponse); } Then start adding code to the doPost. The mocks will fail because they have no expectations, and then you can set up the expectations from there. Note that if you want to use EasyMock with classes, you'll have to use the EasyMock class extension library. But it'll work the same way from then on. A: Separate the parts of the code that deal with HTTP requests and responses from the parts that do business logic or database manipulation. In most cases this will produce a three-tier architecture, with a data layer (for the database/persistence), service layer (for the business logic) and a presentation layer (for the HTTP requests and responses). * *You can unit test the first two layers without any servlet stuff at all; it will be easier to test that way. *You can test the presentation layer, as others suggest, using mock HTTP request and response objects. *Finally, if you feel it really is necessary, you can do integration tests using a tool such as HtmlUnit or JWebUnit. A: For "in-container" testing, have a look at Cactus. If you want to be able to test without a running container you can either simulate its components with your own mock objects (e.g. with EasyMock) or you could try MockRunner, which has "pre-defined" stubs for testing servlets, JDBC connections etc. A: Updated Feb 2018: OpenBrace Limited has closed down, and its ObMimic product is no longer supported. If you want a newer alternative to ServletUnit for JUnit testing of servlets, you might find my company's ObMimic library useful. It's available for free from the website's downloads page. As with ServletUnit, it provides a library of classes that you can use in normal JUnit or TestNG tests outside of any servlet container to simulate the Servlet API. Its Servlet API objects have no-argument constructors, are fully configurable and inspectable for all relevant Servlet API data and settings, and provide a complete simulation of all of the behaviour specified by the Servlet API's javadoc.
To help with testing there's support for selective recording of Servlet API calls, control over any container-dependent behaviour, checks for any ambiguous calls (i.e. where the Servlet API behaviour isn't fully defined), and an in-memory JNDI simulation for any servlet code that relies on JNDI lookups. For full details, example code, "how to" guides, Javadoc etc., see the website.
{ "language": "en", "url": "https://stackoverflow.com/questions/53532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: SQL Server Freetext match - how do I sort by relevance Is it possible to order results in SQL Server 2005 by the relevance of a freetext match? In MySQL you can use the (roughly equivalent) MATCH function in the ORDER BY section, but I haven't found any equivalent in SQL Server. From the MySQL docs: For each row in the table, MATCH() returns a relevance value; that is, a similarity measure between the search string and the text in that row in the columns named in the MATCH() list. So for example you could order by the number of votes, then this relevance, and finally by a creation date. Is this something that can be done, or am I stuck with just returning the matching values and not having this ordering ability? A: If you are using FREETEXTTABLE then it returns a column named RANK, so ORDER BY RANK should work. I don't know whether the other freetext search methods also return this value; you can give it a try. A: Both FREETEXTTABLE and CONTAINSTABLE will return the [RANK] column, but make sure you are using either the correct variation or union both of them to get all appropriate results.
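To make the FREETEXTTABLE answers concrete, here is a small hedged C# sketch; the Documents table, its columns and the connection string are invented for illustration, while the [KEY] and [RANK] columns are what FREETEXTTABLE itself returns.

using System;
using System.Data.SqlClient;

class FreetextSearch
{
    static void Main()
    {
        const string sql =
            "SELECT d.Title, ft.[RANK] " +
            "FROM FREETEXTTABLE(Documents, Body, @search) AS ft " +
            "JOIN Documents AS d ON d.DocumentId = ft.[KEY] " +
            "ORDER BY ft.[RANK] DESC";

        using (var conn = new SqlConnection("Server=.;Database=Docs;Integrated Security=SSPI"))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@search", "relevance ordering");
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // A higher RANK means a better match.
                    Console.WriteLine("{0,5}  {1}", reader.GetInt32(1), reader.GetString(0));
                }
            }
        }
    }
}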
{ "language": "en", "url": "https://stackoverflow.com/questions/53538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What are some strategies to write python code that works in CPython, Jython and IronPython Having tried to target two of these environments at the same time, I can safely say that if you have to use a database etc. you end up having to write unique code for that environment. Have you got a great way to handle this situation? A: I write code for CPython and IronPython but this tip should work for Jython as well. Basically, I write all the platform-specific code in separate modules/packages and then import the appropriate one based on the platform I'm running on. (see cdleary's comment above) This is especially important when it comes to the differences between the SQLite implementations and if you are implementing any GUI code. A: If you do find you need to write unique code for an environment, use Python's import: import mymodule_jython as mymodule import mymodule_cpython as mymodule Have this stuff in a simple module (''module_importer''?) and write your code like this: from module_importer import mymodule This way, all you need to do is alter module_importer.py per platform. A: @Daren Thomas: I agree, but you should use the platform module to determine which interpreter you're running. A: The #1 thing IMO: Focus on thread safety. CPython's GIL makes writing threadsafe code easy because only one thread can access the interpreter at a time. IronPython and Jython are a little less hand-holding though. A: I'm pretty sure you already know this but unfortunately Jython can't load C extension modules. A: There are two major issues at play here... Firstly, to my knowledge, only CPython has RAII - you have to close your own resources in Jython, IronPython, etc. And secondly, as has been mentioned, there is thread safety.
{ "language": "en", "url": "https://stackoverflow.com/questions/53543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Get the App.Config of another Exe I have an exe with an App.Config file. Now I want to create a wrapper dll around the exe in order to consume some of the functionalities. The question is how can I access the app.config property in the exe from the wrapper dll? Maybe I should be a little more specific in my question. I have the following app.config content with the exe: <?xml version="1.0" encoding="utf-8" ?> <configuration> <appSettings> <add key="myKey" value="myValue"/> </appSettings> </configuration> The question is how to get "myValue" out from the wrapper dll? Thanks for your solution. Actually my initial concept was to avoid XML file reading methods or LINQ or whatever. My preferred solution was to use the configuration manager libraries and the like. I'll appreciate any help that uses the classes that are normally associated with accessing app.config properties. A: After some testing, I found a way to do this. * *Add the App.Config file to the test project. Use the "Add as a link" option. *Use System.Configuration.ConfigurationManager.AppSettings["myKey"] to access the value. A: I think what you're looking for is: System.Configuration.ConfigurationManager.OpenExeConfiguration(string path) A: The ConfigurationManager.OpenMappedExeConfiguration Method will allow you to do this. Sample from the MSDN page: static void GetMappedExeConfigurationSections() { // Map to the target exe.config file. ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap(); fileMap.ExeConfigFilename = @"C:\test\ConfigurationManager.exe.config"; System.Configuration.Configuration config = ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None); // Loop to get the sections. Display basic information. Console.WriteLine("Name, Allow Definition"); int i = 0; foreach (ConfigurationSection section in config.Sections) { Console.WriteLine( section.SectionInformation.Name + "\t" + section.SectionInformation.AllowExeDefinition); i += 1; } Console.WriteLine("[Total number of sections: {0}]", i); // Display the config file path. Console.WriteLine("[File path: {0}]", config.FilePath); } EDIT: This should output the "myKey" value: ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap(); fileMap.ExeConfigFilename = @"C:\test\ConfigurationManager.exe.config"; System.Configuration.Configuration config = ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None); Console.WriteLine(config.AppSettings.Settings["myKey"].Value); A: I'd second Gishu's point that there's another way. Wouldn't it be better to abstract the common/"public" part of the EXE out into a DLL and create a wrapper EXE to run it? This is certainly the more usual pattern of development. Only the stuff that you wish to consume would go into the DLL, and the EXE would do all the stuff it currently does, minus what's gone into the DLL. A: It's an XML file, so you can use LINQ to XML or DOM-based approaches to parse out the relevant information. (That said, I'd question whether there isn't a better design for whatever it is you're trying to achieve.) A: Adding a link in the IDE would only help during development. I think lomaxx has the right idea: System.Configuration.ConfigurationManager.OpenExeConfiguration.
{ "language": "en", "url": "https://stackoverflow.com/questions/53545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: How would you go about switching a site from Prototype to jQuery I have written a site in Prototype but want to switch to jQuery. Any ideas on how best to make the switch? A: Personally, I like to take things in steps, so I would start by using both, like this: jQuery.noConflict(); // Put all your code in your document ready area jQuery(document).ready(function($){ // Do jQuery stuff using $ $("div").hide(); }); // Use Prototype with $(...), etc. $('someid').hide(); That way you don't have to convert all your old code at once, but can start using jQuery on new stuff, and migrate your old Prototype code whenever it's convenient. I don't know the size of your project, so I can't say whether or not this applies to you, but Spolsky had a great article about "The big rewrite" and why it's such a bad idea in Things you should never do, Part 1. It's well worth a read! For more on using jQuery with Prototype, see Using jQuery with other libraries in the jQuery docs.
{ "language": "en", "url": "https://stackoverflow.com/questions/53555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Enabling Hibernate second-level cache with JPA on JBoss 4.2 What are the steps required to enable Hibernate's second-level cache, when using the Java Persistence API (annotated entities)? How do I check that it's working? I'm using JBoss 4.2.2.GA. From the Hibernate documentation, it seems that I need to enable the cache and specify a cache provider in persistence.xml, like: <property name="hibernate.cache.use_second_level_cache" value="true" /> <property name="hibernate.cache.provider_class" value="org.hibernate.cache.HashtableCacheProvider" /> What else is required? Do I need to add @Cache annotations to my JPA entities? How can I tell if the cache is working? I have tried accessing cache statistics after running a Query, but Statistics.getSecondLevelCacheStatistics returns null, perhaps because I don't know what 'region' name to use. A: Follow-up: in the end, after adding annotations, I have it working with EhCache, i.e. <property name="hibernate.cache.provider_class" value="net.sf.ehcache.hibernate.EhCacheProvider" /> A: I believe you need to add the cache annotations to tell Hibernate how to use the second-level cache (read-only, read-write, etc). This was the case in my app (using Spring / traditional Hibernate and EhCache, so your mileage may vary). Once the caches were indicated, I started seeing messages from Hibernate that they were in use.
{ "language": "en", "url": "https://stackoverflow.com/questions/53562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How to get the changes on a branch in Git What is the best way to get a log of commits on a branch since the time it was branched from the current branch? My solution so far is: git log $(git merge-base HEAD branch)..branch The documentation for git-diff indicates that git diff A...B is equivalent to git diff $(git-merge-base A B) B. On the other hand, the documentation for git-rev-parse indicates that r1...r2 is defined as r1 r2 --not $(git merge-base --all r1 r2). Why are these different? Note that git diff HEAD...branch gives me the diffs I want, but the corresponding git log command gives me more than what I want. In pictures, suppose this:
         x---y---z---branch
        /
---a---b---c---d---e---HEAD
I would like to get a log containing commits x, y, z. * *git diff HEAD...branch gives these commits *however, git log HEAD...branch gives x, y, z, c, d, e. A: git cherry branch [newbranch] does exactly what you are asking, when you are in the master branch. I am also very fond of: git diff --name-status branch [newbranch] Which isn't exactly what you're asking, but is still very useful in the same context. A: To see the log of the current branch since branching off master: git log master... If you are currently on master, to see the log of a different branch since it branched off master: git log ...other-branch A: git log --cherry-mark --oneline from_branch...to_branch (3 dots) but sometimes it shows '+' instead of '=' A: What you want to see is the list of outgoing commits. You can do this using git log master..branchName or git log master..branchName --oneline Where I assume that "branchName" was created as a tracking branch of "master". Similarly, to see the incoming changes you can use: git log branchName..master A: This is similar to the answer I posted on: Preview a Git push Drop these functions into your Bash profile: * *gbout - git branch outgoing *gbin - git branch incoming You can use these like: * *If on master: gbin branch1 <-- this will show you what's in branch1 and not in master *If on master: gbout branch1 <-- this will show you what's in master that's not in branch1 This will work with any branch. function parse_git_branch { git branch --no-color 2> /dev/null | sed -e '/^[^*]/d' -e 's/* \(.*\)/\1/' } function gbin { echo branch \($1\) has these commits and \($(parse_git_branch)\) does not git log ..$1 --no-merges --format='%h | Author:%an | Date:%ad | %s' --date=local } function gbout { echo branch \($(parse_git_branch)\) has these commits and \($1\) does not git log $1.. --no-merges --format='%h | Author:%an | Date:%ad | %s' --date=local } A: When already in the branch in question use git diff master... Which combines several features: * *it's super short *shows the actual changes *allows for master having moved forward A: In the context of a revision list, A...B is how git-rev-parse defines it. git-log takes a revision list. git-diff does not take a list of revisions - it takes one or two revisions, and has defined the A...B syntax to mean how it's defined in the git-diff manpage. If git-diff did not explicitly define A...B, then that syntax would be invalid. Note that the git-rev-parse manpage describes A...B in the "Specifying Ranges" section, and everything in that section is only valid in situations where a revision range is valid (i.e. when a revision list is desired). To get a log containing just x, y, and z, try git log HEAD..branch (two dots, not three). This is identical to git log branch --not HEAD, and means all commits on branch that aren't on HEAD.
A: I found git diff <branch_with_changes> <branch_to_compare_to> more useful, since you get not only the commit messages but the whole diff. If you are already on the branch you want to see the changes of and (for instance) want to see what has changed compared to master, you can use: git diff HEAD master A: Throw a -p in there to see some FILE CHANGES git log -p master..branch Make some aliases: alias gbc="git branch --no-color | sed -e '/^[^\*]/d' -e 's/* \\(.*\\)/\1/'" alias gbl='git log -p master..\`gbc\`' See a branch's unique commits: gbl A: With Git 2.30 (Q1 2021), "git diff A...B" learned "git diff --merge-base A B", which is a longer shorthand to say the same thing. Thus you can do this using git diff --merge-base <branch> HEAD. This should be equivalent to git diff <branch>...HEAD but without the confusion of having to use range notation in a diff.
{ "language": "en", "url": "https://stackoverflow.com/questions/53569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "285" }
Q: NMBLookup OS X returning inconsistent results We're trying to get SMB volume listings in our OS X application, and have been using NMBLookup, as suggested by Apple, to get listings. However, more often than not, we're not able to get a full listing of available SMB volumes using the tool. We've got a good benchmark in that we can see the full listing the Apple Finder gets, and the majority of the time, our listing is not matching up, usually missing servers. We've tried a number of ways of executing the command, but haven't yet found anything that brings us back a complete listing. nmblookup -M -- - nmblookup '*' etc Does anyone know what we could be doing wrong, or know of a better way to query for SMB volumes available on local subnets? A: This works fairly well in our network. The point is to use smbclient -L on each of the entries returned by nmblookup: nmblookup -M -- - | grep -v querying | while read sw do echo $sw | awk -F' ' '{print $1}' | xargs smbclient -L done Edit: @paul - now I see what you mean - a Vista box has just joined our network and the Finder shows it but not nmblookup, but smbclient shows it in the "Server" section. smbclient has a "Server" section where it lists the machines found on the network. The command line I use is: smbclient -L 192.168.0.4 //the IP as returned by nmblookup of the master browser cristi:~ diciu$ smbclient -L 192.168.0.4 Domain=[DOMAIN] OS=[Unix] Server=[Samba 3.0.24-7.fc5] Sharename Type Comment --------- ---- ------- internal Disk some share [..] Anonymous login successful Domain=[DOMAIN] OS=[Unix] Server=[Samba 3.0.24-7.fc5] Server Comment --------- ------- MMM Vista box not showing up in nmblookup
{ "language": "en", "url": "https://stackoverflow.com/questions/53583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How do I compose existing Linq Expressions I want to compose the results of two Linq Expressions. They exist in the form Expression<Func<T, bool>> So the two that I want to compose are essentially delegates on a parameter (of type T) that both return a boolean. The result I would like composed would be the logical evaluation of the booleans. I would probably implement it as an extension method so my syntax would be something like: Expression<Func<User, bool>> expression1 = t => t.Name == "steve"; Expression<Func<User, bool>> expression2 = t => t.Age == 28; Expression<Func<User, bool>> composedExpression = expression1.And(expression2); And later on in my code I want to evaluate the composed expression var user = new User(); bool evaluated = composedExpression.Compile().Invoke(user); I have poked around with a few different ideas but I fear that it is more complex than I had hoped. How is this done? A: Here is an example: var user1 = new User {Name = "steve", Age = 28}; var user2 = new User {Name = "foobar", Age = 28}; Expression<Func<User, bool>> expression1 = t => t.Name == "steve"; Expression<Func<User, bool>> expression2 = t => t.Age == 28; var invokedExpression = Expression.Invoke(expression2, expression1.Parameters.Cast<Expression>()); var result = Expression.Lambda<Func<User, bool>>(Expression.And(expression1.Body, invokedExpression), expression1.Parameters); Console.WriteLine(result.Compile().Invoke(user1)); // true Console.WriteLine(result.Compile().Invoke(user2)); // false You can reuse this code via extension methods: class User { public string Name { get; set; } public int Age { get; set; } } public static class PredicateExtensions { public static Expression<Func<T, bool>> And<T>(this Expression<Func<T, bool>> expression1,Expression<Func<T, bool>> expression2) { InvocationExpression invokedExpression = Expression.Invoke(expression2, expression1.Parameters.Cast<Expression>()); return Expression.Lambda<Func<T, bool>>(Expression.And(expression1.Body, invokedExpression), expression1.Parameters); } } class Program { static void Main(string[] args) { var user1 = new User {Name = "steve", Age = 28}; var user2 = new User {Name = "foobar", Age = 28}; Expression<Func<User, bool>> expression1 = t => t.Name == "steve"; Expression<Func<User, bool>> expression2 = t => t.Age == 28; var result = expression1.And(expression2); Console.WriteLine(result.Compile().Invoke(user1)); Console.WriteLine(result.Compile().Invoke(user2)); } }
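One footnote to the solution above: Expression.And builds the non-short-circuiting & operator; if you want the usual && semantics, use Expression.AndAlso instead. Also, while the Expression.Invoke approach compiles and runs fine, some LINQ providers reject InvocationExpression nodes, in which case you can splice the second body in directly by rewriting its parameter. Here is a sketch of that variant, assuming the ExpressionVisitor class available since .NET 4:

using System;
using System.Linq.Expressions;

// Rewrites occurrences of one parameter to another, so two lambda
// bodies can share a single parameter.
class ReplaceParameterVisitor : ExpressionVisitor
{
    private readonly ParameterExpression _from;
    private readonly ParameterExpression _to;

    public ReplaceParameterVisitor(ParameterExpression from, ParameterExpression to)
    {
        _from = from;
        _to = to;
    }

    protected override Expression VisitParameter(ParameterExpression node)
    {
        return node == _from ? _to : base.VisitParameter(node);
    }
}

public static class PredicateExtensions
{
    public static Expression<Func<T, bool>> AndAlso<T>(
        this Expression<Func<T, bool>> left,
        Expression<Func<T, bool>> right)
    {
        // Rebind right's parameter to left's, then combine the two bodies.
        var rightBody = new ReplaceParameterVisitor(
            right.Parameters[0], left.Parameters[0]).Visit(right.Body);
        return Expression.Lambda<Func<T, bool>>(
            Expression.AndAlso(left.Body, rightBody), left.Parameters[0]);
    }
}

Usage is the same as before: expression1.AndAlso(expression2).Compile().Invoke(user).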
{ "language": "en", "url": "https://stackoverflow.com/questions/53597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: regex for parsing resource (.rc) files Ultimately I just wanted to extract strings from the .rc file so I could translate them, but anything that goes with .rc files works for me. A: I'd consider usage of gettext and .PO files, if your program fits the GNU license. 1) I'd suggest extracting from .rc files using a state machine algorithm. void ProcessLine(const char * str) { if (strstr(str, " DIALOG")) state = Scan; else if (strstr(str, " MENU")) state = Scan; else if (strstr(str, " STRINGTABLE")) state = Scan; else if (strstr(str, "END")) state = DontScan; if (state == Scan) { const char * cur = str; string hdr = ...// for example "# file.rc:453" string msgid; string msgstr = ""; // translations start out empty while (ExtractString(str, cur, msgid)) { if (msgid.empty()) continue; if (IsPredefined(msgid)) continue; if (msgid.find("IDB_") == 0 || msgid.find("IDC_") == 0) continue; WritePoString(hdr, msgid, msgstr); } } } 2) When extracting strings inside ExtractString() you should consider that the char " is represented as "", and there are also chars like \t \n \r. So a state machine is also a good option here. The following string: LTEXT "Mother has washed ""Sony"", then \taquarium\\shelves\r\nand probably floors",IDC_TEXT1,24,14,224,19 represents a label like this on a dialog: Mother has washed "Sony", then aquarium\shelves and probably floors 3) Then on program startup you should load the .po file via gettext, and translate each dialog's strings using a function like this: int TranslateDialog(CWnd& wnd) { int i = 0; CWnd *pChild; CString text; //Translate title wnd.GetWindowText(text); LPCTSTR translation = Translate(text); wnd.SetWindowText(translation); //Translate child windows pChild = wnd.GetWindow(GW_CHILD); while(pChild) { i++; pChild->GetWindowText(text); // including NULL translation = Translate(text); pChild->SetWindowText(translation); pChild = pChild->GetWindow(GW_HWNDNEXT); } return i; } A: Maybe this helps? (http://social.msdn.microsoft.com/forums/en-US/regexp/thread/5e87fce9-ec73-42eb-b2eb-c821e95e0d31/) They are using the following regex to find the stringtable in the rc source: (?<=\bSTRINGTABLE\s+BEGIN\s+).*?(?=\s+END\b) Edit - And you can read the key-value pairs with the following statement with the MultiLine option: @"\s+(.*?)\s+""(.*)"""; A: Although rc files seem an obvious starting point for translation, they're not. The job of developers is to make sure the app is translatable. It's not to manage translations. Starting translations from the exe, although somewhat counter-intuitive, is a way better idea. Read more about it here: http://www.apptranslator.com/misconceptions.html A: This sounds like a job for a SED script. By running this command line: sed.exe -n -f sed.txt test.rc The following SED script will extract all the quoted strings from the input test.rc file: # Run Script Using This Command Line # # sed.exe -n -f sed.txt test.rc # # Check for lines that contain strings /\".*\"/ { # print the string part of the line only s/\(.*\)\(\".*\"\)\(.*\)/\2/ p } A: In the case of rc files, it may be better to use an advanced parser like http://www.soft-gems.net/index.php/java/windows-resource-file-parser-and-converter A: ResxCrunch will be out sometime soon. It will edit multiple resource files in multiple languages in one single table.
{ "language": "en", "url": "https://stackoverflow.com/questions/53599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How can I export Shared Outlook Calendars? I am looking for a solution to export Shared Outlook Calendars (yes, I'm using Exchange Server and yes I can copy a calendar to my local folders, which would allow me to export. However, what if there are more than 50 calendars which I need to export? Is there a way of automating the process?). I am interested in using C# to accomplish this task. Has anyone ever tried to do that? A: Exchange Web Services (EWS) allows you to do this. A: outlook2ical has a VB macro to export an Outlook calendar to iCal format. You might check if it's worth the time to port to C#. A: hmmm.. Just wondering if Google Calendar Sync lets you do this. I haven't got any shared calendars to try out. Or have you tried that already? A: Have you tried looking at VSTO (Visual Studio Tools for Office)? This will provide APIs for the common Outlook items (Calendars etc). It's the C#/.NET version of various VBA admin APIs.
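To give the EWS suggestion some shape, here is a rough C# sketch using the EWS Managed API (Microsoft.Exchange.WebServices.Data). The URL, credentials and mailbox address are placeholders, and looping over your 50 shared calendars would just mean repeating the bind per mailbox - treat this as a starting point to verify against your own Exchange setup, not a finished solution.

using System;
using Microsoft.Exchange.WebServices.Data;

class SharedCalendarExport
{
    static void Main()
    {
        var service = new ExchangeService(ExchangeVersion.Exchange2007_SP1);
        service.Credentials = new WebCredentials("user", "password", "domain"); // placeholders
        service.Url = new Uri("https://mail.example.com/EWS/Exchange.asmx");    // placeholder

        // Bind to another mailbox's calendar; this requires suitable permissions.
        var sharedCalendar = new FolderId(WellKnownFolderName.Calendar,
                                          new Mailbox("colleague@example.com"));

        FindItemsResults<Appointment> appointments = service.FindAppointments(
            sharedCalendar, new CalendarView(DateTime.Today, DateTime.Today.AddDays(30)));

        foreach (Appointment appt in appointments)
        {
            // From here you could emit iCal or CSV lines instead of printing.
            Console.WriteLine("{0:g} - {1:g}  {2}", appt.Start, appt.End, appt.Subject);
        }
    }
}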
{ "language": "en", "url": "https://stackoverflow.com/questions/53602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: What does 'foo' really mean? I hope this qualifies as a programming question, as in any programming tutorial, you eventually come across 'foo' in the code examples. (yeah, right?) What does 'foo' really mean? If it is meant to mean nothing, when did it begin to be used so? A: I think it's meant to mean nothing. The wiki says: "Foo is commonly used with the metasyntactic variables bar and foobar." A: foo is used as a place-holder name, usually in example code to signify that the object being named, or the choice of name, is not part of the crux of the example. foo is often followed by bar, baz, and even bundy, if more than one such name is needed. Wikipedia calls these names Metasyntactic Variables. Python programmers supposedly use spam, eggs, ham, instead of foo, etc. There are good uses of foo in SA. I have also seen foo used when the programmer can't think of a meaningful name (as a substitute for tmp, say), but I consider that to be a misuse of foo. A: The sound of the French fou (like: amour fou) [crazy], written in English, would be foo, wouldn't it? Else furchtbar -> foobar -> foo, bar -> barfoo -> barfuß (barefoot). Just fou. A foot without teeth. I agree with all who mentioned it means: nothing interesting, just something, usually needed to complete a statement/expression. A: Among my colleagues, the meaning (or perhaps more accurately - the use) of the term "foo" has been to serve as a placeholder to represent an example for a name. Examples include, but are not limited to, yourVariableName, yourObjectName, or yourColumnName. Today, I avoid using "foo" and prefer using this type of named substitution for a couple of reasons. * *In my earlier days, I originally found the use of "foo" as a placement in any example to represent something as f'd-up to be confusing. I wanted a working example, not something that was foobar. *Your results may vary, but I always, 100%, every time, never-failed, got more follow-up questions about the meaning of the actual variable where "foo" was used. A: The term "Foo" has lots of meanings: *bar, and baz are often compounded together to make such words as foobar, barbaz, and foobaz. www.nationmaster.com/encyclopedia/Metasyntactic-variable *Major concepts in CML, usually mapped directly onto XMLElements (to be discussed later). wwmm.ch.cam.ac.uk/blogs/cml/ *Measurement of the total quantity of pasture in a paddock, expressed in kilograms of pasture dry matter per hectare (kg DM/ha) www.lifetimewool.com.au/glossary.aspx *Forward Observation Officer. An artillery officer who remained with infantry and tank battalions to set up observation posts in the front lines from which to observe enemy positions and radio the coordinates of targets to the guns further in the rear. members.fortunecity.com/lniven/definition.htm *is the first metasyntactic variable commonly used. It is sometimes combined with bar to make foobar. This suggests that foo may have originated with the World War II slang term fubar, as an acronym for fucked/fouled up beyond all recognition, although the Jargon File makes a pretty good case ... explanation-guide.info/meaning/Metasyntactic-variable.html *Foo is a metasyntactic variable used heavily in computer science to represent concepts abstractly and can be used to represent any part of a ... en.wikipedia.org/wiki/FOo *Foo is the world of dreams (no it's not) in Obert Skye's Leven Thumps series. Although it has many original inhabitants, most of its current dwellers are from Reality, and are known as nits. ...
en.wikipedia.org/wiki/Foo (place) *Also foo'. Representation of fool (foolish person), in a Mr. T accent en.wiktionary.org/wiki/foo Resource: Google A: In my opinion every programmer has his or her own "words" that are used every time you need an arbitrary word when programming. For some people it's the first words from a child's song, for others it's names, and for others it's something completely different. Now for the programmer community there are these "words" as well, and these words are 'foo' and 'bar'. The use of this is that if you have to communicate publicly about programming you don't have to say that you would use arbitrary words; you would simply write 'foo' or 'bar' and every programmer knows that these are just arbitrary words. A: See: RFC 3092: Etymology of "Foo", D. Eastlake 3rd et al. Quoting only the relevant definitions from that RFC for brevity: *Used very generally as a sample name for absolutely anything, esp. programs and files (esp. scratch files). *First on the standard list of metasyntactic variables used in syntax examples (bar, baz, qux, quux, corge, grault, garply, waldo, fred, plugh, xyzzy, thud). [JARGON] A: It's a metasyntactic variable. A: foo = File or Object. It is used in place of an object variable or file name.
{ "language": "en", "url": "https://stackoverflow.com/questions/53609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "235" }
Q: How to properly link a custom css file in SharePoint I've created a custom list, and made some changes to the way the CQWP renders it on a page by modifying ItemStyle.xsl. However, I'd like to use some custom css classes and therefore I'd like to link to my own custom .css file from the head tag of the pages containing this CQWP. So my question is, where do I put my .css file, and how do I link it properly to a page containing the CQWPs? Please bear in mind that I'm making a solution that should be deployed on multiple SharePoint installations. Thanks. A: The official Microsoft way is just to copy them into the relevant folders (as seen by downloading their template packs). However, you could also create your own site definition and add the items to the correct libraries and lists in the same way that the master pages are added. If you are going to deploy CSS and Master Pages through features, remember you will have to activate the publishing infrastructure on the site collection and the publishing feature on the site. To deploy a master page/page layout as a feature you should follow the steps at the site below; you can use the "fileurl" element to specify your CSS and place it into the correct folder (style library, for example): http://www.sharepointnutsandbolts.com/2007/04/deploying-master-pages-and-page-layouts.html A: Consider uploading them to "Style Library" in the root of the site collection. If you don't have a "Style Library" at the root, consider making one -- it's just a document library. Make sure the permissions are set correctly so everyone who needs to read it can. You can reference them using "/Style%20Library/my.css" but this won't work on site collections that don't live at the root of the domain.
{ "language": "en", "url": "https://stackoverflow.com/questions/53610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to control IIS 5.1 from the command line? I found some information about controlling IIS 5.1 from the command line via adsutil.vbs (http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/d3df4bc9-0954-459a-b5e6-7a8bc462960c.mspx?mfr=true). The utility is available at c:\InetPub\AdminScripts. The utility throws only errors like the following: ErrNumber: -2147463164 (0x80005004) Error Trying To GET the Schema of the property: IIS://localhost/Schema/ROOT Can you tell me how to check whether a virtual directory exists, and create it if it does not? A: Hope this helps you. http://www.codeproject.com/KB/system/commandlineweb.aspx A: I could not comment on your post, so I have to write a new message. I was able to use the script CreateWebDir.vbs from your link and use it to create/update my virtual directory with only one call: CreateWebDir.vbs DirName Path 80 If the virtual directory already exists, it changes the path, and that's exactly what I need. Thank you!
{ "language": "en", "url": "https://stackoverflow.com/questions/53618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Why is the 'Resource Pool' not supported on the current platform? What could be the possible causes for the following exception? System.PlatformNotSupportedException: 'ResourcePool' is not supported on the current platform. at System.EnterpriseServices.Platform.Assert(Boolean fSuccess, String function) at System.EnterpriseServices.Platform.Assert(Version platform, String function) at System.EnterpriseServices.ResourcePool..ctor(TransactionEndDelegate cb) at System.Data.SqlClient.ConnectionPool..ctor(DefaultPoolControl ctrl) at System.Data.SqlClient.PoolManager.FindOrCreatePool(DefaultPoolControl ctrl) at System.Data.SqlClient.SqlConnectionPoolManager.GetPooledConnection(SqlConnectionString options, Boolean& isInTransaction) at System.Data.SqlClient.SqlConnection.Open() The platform is Windows 2003 Server SP2. The same code has been tested on Windows XP SP2 without any problems. However, it would be interesting to know what reasons cause this exception regardless of the platform. A: I've poked at the sources using Reflector and I can't seem to find any call to Platform.Assert in the static constructor of ResourcePool. Is the Windows 2003 server 64bit? That may be the problem.
{ "language": "en", "url": "https://stackoverflow.com/questions/53621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to get whois information of a domain name in my program? I want to get whois information of a domain name from my c#/java programs. Is there a simple way to do this? A: I think, the easiest way is a socket connection to a whois server on port 43. Send the domainname followed by a newline and read the response. A: Thomas' answer will only work if you know which "whois" server to connect to. There are many different ways of finding that out, but none (AFAIK) that works uniformly for every domain registry. Some domain names support an SRV record for the _nicname._tcp service in the DNS, but there are issues with that because there's no accepted standard yet on how to prevent a subdomain from serving up SRV records which override those of the official registry (see https://datatracker.ietf.org/doc/html/draft-sanz-whois-srv-00). For many TLDs it's possible to send your query to <tld>.whois-servers.net. This actually works quite well, but beware that it won't work in all cases where there are officially delegated second level domains. For example in .uk there are several official sub-domains, but only some of them are run by the .uk registry and the others have their own WHOIS services and those aren't in the whois-servers.net database. Confusingly there are also "unofficial" registries, such as .uk.com, which are in the whois-servers.net database. p.s. the official End-Of-Line delimiter in WHOIS, as with most IETF protocols is CRLF, not just LF. A: I found some web services that offer this information. This one is free and worked great for me. http://www.webservicex.net/whois.asmx?op=GetWhoIS A: I found a perfect C# example here. It's 11 lines of code to copy and paste straight into your own application. BUT FIRST you should add some using statements to ensure that the dispose methods are properly called to prevent memory leaks: StringBuilder stringBuilderResult = new StringBuilder(); using(TcpClient tcpClinetWhois = new TcpClient(whoIsServer, 43)) { using(NetworkStream networkStreamWhois = tcpClinetWhois.GetStream()) { using(BufferedStream bufferedStreamWhois = new BufferedStream(networkStreamWhois)) { using(StreamWriter streamWriter = new StreamWriter(bufferedStreamWhois)) { streamWriter.WriteLine(url); streamWriter.Flush(); using (StreamReader streamReaderReceive = new StreamReader(bufferedStreamWhois)) { while (!streamReaderReceive.EndOfStream) stringBuilderResult.AppendLine(streamReaderReceive.ReadLine()); } } } } } A: I found a perfect C# example on dotnet-snippets.com (which doesn't exist anymore). It's 11 lines of code to copy and paste straight into your own application. /// <summary> /// Gets the whois information. /// </summary> /// <param name="whoisServer">The whois server.</param> /// <param name="url">The URL.</param> /// <returns></returns> private string GetWhoisInformation(string whoisServer, string url) { StringBuilder stringBuilderResult = new StringBuilder(); TcpClient tcpClinetWhois = new TcpClient(whoisServer, 43); NetworkStream networkStreamWhois = tcpClinetWhois.GetStream(); BufferedStream bufferedStreamWhois = new BufferedStream(networkStreamWhois); StreamWriter streamWriter = new StreamWriter(bufferedStreamWhois); streamWriter.WriteLine(url); streamWriter.Flush(); StreamReader streamReaderReceive = new StreamReader(bufferedStreamWhois); while (!streamReaderReceive.EndOfStream) stringBuilderResult.AppendLine(streamReaderReceive.ReadLine()); return stringBuilderResult.ToString(); } A: if you add leaveOpen: true to the StreamWriter and StreamReader constructors. 
You will not get "Cannot access a closed stream" exception var stringBuilderResult = new StringBuilder(); using (var tcpClinetWhois = new TcpClient(whoIsServer, 43)) using (var networkStreamWhois = tcpClinetWhois.GetStream()) using (var bufferedStreamWhois = new BufferedStream(networkStreamWhois)) using (var streamWriter = new StreamWriter(networkStreamWhois, leaveOpen: true)) { streamWriter.WriteLine(url); streamWriter.Flush(); using (var streamReaderReceive = new StreamReader(networkStreamWhois, leaveOpen: true)) { while (!streamReaderReceive.EndOfStream) { stringBuilderResult.AppendLine(streamReaderReceive.ReadLine()); } } } A: Here's the Java solution, which just opens up a shell and runs whois: import java.io.*; import java.util.*; public class ExecTest2 { public static void main(String[] args) throws IOException { Process result = Runtime.getRuntime().exec("whois stackoverflow.com"); BufferedReader output = new BufferedReader(new InputStreamReader(result.getInputStream())); StringBuffer outputSB = new StringBuffer(40000); String s = null; while ((s = output.readLine()) != null) { outputSB.append(s + "\n"); System.out.println(s); } String whoisStr = output.toString(); } }
{ "language": "en", "url": "https://stackoverflow.com/questions/53623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to deal with poorly informed customer choices Here's a scenario I'm sure you're all familiar with. * *You have a fairly "hands off" customer, who really doesn't want to get too involved in the decision making despite your best efforts. *An experienced development team spend hours discussing the pros and cons of a particular approach to a problem and come up with an elegant solution which avoids the pitfalls of the more obvious approaches. *The customer casually mentions after a quick glance that they want it changed. They have no understanding of all the usability / consistency issues you were trying to avoid in your very carefully thought out approach. *Despite explanations, the customer isn't interested, they just want it changed. *You sigh and do what they ask, knowing full well what will happen next... *3 weeks later, the customer says it isn't working well this way, could you change it? You suggest again your original solution, and they seize on it with enthusiasm. They invariably seem to have had a form of selective amnesia and blocked out their role in messing this up in the first place. I'm sure many of you have gone through this. The thing which always gets me is when we know the time and effort that reasonably bright and able people have put in to really understanding the problem and trying to come up with a good solution. The frustration comes in contrasting this with the knowledge that the customer's choice is made in 3 minutes in a casual glance (or worse, by their managers who often don't even know what the project is really about). The icing on the cake is that it's usually made very late in the day. I know that the agile methodologies are designed to solve exactly this kind of problem, but it requires a level of customer buy-in that certain types of customers (people spending other people's money usually) are just not willing to give. Anyone any clever insight into how you deal with this? EDIT: Oops - by the way, I'm not talking about any current or recent customer in this. It's purely hypothetical... A: Make your customer pay for the effort you are putting into designing and developing the solution to their problem. The more you work, the more you get. The customer will have to pay for their mistakes, and will eventually learn to appreciate your experience and insight in the programming field. A: Niyaz is correct; unfortunately getting customer buy-in is difficult until they have been burned like this once before. Additionally, describe to the customer the scenario above, state how much extra it would cost if you went three or four weeks down the line and had to rewrite it due to a change, and then let them use a prototype. It may take a few days to put one together so they can see both options (theirs [the wrong way], and yours [the right way]). Remember they are paying you not only for your ability to program but also your experience and knowledge of the issues which crop up. Whatever decision the customer makes, ensure that you get it documented, update your risk register for the project with the risks that the chosen implementation will incur, and speak to the project manager (if it's not you) about the mitigation plans for them. A: I agree with Niyaz. However at the time the customer suggests the change you should work out what the impact of the change will be, and how likely that impact is to happen. Then ask whomever is responsible (it's not always that customer) for the deliverable if they approve the change.
Making the impact clear (more cost, lower reliability, longer delivery time etc) is very important to helping the customer make a decision. It's very important to describe the impacts on the project or their business in a factual way, and assess how likely that impact is to occur. "Maybes" and "I feel" are very ignorable. After that, as long as the right people approve the change and as long as they pay for it... well, you did give them what they wanted :) A: One thing we have done with some success in the past in these kinds of situations is to hand the issues over to the client. "OK, you want to change it - this is what will happen if you do that. These are the issues involved. You have a think about how you'd like it to work and then get back to us". This approach doesn't tend to yield good solutions (unsurprisingly) but does tend to let the client see that it's not a "gut feeling", wild stab in the dark kind of question. And failing that, it usually makes them stop asking you to change it! A: Usually a scenario like this is caused by 2 things. The ones that are supposed to give you the requirement specifications either don't put their hearts into the project because they have no interest in it, or really have no idea what they want. Agile programming is one of the best ways to handle this, but there are other ways too. Personally I usually use a classic waterfall method, so spiral and agile methods are out of the question. But this doesn't mean that you can't use prototypes. As a matter of fact, a prototype would probably be the most helpful tool to use. Think about the iceberg effect. The secret is that People Who Aren't Programmers Do Not Understand This. http://img134.imageshack.us/my.php?image=icebergbelowwater.jpg "You know how an iceberg is 90% underwater? Well, most software is like that too -- there's a pretty user interface that takes about 10% of the work, and then 90% of the programming work is under the covers...." - Joel Spolsky Generating the prototype takes time and effort, but it is the most effective way to gather requirements. On my project team, the UI designer was the one that made the prototypes. If you give the users a prototype (at least a working interface of what the application is going to look and feel like) then you will get lots of criticism, which can lead to desires and requirements. It can look like comments on YouTube but it's a start. Second issue: The customer casually mentions after a quick glance that they want it changed. They have no understanding of all the usability / consistency issues you were trying to avoid in your very carefully thought out approach. Generate another prototype. The key here is results that the users would like to see, instead of advice that they have to listen to. But if all else fails you can always list the pros and cons of why you implemented the solution, even if the particular solution they like is not the one you insisted on. Make that part of the documentation as readable as possible. For example: Problem: The park is where all the good looking women jog to stay in shape. Johnny Bravo loves enjoying "mother nature's beauty", so he's lookin to blend in... you know... lookin all buff and do a little jogging while chasing tail. Alternative Solutions: 1) Put on black suede shoes to look as stylish as you can. 2) Put on a pair of Nikes. Essential shoes for running. Try the latest styles. Implemented Solution: Black suede shoes were top choice because...
well because hot mommies dig black suede shoes. A: Or else, if they won't pay for the effort, just avoid putting that much effort into the solution of the problem; just give them exactly what they've asked for and then think about it after the three weeks have passed. Somewhat frustrating, yes, but that's the way it'll always be with that kind of customer. At least you won't be losing money.
{ "language": "en", "url": "https://stackoverflow.com/questions/53627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: History of changes to a particular line of code in Subversion Is it possible to see the history of changes to a particular line of code in a Subversion repository? I'd like, for instance, to be able to see when a particular statement was added or when that statement was changed, even if its line number is not the same any more. A: I don't know a method for tracking statements through time in Subversion. It is simple however to see when any particular line in a file was last changed using svn blame. Check the SVNBook: svn blame reference: Synopsis svn blame TARGET[@REV]... Description Show author and revision information in-line for the specified files or URLs. Each line of text is annotated at the beginning with the author (username) and the revision number for the last change to that line. A: This can be done in two stages: * *svn blame /path/to/your/file > blame.tmp *grep "your_line_of_text" blame.tmp You can delete blame.tmp file afterwards if you don't need it. In principle, a simple script can be written in any scripting language that does roughly the same. A: In the TortoiseSVN client there is a very nice feature that lets you: * *blame a file, displaying the last change for each line (this is standard) *"blame previous revision", after clicking on a particular line in the above view (this is the good one) The second feature does what it says - it shows the annotated revision preceding the last modification to the line. By using this feature iteratively, you can trace back through the history of a particular line. A: In Eclipse you can know when each line of your code has been committed using the SVN annotate view, or right click on the file → Team → Show annotation.... A: The key here is how much history is required. As others have pointed out, the short answer is: svn blame (see svn help blame for details). If you're reaching far back in history or dealing with significant changes, you will likely need more than just this one command. I just had to do this myself, and found this (ye ole) thread here on SO. Here's what I did to solve it with just the CLI, specifically for my case where an API had changed (e.g. while porting someone's far outdated work (not on a branch, arrgh!) back into a feature branch based off of an up-to-date trunk). E.g. function names had changed enough to where it wasn't apparent which function needed to be called. Step One The following command allowed me to page through commits where things had changed in the file "fileName.h" and to see the corresponding revision number (note: you may have to alter the '10' for more or less context per your svn log text). svn log | grep -C 10 "fileName.h" | less This results in a list of revisions in which this file was modified. Step Two Then it was a simple matter of using blame (or as others have pointed out, annotate) to narrow down to the revisions of interest. cd trunk svn blame fileName.h@r35948 | less E.g. found the revision of interest was 35948. Step Three Having found the revision(s) of interest via blame, a diff can be produced to leverage the SVN tool. svn diff -r35948:PREV fileName.h Conclusion Having a visual diff made it much easier to identify the old API names with the newer/updated API names. A: I'd usually: * *Run svn blame FILE first. *Note the last revision of the particular line. *Do another query with the -r argument: svn blame FILE -r 1:REV *Trace manually from there. A: svn annotate The AKA SVN Blame from TortoiseSVN. 
A: svn blame shows you which checkin modified any line in a file the last time. This works on old revisions too. A: A start is the command svn blame (or annotate,praise). It will show you when a line of code was last modified and by whom it was modified. e.g.: 4564 wiemann # $Id$ 4564 wiemann # Author: David Goodger <[email protected]> 778 goodger # Copyright: This module has been placed in the public domain. 217 goodger A: If you use Emacs, the built-in package vc can do this. * *Navigate to the file in question. *Run the command vc-annotate with either M-x vc-annotate or C-xvg. *Each line will show up with its revision, like a normal svn blame. *Pressing a (vc-annotate-revision-previous-to-line) will navigate to the revision before the revision at the line you're on. A: The command you're looking for is svn blame.
{ "language": "en", "url": "https://stackoverflow.com/questions/53629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "80" }
Q: How would you go about using the ASP.NET AJAX Control Toolkit in a project that doesn't use ASP.NET on the back end Your backend could be PHP or Python, but you want to use the controls from the ASP.NET toolkit. Is there a successful way to do this? A: I don't think that it is possible. The ASP.NET AJAX Toolkit is based on ASP.NET techniques (what a surprise) and needs ASP.NET. It contains server-side controls, which are translated to HTML and JavaScript by the ASP.NET engine. It does not work without the engine. However you can check the code itself to see how it works and what it generates on the browser side, and get ideas and JavaScript code to build into your application or framework. Edit: I've just found an interesting project, which is in alpha stage, check this out. A: Why do you want to use the ASP.NET AJAX Toolkit with PHP / Python? Have you considered other non-ASP.NET-specific AJAX libraries like jQuery, Dojo, MooTools, YUI? ASP.NET AJAX and the ASP.NET AJAX Toolkit are ASP.NET-centric; you'll gain more flexibility using other AJAX libraries with PHP / Python. A: are you talking about the ASP.NET AJAX Control Toolkit? A: Have a look at this blogpost on Stephen Walther's blog: ASP.NET MVC Tip #36 – Create a Popup Calendar Helper In this post he shows how to use the 'script only' version of the AJAX Control Toolkit. This version of the AJAX Control Toolkit does not contain server-side controls or control extenders. It contains only the client-side files – JavaScript, CSS, images – required to use the client-side AJAX behaviors. Stephen Walther is a Senior Program Manager at Microsoft who is responsible for ASP.NET MVC content and community (his job title is ASP.NET MVC Ninja). A: I have found that much of the functionality in the AJAX Control Toolkit can be accomplished via JavaScript frameworks such as jQuery.
{ "language": "en", "url": "https://stackoverflow.com/questions/53639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Creating an object without knowing the class name at design time Using reflection, I need to investigate a user DLL and create an object of a class in it. What is the simple way of doing it? A: System.Reflection.Assembly is the class you will want to use. It contains many methods for iterating over the types contained within a user DLL. You can iterate through each class, perhaps see if it inherits from a particular interface etc. http://msdn.microsoft.com/en-us/library/system.reflection.assembly_members.aspx Investigate the Assembly.GetTypes() method for getting the list of types, or Assembly.GetExportedTypes() for the public ones only. A: Try Activator.CreateInstance. A: You can create an instance of a class from a Type object using Activator.CreateInstance; to get all types in a dll you can use Assembly.GetTypes A: Take a look at these links: http://www.java2s.com/Code/CSharp/Development-Class/Createanobjectusingreflection.htm http://msdn.microsoft.com/en-us/library/k3a58006.aspx You basically use reflection to load an assembly, then find a type you're interested in. Once you have the type, you can ask for its constructors or other methods / properties. Once you have the constructor, you can invoke it. Easy! A: As it has already been said, you need to poke around the System.Reflection namespace. If you know in advance the location/name of the DLL you want to load, you need to iterate through Assembly.GetTypes(). In pseudocode it would look something like this: Create an assembly object. Iterate through all the types contained in the assembly. Once you find the one you are looking for, invoke it (CreateInstance)… Use it wisely. ;) I have plenty of Reflection code if you want to take a look around, but the task is really simple and there are at least a dozen articles with samples out there in the wild. (Aka Google). Despite that, the MSDN is your friend for Reflection reference.
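Pulling the answers above together, a minimal C# sketch (the IPlugin interface and the DLL path are hypothetical stand-ins for whatever contract your user DLLs actually follow):

using System;
using System.Reflection;

// Hypothetical contract the classes in the user DLL are expected to implement.
public interface IPlugin
{
    void Run();
}

public static class PluginLoader
{
    public static void LoadAndRun(string dllPath)
    {
        // Load the assembly from disk; the path is only known at run time.
        Assembly assembly = Assembly.LoadFrom(dllPath);

        // Walk the public types and instantiate the ones we recognize.
        foreach (Type type in assembly.GetExportedTypes())
        {
            if (typeof(IPlugin).IsAssignableFrom(type) && !type.IsAbstract)
            {
                // Requires a public parameterless constructor.
                var plugin = (IPlugin)Activator.CreateInstance(type);
                plugin.Run();
            }
        }
    }
}

Usage would be something like PluginLoader.LoadAndRun(@"C:\plugins\user.dll") (again, a made-up path).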
{ "language": "en", "url": "https://stackoverflow.com/questions/53649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: DataTable to readable text string This might be a bit on the silly side of things but I need to send the contents of a DataTable (unknown columns, unknown contents) via a text e-mail. Basic idea is to loop over rows and columns and output all cell contents into a StringBuilder using .ToString(). Formatting is a big issue though. Any tips/ideas on how to make this look "readable" in a text format? I'm thinking of "padding" each cell with empty spaces, but I also need to split some cells into multiple lines, and this makes the StringBuilder approach a bit messy (because the second line of text from the first column comes after the first line of text in the last column, etc.) A: Would converting the DataTable to an HTML table and sending HTML mail be an alternative? That would make it much nicer on the receiving end if their client supports it. A: This will sound like a really horrible solution, but it just might work: Render the DataTable contents into a DataGrid/GridView (assuming ASP.NET) and then screen scrape that. I told you it would be messy. A: Get the max size for each column first. That way a varchar(255) column containing postal codes won't take up too much space. Maybe you can split the complete table instead of splitting single lines. Put the complete right part of the table in a second StringBuilder and put it beneath the first table. You can also give the user the option to create comma-delimited text so the receiver can import the table into a spreadsheet. A: Loop through the DataTable and send it as HTML email - generating an HTML table from the DataTable and sending it as the body of the email. A: I got this working by writing a custom formatter specifically for this task. The code is about 120-130 lines long, so I don't know if I should post it here as an answer (maybe a feature to attach .cs files to a topic would be a good idea!). Anyway, if anyone is interested in this, let me know and I'll provide the code. A: Does it need to be formatted nicely, or will an automated system pick up the mail message on the other end? If the latter, just use the DataTable's .WriteXml() method. A: You can do something like this (in VB): Dim Str As String = "" 'Create the file if it doesn't exist Dim FILE_NAME As String = "C:\temp\Custom.txt" If System.IO.File.Exists(FILE_NAME) = False Then System.IO.File.Create(FILE_NAME).Close() End If Dim objWriter As System.IO.StreamWriter Try objWriter = New System.IO.StreamWriter(FILE_NAME) Catch ex As System.IO.IOException MsgBox("Please close the file: (C:\temp\Custom.txt) before proceeding" & vbCrLf & ex.Message.ToString, MsgBoxStyle.Exclamation) objWriter = Nothing 'bail out here if the file could not be opened End Try 'Say my DataGridView is named "dgrid" Dim x, y As Integer For x = 0 To dgrid.Rows.Count - 1 Str = "" For y = 0 To dgrid.Columns.Count - 1 Str &= dgrid.Rows(x).Cells(y).Value & " " Next y objWriter.WriteLine(Str) Next x objWriter.Close() Resource. Or you can even generate a CSV file from your DataTable.
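For the plain-text route, here is a minimal C# sketch of the padding approach discussed above (an illustration only, not the custom formatter mentioned in the answers): measure the widest value per column, then pad every cell to that width.

using System;
using System.Data;
using System.Linq;
using System.Text;

public static class DataTableFormatter
{
    public static string ToPaddedText(DataTable table)
    {
        // The widest value (or the header) decides each column's width.
        int[] widths = table.Columns.Cast<DataColumn>()
            .Select(col => Math.Max(col.ColumnName.Length,
                table.Rows.Cast<DataRow>()
                     .Select(row => row[col].ToString().Length)
                     .DefaultIfEmpty(0)
                     .Max()))
            .ToArray();

        var sb = new StringBuilder();

        // Header row.
        for (int c = 0; c < table.Columns.Count; c++)
            sb.Append(table.Columns[c].ColumnName.PadRight(widths[c] + 2));
        sb.AppendLine();

        // Data rows.
        foreach (DataRow row in table.Rows)
        {
            for (int c = 0; c < table.Columns.Count; c++)
                sb.Append(row[c].ToString().PadRight(widths[c] + 2));
            sb.AppendLine();
        }
        return sb.ToString();
    }
}

Note that this deliberately skips the hard part the question mentions: wrapping long cells onto extra lines would need a second pass that splits each row into several output lines.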
{ "language": "en", "url": "https://stackoverflow.com/questions/53652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: How to effectively work with multiple files in Vim I've started using Vim to develop Perl scripts and am starting to find it very powerful. One thing I like is to be able to open multiple files at once with: vi main.pl maintenance.pl and then hop between them with: :n :prev and see which files are open with: :args And to add a file, I can say: :n test.pl which I expect would then be added to my list of files, but instead it wipes out my current file list and when I type :args I only have test.pl open. So how can I add and remove files in my args list? A: My way to effectively work with multiple files is to use tmux. It allows you to split windows vertically and horizontally (the original answer showed screenshots of this). I have it working this way on both my mac and linux machines and I find it better than the native window pane switching mechanism that's provided (on Macs). I find the switching easier and only with tmux have I been able to get the 'new page at the same current directory' working on my mac (despite the fact that there seems to be options to open new panes in the same directory), which is a surprisingly critical piece. An instant new pane at the current location is amazingly useful. A method that does new panes with the same key combos for both OS's is critical for me and a bonus for all for future personal compatibility. Aside from multiple tmux panes, I've also tried using multiple tabs and multiple new windows (again, screenshots omitted), and ultimately I've found multiple tmux panes to be the most useful for me. I am very 'visual' and like to keep my various contexts right in front of me, connected together as panes. tmux also supports horizontal and vertical panes, which the older screen didn't (though mac's iterm2 seems to support it, but again, the current directory setting didn't work for me). The screenshots showed tmux 1.8. A: In my opinion, and that of many other vim users, the best option is to: * *Open the file using :e file_name.extension and then just Ctrl + 6 to change to the last buffer. Or, you can always press :ls to list the buffers and then change the buffer using :b followed by the buffer number. * *Make a vertical or horizontal split using :vsp for a vertical split and :sp for a horizontal split, and then <C-W><C-H/J/K/L> to change the working split. You can of course edit any file in any number of splits. A: Listing To see a list of current buffers, I use: :ls Opening To open a new file, I use :e ../myFile.pl with enhanced tab completion (put set wildmenu in your .vimrc). Note: you can also use :find which will search a set of paths for you, but you need to customize those paths first. Switching To switch between all open files, I use :b myfile with enhanced tab completion (still set wildmenu). Note: :b# chooses the last visited file, so you can use it to switch quickly between two files. Using windows Ctrl-W s and Ctrl-W v to split the current window horizontally and vertically. You can also use :split and :vertical split (:sp and :vs) Ctrl-W w to switch between open windows, and Ctrl-W h (or j or k or l) to navigate through open windows. Ctrl-W c to close the current window, and Ctrl-W o to close all windows except the current one. Starting vim with a -o or -O flag opens each file in its own split. With all these I don't need tabs in Vim, and my fingers find my buffers, not my eyes. Note: if you want all files to go to the same instance of Vim, start Vim with the --remote-silent option. A: I use buffer commands - :bn (next buffer), :bp (previous buffer) :buffers (list open buffers) :b<n> (open buffer n) :bd (delete buffer).
:e <filename> will just open into a new buffer. A: I think you may be using the wrong command for looking at the list of files that you have open. Try doing an :ls to see the list of files that you have open and you'll see: 1 %a "./checkin.pl" line 1 2 # "./grabakamailogs.pl" line 1 3 "./grabwmlogs.pl" line 0 etc. You can then bounce through the files by referring to them by the numbers listed, e.g. :3b or you can split your screen by entering the number but using sb instead of just b. As an aside % refers to the file currently visible and # refers to the alternate file. You can easily toggle between these two files by pressing Ctrl Shift 6 Edit: like :ls you can use :reg to see the current contents of your registers including the 0-9 registers that contain what you've deleted. This is especially useful if you want to reuse some text that you've previously deleted. A: Vim (but not the original Vi!) has tabs which I find (in many contexts) superior to buffers. You can say :tabe [filename] to open a file in a new tab. Cycling between tabs is done by clicking on the tab or by the key combinations [n]gt and gT. Graphical Vim even has graphical tabs. A: I use multiple buffers that are set hidden in my ~/.vimrc file. The mini-buffer explorer script is nice too to get a nice compact listing of your buffers. Then :b1 or :b2... to go to the appropriate buffer or use the mini-buffer explorer and tab through the buffers. A: I use the command line and git a lot, so I have this alias in my bashrc: alias gvim="gvim --servername \$(git rev-parse --show-toplevel || echo 'default') --remote-tab" This will open each new file in a new tab on an existing window and will create one window for each git repository. So if you open two files from repo A, and 3 files from repo B, you will end up with two windows, one for repo A with two tabs and one for repo B with three tabs. If the file you are opening is not contained in a git repo it will go to a default window. To jump between tabs I use these mappings: nmap <C-p> :tabprevious<CR> nmap <C-n> :tabnext<CR> To open multiple files at once you should combine this with one of the other solutions. A: Things like :e and :badd will only accept ONE argument, therefore the following will fail :e foo.txt bar.txt :e /foo/bar/*.txt :badd /foo/bar/* If you want to add multiple files from within vim, use arga[dd] :arga foo.txt bar.txt :arga /foo/bar/*.txt :argadd /foo/bar/* A: Many answers here! What I use without reinventing the wheel - the most famous plugins (that are not going to die any time soon and are used by many people) to be ultra fast and geeky. * *ctrlpvim/ctrlp.vim - to find file by name fuzzy search by its location or just its name *jlanzarotta/bufexplorer - to browse opened buffers (when you do not remember how many files you opened and modified recently and you do not remember where they are, probably because you searched for them with Ag) *rking/ag.vim to search the files with respect to gitignore *scrooloose/nerdtree to see the directory structure, lookaround, add/delete/modify files EDIT: Recently I have been using dyng/ctrlsf.vim to search with contextual view (like Sublime search) and I switched the engine from ag to ripgrep. The performance is outstanding. EDIT2: Along with CtrlSF you can use mg979/vim-visual-multi, make changes to multiple files at once and then at the end save them in one go. A: Some answers in this thread suggest using tabs and others suggest using buffer to accomplish the same thing. Tabs and Buffers are different. 
I strongly suggest you read this article "Vim Tab madness - Buffers vs Tabs". Here's a nice summary I pulled from the article: Summary: * *A buffer is the in-memory text of a file. *A window is a viewport on a buffer. *A tab page is a collection of windows. A: :ls for a list of open buffers * *:bp previous buffer *:bn next buffer *:bn (n a number) move to the n'th buffer *:b <filename-part> with tab-key providing auto-completion (awesome !!) In some versions of vim, bn and bp are actually bnext and bprevious respectively. Tab auto-complete is helpful in this case. Or when you are in normal mode, use ^ to switch to the last file you were working on. Plus, you can save sessions of vim: :mksession! ~/today.ses The above command saves the current open file buffers and settings to ~/today.ses. You can load that session by using vim -S ~/today.ses No hassle remembering where you left off yesterday. ;) A: To change all buffers to tab view: :tab sball will open all the buffers in tab view. Then we can use any tab-related commands gt or :tabn " go to next tab gT or :tabp or :tabN " go to previous tab details at :help tab-page-commands. We can instruct vim to open multiple files in tab view with vim -p file1 file2. alias vim='vim -p' will be useful. The same thing can also be achieved by having the following autocommand in ~/.vimrc au VimEnter * if !&diff | tab all | tabfirst | endif Anyway, to answer the question: To add to the arg list: arga file, To delete from the arg list: argd pattern More at :help arglist A: Try the following maps for convenient editing of multiple files " split windows nmap <leader>sh :leftabove vnew<CR> nmap <leader>sl :rightbelow vnew<CR> nmap <leader>sk :leftabove new<CR> nmap <leader>sj :rightbelow new<CR> " moving around nmap <C-j> <C-w>j nmap <C-k> <C-w>k nmap <C-l> <C-w>l nmap <C-h> <C-w>h A: I made a very simple video showing the workflow that I use. Basically I use the Ctrl-P Vim plugin, and I mapped the buffer navigation to the Enter key. In this way I can press Enter in normal mode, look at the list of open files (that shows up in a small new window at the bottom of the screen), select the file I want to edit and press Enter again. To quickly search through multiple open files, just type part of the file name, select the file and press Enter. I don't have many files open in the video, but it becomes incredibly helpful when you start having a lot of them. Since the plugin sorts the buffers using an MRU ordering, you can just press Enter twice and jump to the most recent file you were editing. After the plugin is installed, the only configuration you need is: nmap <CR> :CtrlPBuffer<CR> Of course you can map it to a different key, but I find the mapping to enter to be very handy. A: I would suggest using the plugin NERDtree Here is the github link with instructions. Nerdtree I use vim-plug as a plugin manager, but you can use Vundle as well. vim-plug Vundle A: When using multiple files in vim, I use these commands mostly (with ~350 files open): * *:b <partial filename><tab> (jump to a buffer) *:bw (buffer wipe, remove a buffer) *:e <file path> (edit, open a new buffer) *pltags - enable jumping to subroutine/method definitions A: You may want to use Vim global marks. This way you can quickly bounce between files, and even to the marked location in the file. Also, the key commands are short: 'C takes me to the code I'm working with, 'T takes me to the unit test I'm working with.
When you change places, resetting the marks is quick too: mC marks the new code spot, mT marks the new test spot. A: Why not use tabs (introduced in Vim 7)? You can switch between tabs with :tabn and :tabp, With :tabe <filepath> you can add a new tab; and with a regular :q or :wq you close a tab. If you map :tabn and :tabp to your F7/F8 keys you can easily switch between files. If there are not that many files or you don't have Vim 7 you can also split your screen in multiple files: :sp <filepath>. Then you can switch between splitscreens with Ctrl+W and then an arrow key in the direction you want to move (or instead of arrow keys, w for next and W for previous splitscreen) A: To add to the args list: :argadd To delete from the args list: :argdelete In your example, you could use :argedit test.pl to add test.pl to the args list and edit the file in one step. :help args gives much more detail and advanced usage A: I use the same .vimrc file for gVim and the command line Vim. I tend to use tabs in gVim and buffers in the command line Vim, so I have my .vimrc set up to make working with both of them easier: " Movement between tabs OR buffers nnoremap L :call MyNext()<CR> nnoremap H :call MyPrev()<CR> " MyNext() and MyPrev(): Movement between tabs OR buffers function! MyNext() if exists( '*tabpagenr' ) && tabpagenr('$') != 1 " Tab support && tabs open normal gt else " No tab support, or no tabs open execute ":bnext" endif endfunction function! MyPrev() if exists( '*tabpagenr' ) && tabpagenr('$') != '1' " Tab support && tabs open normal gT else " No tab support, or no tabs open execute ":bprev" endif endfunction This clobbers the existing mappings for H and L, but it makes switching between files extremely fast and easy. Just hit H for next and L for previous; whether you're using tabs or buffers, you'll get the intended results. A: If using only vim built-in commands, the best one that I ever saw to switch among multiple buffers is this: nnoremap <Leader>f :set nomore<Bar>:ls<Bar>:set more<CR>:b<Space> It perfectly combines both :ls and :b commands -- listing all opened buffers and waiting for you to input the command to switch buffer. Given above mapping in vimrc, once you type <Leader>f, * *All opened buffers are displayed *You can: * *Type 23 to go to buffer 23, *Type # to go to the alternative/MRU buffer, *Type partial name of file, then type <Tab>, or <C-i> to autocomplete, *Or just <CR> or <Esc> to stay on current buffer A snapshot of output for the above key mapping is: :set nomore|:ls|:set more 1 h "script.py" line 1 2 #h + "file1.txt" line 6 -- '#' for alternative buffer 3 %a "README.md" line 17 -- '%' for current buffer 4 "file3.txt" line 0 -- line 0 for hasn't switched to 5 + "/etc/passwd" line 42 -- '+' for modified :b '<Cursor> here' In the above snapshot: * *Second column: %a for current, h for hidden, # for previous, empty for hasn't been switched to. *Third column: + for modified. Also, I strongly suggest set hidden. See :help 'hidden'. A: If you are going to use multiple buffers, I think the most important thing is to set hidden so that it will let you switch buffers even if you have unsaved changes in the one you are leaving. A: I use the following, this gives you lots of features that you'd expect to have in other editors such as Sublime Text / Textmate * *Use buffers not 'tab pages'. Buffers are the same concept as tabs in almost all other editors. 
*If you want the same look of having tabs you can use the vim-airline plugin with the following setting in your .vimrc: let g:airline#extensions#tabline#enabled = 1. This automatically displays all the buffers as tab headers when you have no tab pages opened *Use Tim Pope's vim-unimpaired which gives [b and ]b for moving to previous/next buffers respectively (plus a whole host of other goodies) *Have set wildmenu in your .vimrc then when you type :b <file part> + Tab for a buffer you will get a list of possible buffers that you can use left/right arrows to scroll through *Use Tim Pope's vim-obsession plugin to store sessions that play nicely with airline (I had lots of pain with sessions and plugins) *Use Tim Pope's vim-vinegar plugin. This works with the native :Explore but makes it much easier to work with. You just type - to open the explorer, which is the same key as to go up a directory in the explorer. Makes navigating faster (however with fzf I rarely use this) *fzf (which can be installed as a vim plugin) is also a really powerful fuzzy finder that you can use for searching for files (and buffers too). fzf also plays very nicely with fd (a faster version of find) *Use Ripgrep with vim-ripgrep to search through your code base and then you can use :cdo on the results to do search and replace A: When I started using VIM I didn't realize that tabs were supposed to be used as different window layouts, and buffer serves the role for multiple file editing / switching between each other. Actually in the beginning tabs are not even there before v7.0 and I just opened one VIM inside a terminal tab (I was using gnome-terminal at the moment), and switch between tabs using alt+numbers, since I thought using commands like :buffers, :bn and :bp were too much for me. When VIM 7.0 was released I find it's easier to manager a lot of files and switched to it, but recently I just realized that buffers should always be the way to go, unless one thing: you need to configure it to make it works right. So I tried vim-airline and enabled the visual on-top tab-like buffer bar, but graphic was having problem with my iTerm2, so I tried a couple of others and it seems that MBE works the best for me. I also set shift+h/l as shortcuts, since the original ones (moving to the head/tail of the current page) is not very useful to me. map <S-h> :bprev<Return> map <S-l> :bnext<Return> It seems to be even easier than gt and gT, and :e is easier than :tabnew too. I find :bd is not as convenient as :q though (MBE is having some problem with it) but I can live with all files in buffer I think. A: Most of the answers in this thread are using plain vim commands which is of course fine but I thought I would provide an extensive answer using a combination of plugins and functions that I find particularly useful (at least some of these tips came from Gary Bernhardt's file navigation tips): * *To toggle between the last two file just press <leader> twice. I recommend assigning <leader> to the spacebar: nnoremap <leader><leader> <c-^> *For quickly moving around a project the answer is a fuzzy matching solution such as CtrlP. I bind it to <leader>a for quick access. *In the case I want to see a visual representation of the currently open buffers I use the BufExplorer plugin. Simple but effective. *If I want to browse around the file system I would use the command line or an external utility (Quicklsilver, Afred etc.) but to look at the current project structure NERD Tree is a classic. 
Do not use this though in the place of 2 as your main file finding method. It will really slow you down. I use the binding <leader>ff. These should be enough for finding and opening files. From there of course use horizontal and vertical splits. Concerning splits I find these functions particularly useful: * *Open new splits in smaller areas when there is not enough room and expand them on navigation. Refer here for comments on what these do exactly: set winwidth=84 set winheight=5 set winminheight=5 set winheight=999 nnoremap <C-w>v :111vs<CR> nnoremap <C-w>s :rightbelow split<CR> set splitright *Move from split to split easily: nnoremap <C-J> <C-W><C-J> nnoremap <C-K> <C-W><C-K> nnoremap <C-L> <C-W><C-L> nnoremap <C-H> <C-W><C-H> A: If you're on OS X and want to be able to click on your tabs, use MouseTerm and SIMBL (taken from here). Also, check out this related discussion. A: You can be an absolute madman and alias vim to vim -p by adding in your .bashrc: alias vim="vim -p" This will result in opening multiple files from the shell in tabs, without having to invoke :tab ball from within vim afterwards. A: * *To open 2 or more files with vim, type: vim -p file1 file2 *After that command, to go through those files you can use CTRL+Shift+↑ or ↓; it will change your files in vim. *If you want to add one more file to vim and work on it, use: :tabnew file3 *You can also use the following, which will not create a new tab and will open the file by splitting your screen: :new file3 *If you want to use a plugin that will help you work with directories and files, I suggest NERDTree. *To download it you need to have vim-plug, so to download NERDTree along with other plugins, put these commands in your ~/.vimrc. let data_dir = has('nvim') ? stdpath('data') . '/site' : '~/.vim' if empty(glob(data_dir . '/autoload/plug.vim')) silent execute '!curl -fLo '.data_dir.'/autoload/plug.vim --create-dirs https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim' autocmd VimEnter * PlugInstall --sync | source $MYVIMRC endif call plug#begin('~/.vim/plugged') Plug 'scrooloose/nerdtree' call plug#end() *Then save .vimrc via the command :wq, go back into it and type: :PlugInstall After that the plugins will be installed and you can use NERDTree along with your other plugins.
{ "language": "en", "url": "https://stackoverflow.com/questions/53664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1206" }
Q: Rhino Mocks - How can I test that at least one of a group of methods is called? Say I have an interface IFoo which I am mocking. There are 3 methods on this interface. I need to test that the system under test calls at least one of the three methods. I don't care how many times, or with what arguments it does call, but the case where it ignores all the methods and does not touch the IFoo mock is the failure case. I've been looking through the Expect.Call documentation but can't see an easy way to do it. Any ideas? A: You can give Rhino Mocks a lambda to run when a function gets called. This lambda can then increment a counter. Assert that the counter is at least 1 and you're done. Commented by Don Kirkby: I believe Mendelt is referring to the Do method. A: Not sure this answers your question but I've found that if I need to do anything like that with Rhino (or any similar framework/library), anything that I didn't know how to do upfront, then I'm better off just creating a manual mock. Creating a class that implements the interface and sets a public boolean field to true if any of the methods is called will be trivially easy, and you can give the class a descriptive name, which means that (most importantly) the next person viewing the code will immediately understand it. A: If I understood you correctly you want to check that the interface is called at least once on any of three specified methods. Looking through the quick reference I don't think you can do that in Rhino Mocks. Intuitively I think you're trying to write a test that is brittle, which is a bad thing. This implies incomplete specification of the class under test. I urge you to think the design through so that the class under test and the test can have a known behavior. However, to be useful with an example, you could always do it like this (but don't). [TestFixture] public class MyTest { // The mocked interface public class MockedInterface : IMyInterface { public int Counter = 0; public void Method1() { Counter++; } public void Method2() { Counter++; } public void Method3() { Counter++; } } // The actual test. I assume you have the ClassUnderTest take the interface through its constructor, and MethodToTest calls at least one of the three methods on the interface. [Test] public void TestCallingAnyOfTheThreeMethods() { MockedInterface mockery = new MockedInterface(); ClassUnderTest classToTest = new ClassUnderTest(mockery); classToTest.MethodToTest(); Assert.That(mockery.Counter, Is.GreaterThan(0)); } } (Somebody check my code - I've written this from my head and haven't written C# for about a year now.) I'm interested to know why you're doing this though.
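Building on the Do-method hint in the first answer, a rough sketch using Rhino Mocks' AAA syntax (assumes Rhino Mocks 3.5+; the IFoo method names and ClassUnderTest are made up, since the question never names them):

using NUnit.Framework;
using Rhino.Mocks;

public interface IFoo
{
    // Hypothetical names; the question only says there are three methods.
    void MethodA();
    void MethodB();
    void MethodC();
}

[TestFixture]
public class AtLeastOneCallTest
{
    [Test]
    public void CallsAtLeastOneFooMethod()
    {
        int calls = 0;
        IFoo foo = MockRepository.GenerateStub<IFoo>();

        // Whichever method fires, bump the same counter.
        foo.Stub(f => f.MethodA()).WhenCalled(mi => calls++);
        foo.Stub(f => f.MethodB()).WhenCalled(mi => calls++);
        foo.Stub(f => f.MethodC()).WhenCalled(mi => calls++);

        new ClassUnderTest(foo).MethodToTest();

        Assert.That(calls, Is.GreaterThanOrEqualTo(1), "no IFoo method was called");
    }
}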
{ "language": "en", "url": "https://stackoverflow.com/questions/53666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How to efficiently SQL select newest entries from a MySQL database? Possible Duplicate: SQL Query to get latest price I have a database containing stock price history. I want to select the most recent prices for every stock that is listed. I know PostgreSQL has a DISTINCT ON statement that would suit ideally here. Table columns are name, closingPrice and date; name and date together form a unique index. The easiest (and very ineffective) way is SELECT * FROM stockPrices s WHERE s.date = (SELECT MAX(date) FROM stockPrices si WHERE si.name = s.name); A much better approach I found is SELECT * FROM stockPrices s JOIN ( SELECT name, MAX(date) AS date FROM stockPrices si GROUP BY name ) lastEntry ON s.name = lastEntry.name AND s.date = lastEntry.date; What would be an efficient way to do this? What indexes should I create? A: I think that your second approach is very efficient. What's its problem? You have to add indexes to name and date.
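To make the answer concrete: since name and date already form a unique index, MySQL may be able to serve both the GROUP BY subquery and the join from that index already; if your unique constraint is shaped differently, adding the composite index is a one-liner (the index name here is made up):

CREATE INDEX idx_name_date ON stockPrices (name, date);

-- Then check that both steps actually use it:
EXPLAIN SELECT s.* FROM stockPrices s JOIN ( SELECT name, MAX(date) AS date FROM stockPrices GROUP BY name ) lastEntry ON s.name = lastEntry.name AND s.date = lastEntry.date;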
{ "language": "en", "url": "https://stackoverflow.com/questions/53670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to resolve ORA-011033: ORACLE initialization or shutdown in progress When trying to connect to an ORACLE user via TOAD (Quest Software) or any other means (Oracle Enterprise Manager) I get this error: ORA-011033: ORACLE initialization or shutdown in progress A: This error can also occur in the normal situation when a database is starting or stopping. Normally on startup you can wait until the startup completes, then connect as usual. If the error persists, the service (on a Windows box) may be started without the database being started. This may be due to startup issues, or because the service is not configured to automatically start the database. In this case you will have to connect as sysdba and physically start the database using the "startup" command. A: I used a combination of the answers from rohancragg, Mukul Goel, and NullSoulException from above. However I had an additional error: ORA-01157: cannot identify/lock data file string - see DBWR trace file To which I found the answer here: http://nimishgarg.blogspot.com/2014/01/ora-01157-cannot-identifylock-data-file.html In case the above post gets deleted I am including the commands here as well. C:\>sqlplus sys/sys as sysdba SQL*Plus: Release 11.2.0.3.0 Production on Tue Apr 30 19:07:16 2013 Copyright (c) 1982, 2011, Oracle. All rights reserved. Connected to an idle instance. SQL> startup ORACLE instance started. Total System Global Area 778387456 bytes Fixed Size 1384856 bytes Variable Size 520097384 bytes Database Buffers 251658240 bytes Redo Buffers 5246976 bytes Database mounted. ORA-01157: cannot identify/lock data file 11 – see DBWR trace file ORA-01110: data file 16: 'E:\oracle\app\nimish.garg\oradata\orcl\test_ts.dbf' SQL> select NAME from v$datafile where file#=16; NAME -------------------------------------------------------------------------------- E:\ORACLE\APP\NIMISH.GARG\ORADATA\ORCL\TEST_TS.DBF SQL> alter database datafile 16 OFFLINE DROP; Database altered. SQL> alter database open; Database altered. Thanks everyone, you saved my day! Fissh A: Here is my solution to this issue: SQL> Startup mount ORA-01081: cannot start already-running ORACLE - shut it down first SQL> shutdown abort ORACLE instance shut down. SQL> SQL> startup mount ORACLE instance started. Total System Global Area 1904054272 bytes Fixed Size 2404024 bytes Variable Size 570425672 bytes Database Buffers 1325400064 bytes Redo Buffers 5824512 bytes Database mounted. SQL> Show parameter control_files NAME TYPE VALUE ------------------------------------ ----------- ------------------------------ control_files string C:\APP\USER\ORADATA\ORACLEDB\C ONTROL01.CTL, C:\APP\USER\FAST _RECOVERY_AREA\ORACLEDB\CONTRO L02.CTL SQL> select a.member,a.group#,b.status from v$logfile a ,v$log b where a.group#= b.group# and b.status='CURRENT' 2 SQL> select a.member,a.group#,b.status from v$logfile a ,v$log b where a.group#= b.group# and b.status='CURRENT'; MEMBER -------------------------------------------------------------------------------- GROUP# STATUS ---------- ---------------- C:\APP\USER\ORADATA\ORACLEDB\REDO03.LOG 3 CURRENT SQL> shutdown abort ORACLE instance shut down. SQL> startup mount ORACLE instance started. Total System Global Area 1904054272 bytes Fixed Size 2404024 bytes Variable Size 570425672 bytes Database Buffers 1325400064 bytes Redo Buffers 5824512 bytes Database mounted.
SQL> recover database using backup controlfile until cancel; ORA-00279: change 4234808 generated at 01/21/2014 18:31:05 needed for thread 1 ORA-00289: suggestion : C:\APP\USER\FAST_RECOVERY_AREA\ORACLEDB\ARCHIVELOG\2014_01_22\O1_MF_1_108_%U_.ARC ORA-00280: change 4234808 for thread 1 is in sequence #108 Specify log: {<RET>=suggested | filename | AUTO | CANCEL} C:\APP\USER\ORADATA\ORACLEDB\REDO03.LOG Log applied. Media recovery complete. SQL> alter database open resetlogs; Database altered. And it worked (screenshot omitted). A: I had a similar problem when I had installed the 12c database as per Oracle's tutorial. The tutorial instructs the reader to create a PLUGGABLE DATABASE (pdb). The problem: sqlplus hr/hr@pdborcl would result in ORACLE initialization or shutdown in progress. The solution: * *Login as SYSDBA to the database: sqlplus SYS/Oracle_1@pdborcl AS SYSDBA *Alter the database: alter pluggable database pdborcl open read write; *Login again: sqlplus hr/hr@pdborcl That worked for me. Some documentation here A: The issue can also be due to lack of hard drive space. The installation will succeed but on startup, oracle won't be able to create the required files and will fail with the same above error message. A: I hope this will help somebody; I solved the problem like this. There was a problem because the database was not open. The command startup opens the database. You can solve this with the command alter database open, in some cases with alter database open resetlogs. $ sqlplus / as sysdba SQL> startup ORACLE instance started. Total System Global Area 1073741824 bytes Fixed Size 8628936 bytes Variable Size 624952632 bytes Database Buffers 436207616 bytes Redo Buffers 3952640 bytes Database mounted. Database opened. SQL> conn user/pass123 Connected. A: After some googling, I found the advice to do the following, and it worked: SQL> startup mount ORACLE Instance started SQL> recover database Media recovery complete SQL> alter database open; Database altered A: I faced the same problem. I restarted the oracle service for that DB instance and the error was gone. A: What worked for me is that I hadn't set the local_listener. To see if the local listener is set, log in with sqlplus / as sysdba, make sure the database is open and run the command show parameter local_listener. If the value is empty, then you will have to set the local_listener with the following SQL command: ALTER SYSTEM SET LOCAL_LISTENER='<LISTENER_NAME_GOES_HERE>'
{ "language": "en", "url": "https://stackoverflow.com/questions/53676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52" }
Q: Amazon SimpleDB Has anyone considered using something along the lines of the Amazon SimpleDB data store as their backend database? SQL Server hosting (at least in the UK) is expensive so could something like this along with cloud file storage (S3) be used for building apps that could grow with your application. Great in theory but would anyone consider using it. In fact is anyone actually using it now for real production software as I would love to read your comments. A: We are using SimpleDB almost exclusively for our new projects. The zero maintenance, high availability, no install aspects are just too good. And for your Ruby developers, check out SimpleRecord, an ActiveRecord like interface for SimpleDB which makes it super easy to use. A: This is a good analysis of Amazon services from Dare. S3 handled what I've typically heard described as "blob storage". A typical Web application typically has media files and other resources (images, CSS stylesheets, scripts, video files, etc) that is simply accessed by name/path. However a lot of these resources also have metadata (e.g. a video file on YouTube has metadata about it's rating, who uploaded it, number of views, etc) which need to be stored as well. This need for queryable, schematized storage is where SimpleDB comes in. EC2 provides a virtual server that can be used for computation complete with a local file system instance which isn't persistent if the virtual server goes down for any reason. With SimpleDB and S3 you have the building blocks to build a large class of "Web 2.0" style applications when you throw in the computational capabilities provided by EC2. However neither S3 nor SimpleDB provides a solution for a developer who simply wants the typical LAMP or WISC developer experience of building a database driven Web application or for applications that may have custom storage needs that don't fit neatly into the buckets of blob storage or schematized storage. Without access to a persistent filesystem, developers on Amazon's cloud computing platform have had to come up with sophisticated solutions involving backing data up manually from EC2 to S3 to get the desired experience. A: I just finished writing a library to make porting an app to simpledb in Perl easy, Net::Amazon::SimpleDB::Simple because I found the Amazon client libraries painful. The library isn't on CPAN yet, but it is at http://rjurneyopen.s3.amazonaws.com/SimpleDB/Simple.pm The idea was to make it trivial to stuff hashes in and out of SimpleDB. I just ported an app to use it. Overall I am impressed with SimpleDB... even inefficient queries take only 2-3 seconds to return. SimpleDB doesn't seem to care about the size of your table, owing to its Erlang/parallel nature. Tablescans are easy for it. The pain comes from the fact that you can't count, sum or group by. If you plan on doing any of those things... then SimpleDB probably isn't for you. At the moment in terms of functionality it exists somewhere in between memcached and MySQL. You can SELECT ORDER BY LIMIT, which is nice. Its also nice that you don't have to scale it yourself, and its nice that it doesn't care how much you stuff into it. But more advanced operations like analytics are painful at best. You'll have to do your own calculations server side. Its also a big plus that on any computer I can use the simpledb CLI http://code.google.com/p/amazon-simpledb-cli/ to query my data. There are some confusing 'gotchas.' 
For instance, attributes can have more than one value, and you have to explicitly set 'replace' when storing items. Also, storing undef or a null string results in a library error, instead of deleting that attribute name/value pair or setting it to a null/empty string. Learning to think in terms of a largely un-normalized way is a little strange too, which is why I would second the suggestion above that says it is best for new applications. Porting from a SQL app to SimpleDB would be painful because your application logic would have to change. The way you do things is a bit different. The Amazon docs are pretty good at explaining this. All of this is extractable in a library that sits atop SimpleDB, so for your use of SimpleDB you will want to pick a good library... you probably don't want to deal with it directly. There is some work on the PHP side to make things easy, and there is my library. There is a Rails activesource, but it doesn't seem to do much for you. All in all it's still early in the game, but compared to other APIs (twitter comes to mind), I have to say that the SimpleDB REST API is pretty simple (especially considering that it is XML) and polite to work with. I would recommend it... depending on the requirements of your application and the economics of your use of it. If you're looking to rapidly scale a service that doesn't put a great load on the DB and don't want to bother with a scalable MySQL/memcache combo... then SimpleDB can offer a 'simple' solution for you. I expect that its features will continue to grow and it will be a good choice for more and more applications that do more complex and interesting things. But right now it is targeted at and appropriate for your typical Web 2.0 service. A: But do you really need SQL Server? Can't you live with PostgreSQL or MySQL? Both have proven to be OK for most tasks. Now if you need SQL Server features then you're out of luck. Another option is to rent a server. How expensive is expensive? (I've used Amazon S3 to store images for an application, it's OK and works fine, at least for that) A: I haven't used SimpleDB, but have been using a combination of S3, EC2, and MySQL for our application. As long as you are willing to use SimpleDB, then you might as well consider using MySQL (which is very scalable, and not that expensive). On the S3 and EC2 side, it is great in practice as well. A: SimpleDB works great for many applications.... if your project will require a lot of analytic reporting, joining, etc, you may consider MySQL or a hybrid model. If you go with SimpleDB, we've developed Radquery.com for our internal use and opened it up to the public.
{ "language": "en", "url": "https://stackoverflow.com/questions/53693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Creating MP4/M4A files with Chapter marks I am trying to join together several audio files into one mp4/m4a file containing chapter metadata. I am currently using QTKit to do this but unfortunately when QTKit exports to m4a format the metadata is all stripped out (this has been confirmed as a bug by Apple); see the sample code. I think this rules QTKit out for this job, but would be happy to be proven wrong as it is a really neat API for it if it worked. So, I am looking for a way to concatenate audio files (input format does not really matter as I can do conversion) into an m4a file with chapter metadata. As an alternative to code, I am open to the idea of using an existing command line tool to accomplish this as long as it is redistributable as part of another application. Any ideas? A: Audiobook Maker does something like this, and I believe it uses ffmpeg under the hood. It's open source, so maybe it's worth a look? A: The command-line tool mp4chaps does the work. It is from the mp4v2-utils package if you use Ubuntu. Remember to specify the qt format for QuickTime, because Nero-format chapter marks seem to be used less nowadays. A: I discovered these guys: sensoryresearch, who license an API for writing chapter/text/link tracks to MP4s (which is what an M4A is). A: Depending on where the bug is, you could try going straight to the QuickTime C APIs to write the movie file. You might also try adding the chapters track using the C APIs. Any word on when Apple will fix the bug? I am planning to create enhanced podcasts with QTKit, and need this to work.
{ "language": "en", "url": "https://stackoverflow.com/questions/53705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Does Delphi call inherited on overridden procedures if there is no explicit call Does Delphi call inherited on overridden procedures if there is no explicit call in the code, i.e. (inherited;)? I have the following structure (from super to sub class) TForm >> TBaseForm >> TAnyOtherForm All the forms in the project will be derived from TBaseForm, as this will have all the standard set-up and destructive parts that are used for every form (security, validation etc). TBaseForm has OnCreate and OnDestroy procedures with the code to do this, but if someone (i.e. me) forgot to add inherited to the OnCreate on TAnyOtherForm, would Delphi call it for me? I have found references on the web that say it is not required, but nowhere says if it gets called if it is omitted from the code. Also, if it does call inherited for me, when will it call it? A: The inherited call has to be made explicitly. In general no language automatically calls the inherited function in equivalent situations (class constructors not included). It is easy to forget to make the inherited call in a class constructor. In such a situation, if a base class needs to initialize any data, you have an access violation waiting to happen. Perhaps you could override DoCreate and DoDestroy in your TBaseForm class so you could ensure some code is executed regardless of the implementation of child classes. // interface TBaseForm = Class(TForm) ... Protected Procedure DoCreate; Override; End; // implementation Procedure TBaseForm.DoCreate; Begin // do work here // let the parent fire the OnCreate event Inherited DoCreate; End; A: It is worth mentioning that not calling inherited in Destroy of any object can cause memory leaks. There are tools available to check for this in your source code. A: Inherited must be explicitly called in descendant objects as well as in visual form inheritance. If you use class completion then it adds inherited automatically if you flagged the definition as override (but not for reintroduce). If you are using visual form inheritance, then when you add a new event handler through the form editor it will add inherited as well. A: No, if you leave out the call to inherited, it will not be called. Otherwise it would not be possible to override a method and totally omit the parent version of it. A: The inherited code is not called implicitly, as the others have indicated. You must call it explicitly. This gives you some useful flexibility. For instance, you might want to do some preprocessing code prior to the inherited code, then do some post-processing code afterwards. This might look like: procedure TMyCalcObject.SolveForX; begin ResetCalcState; inherited SolveForX; PostProcessSolveForX; end; A: You must call it explicitly. This allows a lot of flexibility, since you can choose at which point in the code to call the inherited method. But it's also a big source of bugs. It's easy to forget to call the inherited function, and the compiler has no way to tell if you did it deliberately or you just forgot. There should be some kind of directive "skip_inherited" to tell the compiler that you don't want to call the inherited method. The compiler would then easily report an error if it found neither "inherited" nor "skip_inherited". That would mean you forgot. But unfortunately nobody in CodeGear thought of that. A: No. That's the whole point of overriding.
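To make the failure mode from the question concrete, a minimal sketch (the class names are from the question; the handler body is a made-up example):

procedure TAnyOtherForm.FormCreate(Sender: TObject);
begin
  inherited; // runs TBaseForm's FormCreate (security, validation, etc.)
             // delete this line and the base form's set-up silently never runs
  Caption := 'Ready'; // hypothetical form-specific set-up
end;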
{ "language": "en", "url": "https://stackoverflow.com/questions/53715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Hiding data points in Excel line charts It is obviously possible to hide individual data points in an Excel line chart. * *Select a data point. *Right click -> Format Data Point... *Select the Patterns tab *Set Line to None How do you accomplish the same thing in VBA? Intuition tells me there should be a property on the Point object Chart.SeriesCollection(<index>).Points(<index>) which deals with this... A: Actually if you are going to use SpyJournal's answer it has to be =IF(b2=0,NA(),b2), otherwise Excel just recognizes it as text, not as an 'official' #N/A A: "Describe it to the teddy bear" works almost every time... You have to go to the Border child object of the Point object and set its LineStyle to xlNone. A: As a general tip: If you know how to do something in Excel, but don't know how to do it in VBA, you can just record a macro and look at the recorded VBA code (works at least most of the time) A: There is a non-VBA solution as well that can also be controlled from the VBA code. In Excel a data point represented by a #N/A will not display. Thus you can use a formula - the easiest is an IF function - that returns an #N/A as text in the graph data. This data point will then not display, which means you don't need to try and manipulate the format for it. An example is simply to generate your graph data in a table, and then replicate it below with a formula that simply does this =If(B2=0,"#N/A",B2) This works when you want to stop line charts from displaying 0 values, for example. A: This is probably too late to be of assistance but the answer by SpyJournal, whilst easy and elegant, is slightly incorrect as it is necessary to omit the quotes around #N/A A: Yes. It doesn't have to have the quotes to be a true not-available cell content, but for me #N/A still plots as 0 in my charts. The only way I can get it not to plot is to have the cell blank. A: I tried "#N/A" with quotes in Excel 2007 and as a result the data point is shown like a zero in the graph. It works without the quotes.
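Putting the "teddy bear" answer into runnable form, a minimal VBA sketch (the sheet name, chart index and point index are hypothetical):

Sub HideThirdPointOfFirstSeries()
    Dim pt As Point
    ' Hypothetical: first embedded chart on Sheet1, first series, third point.
    Set pt = Worksheets("Sheet1").ChartObjects(1).Chart _
                 .SeriesCollection(1).Points(3)
    ' Same effect as Format Data Point... -> Patterns -> Line: None.
    pt.Border.LineStyle = xlNone
End Sub

For a line chart you may also want pt.MarkerStyle = xlMarkerStyleNone so the marker disappears along with the line segment.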
{ "language": "en", "url": "https://stackoverflow.com/questions/53719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Will HTML Encoding prevent all kinds of XSS attacks? I am not concerned about other kinds of attacks. Just want to know whether HTML Encode can prevent all kinds of XSS attacks. Is there some way to do an XSS attack even if HTML Encode is used? A: No. Putting aside the subject of allowing some tags (not really the point of the question), HtmlEncode simply does NOT cover all XSS attacks. For instance, consider server-generated client-side javascript - the server dynamically outputs htmlencoded values directly into the client-side javascript; htmlencode will not stop injected script from executing. Next, consider the following pseudocode: <input value=<%= HtmlEncode(somevar) %> id=textbox> Now, in case it's not immediately obvious, if somevar (sent by the user, of course) is set for example to a onclick=alert(document.cookie) the resulting output is <input value=a onclick=alert(document.cookie) id=textbox> which would clearly work. Obviously, this can be (almost) any other script... and HtmlEncode would not help much. There are a few additional vectors to be considered... including the third flavor of XSS, called DOM-based XSS (wherein the malicious script is generated dynamically on the client, e.g. based on # values). Also don't forget about UTF-7 type attacks - where the attack looks like +ADw-script+AD4-alert(document.cookie)+ADw-/script+AD4- Nothing much to encode there... The solution, of course (in addition to proper and restrictive white-list input validation), is to perform context-sensitive encoding: HtmlEncoding is great IF your output context IS HTML, or maybe you need JavaScriptEncoding, or VBScriptEncoding, or AttributeValueEncoding, or... etc. If you're using MS ASP.NET, you can use their Anti-XSS Library, which provides all of the necessary context-encoding methods. Note that encoding should not be restricted to user input; it also applies to stored values from the database, text files, etc. Oh, and don't forget to explicitly set the charset, both in the HTTP header AND the META tag, otherwise you'll still have UTF-7 vulnerabilities... For more information, and a pretty definitive list (constantly updated), check out RSnake's Cheat Sheet: http://ha.ckers.org/xss.html A: If you systematically encode all user input before displaying then yes, you are mostly safe - but you are still not 100% safe. (See @Avid's post for more details.) In addition, problems arise when you need to let some tags go unencoded so that you allow users to post images or bold text or any feature that requires the user's input to be processed as (or converted to) un-encoded markup. You will have to set up a decision-making system to decide which tags are allowed and which are not, and it is always possible that someone will figure out a way to get a non-allowed tag to pass through. It helps if you follow Joel's advice of Making Wrong Code Look Wrong or if your language helps you by warning/not compiling when you are outputting unprocessed user data (static typing). A: If you encode everything it will. (depending on your platform and the implementation of htmlencode) But any useful web application is so complex that it's easy to forget to check every part of it. Or maybe a 3rd party component isn't safe. Or maybe some code path that you thought did encoding didn't do it, so you missed it somewhere else. So you might want to check things on the input side too. And you might want to check stuff you read from the database. A: As mentioned by everyone else, you're safe as long as you encode all user input before displaying it.
This includes all request parameters and data retrieved from the database that can be changed by user input. As mentioned by Pat you'll sometimes want to display some tags, just not all tags. One common way to do this is to use a markup language like Textile, Markdown, or BBCode. However, even markup languages can be vulnerable to XSS, just be aware. # Markup example [foo](javascript:alert\('bar'\);) If you do decide to let "safe" tags through I would recommend finding some existing library to parse & sanitize your code before output. There are a lot of XSS vectors out there that you would have to detect before your sanitizer is fairly safe. A: I second metavida's advice to find a third-party library to handle output filtering. Neutralizing HTML characters is a good approach to stopping XSS attacks. However, the code you use to transform metacharacters can be vulnerable to evasion attacks; for instance, if it doesn't properly handle Unicode and internationalization. A classic simple mistake homebrew output filters make is to catch only < and >, but miss things like ", which can break user-controlled output out into the attribute space of an HTML tag, where Javascript can be attached to the DOM. A: No, just encoding common HTML tokens DOES NOT completely protect your site from XSS attacks. See, for example, this XSS vulnerability found in google.com: http://www.securiteam.com/securitynews/6Z00L0AEUE.html The important thing about this type of vulnerability is that the attacker is able to encode his XSS payload using UTF-7, and if you haven't specified a different character encoding on your page, a user's browser could interpret the UTF-7 payload and execute the attack script. A: I'd like to suggest HTML Purifier (http://htmlpurifier.org/) It doesn't just filter the html, it basically tokenizes and re-compiles it. It is truly industrial-strength. It has the additional benefit of allowing you to ensure valid html/xhtml output. Also n'thing textile; it's a great tool and I use it all the time, but I'd run it through HTML Purifier too. I don't think you understood what I meant re tokens. HTML Purifier doesn't just 'filter', it actually reconstructs the html. http://htmlpurifier.org/comparison.html A: One other thing you need to check is where your input comes from. You can use the referrer string (most of the time) to check that it's from your own page, but putting a hidden random number or something in your form and then checking it (with a session-set variable maybe) also helps to confirm that the input is coming from your own site and not some phishing site. A: I don't believe so. Html Encode converts all functional characters (characters which could be interpreted by the browser as code) into entity references which cannot be parsed by the browser and thus cannot be executed. &lt;script/&gt; There is no way that the above can be executed by the browser. (Unless there is a bug in the browser, of course.) A: myString.replace(/<[^>]*>?/gm, ''); I use it, successfully. See: Strip HTML from Text JavaScript
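To illustrate the quoted-attribute point from the first answer, a small C# sketch (HttpUtility lives in System.Web; JavaScriptStringEncode needs .NET 4 or later, and the payload string is just an example): the same HtmlEncode output is dangerous unquoted but harmless inside quotes, and attribute and JavaScript contexts get their own encoders.

using System;
using System.Web;

class EncodingContexts
{
    static void Main()
    {
        string somevar = "a onclick=alert(document.cookie)"; // attacker-supplied

        // Unquoted attribute: HtmlEncode leaves spaces and '=' alone,
        // so the payload still escapes the value and adds an attribute.
        Console.WriteLine("<input value=" + HttpUtility.HtmlEncode(somevar) + " id=textbox>");

        // Quoted attribute: the payload stays inside the value.
        Console.WriteLine("<input value=\"" + HttpUtility.HtmlAttributeEncode(somevar) + "\" id=textbox>");

        // JavaScript context needs a JavaScript encoder, not an HTML one.
        Console.WriteLine("<script>var x = '" + HttpUtility.JavaScriptStringEncode(somevar) + "';</script>");
    }
}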
{ "language": "en", "url": "https://stackoverflow.com/questions/53728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "69" }
Q: Best use of indices on temporary tables in T-SQL If you're creating a temporary table within a stored procedure and want to add an index or two on it, to improve the performance of any additional statements made against it, what is the best approach? Sybase says this: "the table must contain data when the index is created. If you create the temporary table and create the index on an empty table, Adaptive Server does not create column statistics such as histograms and densities. If you insert data rows after creating the index, the optimizer has incomplete statistics." but recently a colleague mentioned that if I create the temp table and indices in a different stored procedure to the one which actually uses the temporary table, then the Adaptive Server optimiser will be able to make use of them. On the whole, I'm not a big fan of wrapper procedures that add little value, so I've not actually got around to testing this, but I thought I'd put the question out there, to see if anyone had any other approaches or advice? A: A few thoughts: * *If your temporary table is so big that you have to index it, then is there a better way to solve the problem? *You can force it to use the index (if you are sure that the index is the correct way to access the table) by giving an optimiser hint, of the form: SELECT * FROM #table (index idIndex) WHERE id = @id If you are interested in performance tips in general, I've answered a couple of other questions about that at some length here: * *Favourite performance tuning tricks *How do you optimize tables for specific queries? A: What's the problem with adding the indexes after you put data into the temp table? One thing you need to be mindful of is the visibility of the index to other instances of the procedure that might be running at the same time. I like to add a guid to these kinds of temp tables (and to the indexes), to make sure there is never a conflict. The other benefit of this approach is that you could simply make the temp table a real table. Also, make sure that you will need to query the data in these temp tables more than once during the running of the stored procedure, otherwise the cost of index creation will outweigh the benefit to the select. A: In Sybase if you create a temp table and then use it in one proc, the plan for the select is built using an estimate of 100 rows in the table. (The plan is built when the procedure starts, before the tables are populated.) This can result in the temp table being table scanned since it is only "100 rows". Calling another proc causes Sybase to build the plan for the select with the actual number of rows; this allows the optimizer to pick a better index to use. I have seen significant improvements using this approach but test on your database as sometimes there is no difference.
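A minimal sketch of the populate-then-index pattern the answers describe (table, column and index names are made up; the (index ...) hint is the Sybase ASE syntax shown in the first answer):

-- Populate first, then create the index, so Adaptive Server
-- has real statistics instead of the 100-row guess.
SELECT customer_id, amount
INTO #work
FROM orders
WHERE order_date >= @cutoff

CREATE INDEX idx_work_cust ON #work (customer_id)

-- Later statements can now use the index; force it if you must:
SELECT * FROM #work (index idx_work_cust) WHERE customer_id = @id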
{ "language": "en", "url": "https://stackoverflow.com/questions/53734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How to escape a # in velocity I would like to know how I can escape a # in Velocity. Backslash seems to escape it, but it prints itself as well. This: \#\# prints: \#\# I would like: ##
A: this: #[[ ## ]]# will yield: ## Anything within #[[ ... ]]# is unparsed.
A: Maybe the following site helps? http://velocity.apache.org/tools/1.4/generic/EscapeTool.html
A: If you don't want to bother with the EscapeTool, you can do this: #set( $H = '#' ) $H$H
A: Add the esc tool to your toolbox and then you can use ${esc.hash}
A: ${esc.h} will output # as per this link
A: The set technique is a good way to get around any characters you need escaping. For example, if you want to have $name followed by "_lastname", you can do: #set( $n = '_lastname' ) and have this in your template: $name$n and it's all good.
{ "language": "en", "url": "https://stackoverflow.com/questions/53744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58" }
Q: Which compiles to faster code: "n * 3" or "n+(n*2)"? Which compiles to faster code: "ans = n * 3" or "ans = n+(n*2)"? Assuming that n is either an int or a long, and it is running on a modern Win32 Intel box. Would this be different if there was some dereferencing involved, that is, which of these would be faster?

long a;
long *pn;
long ans;
...
*pn = some_number;
ans = *pn * 3;

Or

ans = *pn+(*pn*2);

Or, is it something one need not worry about as optimizing compilers are likely to account for this in any case?
A: IMO such micro-optimization is not necessary unless you work with some exotic compiler. I would put readability in first place.
A: This would depend on the compiler, its configuration and the surrounding code. You should not try and guess whether things are 'faster' without taking measurements. In general you should not worry about this kind of nanoscale optimisation stuff nowadays - it's almost always a complete irrelevance, and if you were genuinely working in a domain where it mattered, you would already be using a profiler and looking at the assembly language output of the compiler.
A: It's not difficult to find out what the compiler is doing with your code (I'm using DevStudio 2005 here). Write a simple program with the following code:

int i = 45, j, k;
j = i * 3;
k = i + (i * 2);

Place a breakpoint on the middle line and run the code using the debugger. When the breakpoint is triggered, right click on the source file and select "Go To Disassembly". You will now have a window with the code the CPU is executing. You will notice in this case that the last two lines produce exactly the same instructions, namely, "lea eax,[ebx+ebx*2]" (not bit shifting and adding in this particular case). On a modern IA32 CPU, it's probably more efficient to do a straight MUL rather than bit shifting due to the pipelined nature of the CPU, which incurs a penalty when using a modified value too soon. This demonstrates what aku is talking about, namely, compilers are clever enough to pick the best instructions for your code.
A: It doesn't matter. Modern processors can execute an integer MUL instruction in one clock cycle or less, unlike older processors which needed to perform a series of shifts and adds internally in order to perform the MUL, thereby using multiple cycles. I would bet that

MUL EAX,3

executes faster than

MOV EBX,EAX
SHL EAX,1
ADD EAX,EBX

The last processor where this sort of optimization might have been useful was probably the 486. (Yes, this is biased to Intel processors, but is probably representative of other architectures as well.) In any event, any reasonable compiler should be able to generate the smallest/fastest code. So always go with readability first.
A: As it's easy to measure it yourself, why not do just that? (Using gcc and time from cygwin)

/* test1.c */
int main()
{
    int result = 0;
    int times = 1000000000;
    while (--times)
        result = result * 3;
    return result;
}

machine:~$ gcc -O2 test1.c -o test1
machine:~$ time ./test1.exe

real    0m0.673s
user    0m0.608s
sys     0m0.000s

Do the test a couple of times and repeat for the other case. If you want to peek at the assembly code, gcc -S -O2 test1.c
A: It does depend on the compiler you are actually using, but very probably they translate to the same code. You can check it by yourself by creating a small test program and checking its disassembly.
A: Most compilers are smart enough to decompose an integer multiplication into a series of bit shifts and adds.
I don't know about Windows compilers, but at least with gcc you can get it to spit out the assembler, and if you look at that you can probably see identical assembler for both ways of writing it.
A: It doesn't matter. I think there are more important things to optimize. How much time have you invested thinking about and writing that question instead of coding and testing it yourself? :-)
A: As long as you're using a decent optimising compiler, just write code that's easy for the compiler to understand. This makes it easier for the compiler to perform clever optimisations. Your asking this question indicates that an optimising compiler knows more about optimisation than you do. So trust the compiler. Use n * 3. Have a look at this answer as well.
A: Compilers are good at optimising code such as yours. Any modern compiler would produce the same code for both cases, and would additionally replace * 2 by a left shift.
A: Trust your compiler to optimize little pieces of code like that. Readability is much more important at the code level. True optimization should come at a higher level.
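A quick way to check the "identical assembler" claim for yourself (a sketch; the file name is arbitrary and any reasonably recent gcc should do):

/* mul3.c: compile with "gcc -O2 -S mul3.c" and inspect mul3.s;
   on x86, both functions typically compile to the same single lea instruction */
int mul_a(int n) { return n * 3; }
int mul_b(int n) { return n + (n * 2); }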
{ "language": "en", "url": "https://stackoverflow.com/questions/53757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Best Way of Automating Daily Build OK, so we all know the daily build is the heart beat of a project, but what's the single best way of automating it? We have Perl scripts wrapping our pipeline which includes ClearCase, VS2005 (C++), Intel FORTRAN, Inno Setup. We use cron jobs on UNIX to schedule the build, and host a simple Apache web server to view and monitor the build. All in all it's rather complex; I would like to know what's the best off-the-shelf solution that people use? And yes, I did say FORTRAN; there's no escaping it sometimes. It works, and there's no point doing a huge re-implementation project for some tried and tested FEA code that just works.
A: A new one to me that I've heard is quite slick is Hudson - also with MSBuild support.
A: We're in the process of implementing CC.Net. So far it seems like it would fit your model pretty well. Out of the box it offers automated building, results tracking and notification. I'm not sure how detailed the build-in-progress monitoring is though.
A: There are many tools that specifically handle this:
* Cruise Control
* Hudson
* Continuum
The tools have out-of-the-box support for the most common build types. They all also support some sort of "run this script" type build process. In the end you should use the nicer build tools (MSBuild, Ant, Maven, Make, ...) where you can and fill the gaps for the odder tools with custom scripts. The automated build can just invoke these in the right order.
A: We use TeamCity - but then it's a simple C#/Java development - maybe your pipeline can be done via scripts it can drive?
A: Here is the best resource we found to help us pick a Continuous Integration tool. We have been evaluating 5 or 6 tools on this page. http://confluence.public.thoughtworks.org/display/CC/CI+Feature+Matrix
A: I have had success using Visual Build Pro.
A: CC.NET is very powerful. Used it and was really happy about it. Even the status icon in the systray. It's a small detail, but it gives you a good overview of the project's "health". You immediately feel motivated to fix the tests when you see it red. Now we use a self-baked series of scripts. Since we write Python, compilation is non-existent, so the only problem is running the tests.
A: If you're working with Visual Studio, be sure to check out Team Foundation Build to see if it will suit your situation. It looks like Buck Hodges' blog post on the VS 2008 version is a good resource, too.
A: I know this is a really old question, but it's still coming up in searches, so someone should mention Jenkins - the open source continuation of Hudson. From the Jenkins wiki: Among those things, current Jenkins focuses on the following two jobs:
* Building/testing software projects continuously, just like CruiseControl or DamageControl. In a nutshell, Jenkins provides an easy-to-use so-called continuous integration system, making it easier for developers to integrate changes to the project, and making it easier for users to obtain a fresh build. The automated, continuous build increases the productivity.
* Monitoring executions of externally-run jobs, such as cron jobs and procmail jobs, even those that are run on a remote machine. For example, with cron, all you receive is regular e-mails that capture the output, and it is up to you to look at them diligently and notice when it broke. Jenkins keeps those outputs and makes it easy for you to notice when something is wrong.
It was originally built with Java in mind, so it integrates well with lots of other Java tools, but you can use it with any language, including all those mentioned by the OP.
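For the cron-driven setup described in the question, the scheduling half can stay as simple as a crontab entry that wraps the existing Perl pipeline (the paths here are hypothetical):

# run the nightly build at 02:00 and append a log the Apache status page can serve
0 2 * * * /opt/build/run_nightly.pl >> /var/www/buildlogs/nightly.log 2>&1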
{ "language": "en", "url": "https://stackoverflow.com/questions/53766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I generate a histogram for a given probability distribution (for functional testing a server)? I am trying to automate functional testing of a server using a realistic frequency distribution of requests (sort of load testing, sort of simulation). I've chosen the Weibull distribution as it "sort of" matches the distribution I've observed (ramps up quickly, drops off quickly but not instantly). I use this distribution to generate the number of requests that should be sent each day between a given start and end date. I've hacked together an algorithm in Python that sort of works but it feels kludgy:

how_many_days = (end_date - start_date).days
freqs = defaultdict(int)
for x in xrange(how_many_responses):
    freqs[int(how_many_days * weibullvariate(0.5, 2))] += 1
timeline = []
day = start_date
for i, freq in sorted(freqs.iteritems()):
    timeline.append((day, freq))
    day += timedelta(days=1)
return timeline

What better ways are there to do this?
A: Why don't you try The Grinder 3 to load test your server? It comes with all this and more prebuilt, and it supports Python as a scripting language.
A: Slightly longer but probably more readable rework of your last four lines:

samples = [0 for i in xrange(how_many_days + 1)]
for s in xrange(how_many_responses):
    samples[min(int(how_many_days * weibullvariate(0.5, 2)), how_many_days)] += 1
histogram = zip(timeline, samples)
print '\n'.join((d.strftime('%Y-%m-%d ') + "*" * c) for d, c in histogram)

This always drops the samples within the date range, but you get a corresponding bump at the end of the timeline from all of the samples that are above the [0, 1] range.
A: This is quick and probably not that accurate, but if you calculate the PDF yourself, then at least you make it easier to lay several smaller/larger ones on a single timeline. dev is the std deviation in the Gaussian noise, which controls the roughness. Note that this is not the 'right' way to generate what you want, but it's easy.

import math
from datetime import datetime, timedelta, date
from random import gauss

how_many_responses = 1000
start_date = date(2008, 5, 1)
end_date = date(2008, 6, 1)
num_days = (end_date - start_date).days + 1
timeline = [start_date + timedelta(i) for i in xrange(num_days)]

def weibull(x, k, l):
    return (k / l) * (x / l)**(k-1) * math.e**(-(x/l)**k)

dev = 0.1
samples = [i * 1.25/(num_days-1) for i in range(num_days)]
probs = [weibull(i, 2, 0.5) for i in samples]
noise = [gauss(0, dev) for i in samples]
simdata = [max(0., e + n) for (e, n) in zip(probs, noise)]
events = [int(p * (how_many_responses / sum(probs))) for p in simdata]

histogram = zip(timeline, events)
print '\n'.join((d.strftime('%Y-%m-%d ') + "*" * c) for d, c in histogram)

A: Instead of giving the number of requests as a fixed value, why not use a scaling factor instead? At the moment, you're treating requests as a limited quantity, and randomising the days on which those requests fall. It would seem more reasonable to treat your requests-per-day as independent.

from datetime import *
from random import *

timeline = []
scaling = 10
start_date = date(2008, 5, 1)
end_date = date(2008, 6, 1)

num_days = (end_date - start_date).days + 1
days = [start_date + timedelta(i) for i in range(num_days)]
requests = [int(scaling * weibullvariate(0.5, 2)) for i in range(num_days)]
timeline = zip(days, requests)
timeline

A: I rewrote the code above to be shorter (but maybe it's too obfuscated now?)
(Note: as written it also needs these imports.)

from datetime import timedelta
from itertools import count, groupby, imap
from random import weibullvariate

timeline = (start_date + timedelta(days=days) for days in count(0))
how_many_days = (end_date - start_date).days
pick_a_day = lambda _: int(how_many_days * weibullvariate(0.5, 2))
days = sorted(imap(pick_a_day, xrange(how_many_responses)))
histogram = zip(timeline, (len(list(responses)) for day, responses in groupby(days)))
print '\n'.join((d.strftime('%Y-%m-%d ') + "*" * c) for d, c in histogram)

A: Another solution is to use Rpy, which puts all of the power of R (including lots of tools for distributions) easily into Python.
{ "language": "en", "url": "https://stackoverflow.com/questions/53786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is WindowsFormsHost fit for purpose (.net WPF hosting WinForms)? A GUI driven application needs to host some prebuilt WinForms based components. These components provide high performance interactive views using a mixture of GDI+ and DirectX. The views handle control input and display custom graphical renderings. The components are tested in a WinForms harness by the supplier. Can a commercial application use WPF for its GUI and rely on WindowsFormsHost to host the WinForms components, or have you experience of technical glitches, e.g. input lags or update issues, that would make you cautious?
A: One problem I've run into is that embedded WinForms controls do not participate in any transform operations applied to their WPF container. This results in visual flashing effects and the embedded control appearing in an inappropriate location. I worked around this by binding the visibility of the WindowsFormsHost to the animation state of its WPF container, so that the embedded control was hidden until the animation completed, like below.

<WindowsFormsHost Grid.Row="1" Grid.Column="1" Margin="8,0,0,0"
    Visibility="{Binding ActualHeight, RelativeSource={RelativeSource Mode=FindAncestor, AncestorType=UserControl}, Converter={StaticResource WinFormsControlVisibilityConverter}}" >
    <winforms:DateTimePicker x:Name="datepickerOrderExpected" Width="140"
        Format="Custom" CustomFormat="M/dd/yy h:mm tt"
        ValueChanged="OnEditDateTimeOrderExpected" />
</WindowsFormsHost>

A: We're currently using WindowsFormsHost in our software to host the WinForms DataGridView control, and we've not had any real problems with it. A few things to watch out for though: The first is the airspace restrictions. Practically speaking, this means that WinForms content always appears on top of WPF content. So if you are using WPF adorners they will appear to be "trimmed" if they hit up against a WinForms region in your app. The second is that, because they use Windows resources, you have to manage the lifetimes of WinForms components more carefully. Unlike WPF components, WinForms controls expect to be Disposed when they're finished with. This makes it tricky to include them in a pure XAML view. The last thing is that WinForms controls don't seem to resize as smoothly as the rest of the WPF display: they tend to snap to their new size once you've finished making an adjustment.
A: I hosted WPF controls in WinForms and vice versa without problems. Though, I would test such scenarios extensively because it's hard to predict how a complex control will behave.
A: Do note the absence of a WPF Application object when hosting in WinForms. This can result in problems if you're taking an existing WPF component and hosting it in WinForms, since resource lookups and the like will never look in application scope. You can create your own Application object if it is a problem.
A: As @Kent Boogaart mentioned, I've run into the situation where a WPF application hosted in WinForms doesn't have the WPF Application object (i.e. Application.Current). This can cause many issues, such as Dispatchers not invoking threads back to the UI thread. This would only apply if you're hosting in WinForms, not the other way around. I've also had issues with modal dialogs behaving strangely (i.e. ShowModal calls). I'm assuming this is because, in WinForms, each control has its own Win32 handle while in WPF, there is only one handle for the entire Window.
Whatever you do, test :)
A: You can solve the airspace problem by using .NET 3.5 SP1: These types of airspace restrictions represent a huge limitation in a framework, like WPF, where element composition is used to create very rich user experiences. With a D3DImage solution, these restrictions are no longer present! See Introduction to D3DImage.
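For reference, the basic hosting arrangement discussed in this question is only a few lines of C# (a sketch: layoutRoot is a hypothetical WPF Grid or Panel, and DataGridView is simply the control mentioned above):

// embed a WinForms control inside a WPF layout
using System.Windows.Forms.Integration;
...
var host = new WindowsFormsHost();
host.Child = new System.Windows.Forms.DataGridView(); // the WinForms control to embed
layoutRoot.Children.Add(host);                        // attach the host like any WPF element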
{ "language": "en", "url": "https://stackoverflow.com/questions/53796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What is the best tool to benchmark my JavaScript? I'm currently working on a JavaScript tool that, during the course of its execution, will ultimately traverse each node in the DOM. Because this has potential to be a very expensive task, I'd like to benchmark the performance of this script. What's the best, free tool for benchmarking a script such as this across the major browsers? Ideally, I'd like the tool (or set of tools, even):
* To generate some form of report based on the results of the test. It can be as simple as a table showing execution times, or as complex as generating some form of a chart. Either way is fine.
* To be free. It's not that I don't believe in paying for software, it's just that I don't have a major need for a tool like this in my typical day-to-day tasks.
If possible, I'd also like the tool to generate varying levels of complex pages so that I can stress test a set of DOMs. This isn't a necessity - if I need to do so, I can write one myself; however, I figured I'd poll the community first to see if something already exists.
A: Firebug does include JS profiling, and it is probably the best out there. While I've had problems with Firebug's debugger, its profiler is currently top-of-the-line. Venkman is also an older JS debugger/profiler for Firefox, just in case you run into Firebug issues. Using these tools should get you just about all the profiling you need across all browsers, even though you'll only be monitoring Firefox. If you truly need to get down to the dirty details of IE profiling and the like, there are a number of tools online that inject profiling calls into your JavaScript to help monitor all profiler-lacking browsers... but even to a JS performance nazi like me, this seems unnecessary. Note: A new, very promising IE8 JS profiler has recently been announced: http://blogs.msdn.com/ie/archive/2008/09/11/introducing-the-ie8-developer-tools-jscript-profiler.aspx.
A: In Firebug and Firebug Lite you can call the console.time() and console.timeEnd() methods in your code to start and end a timer around a particular piece of code. The Profiler tool in Firebug will measure how long each function takes. I've used it a lot to narrow down which lines of a particularly slow function are causing the slowdown.
A: I believe Firebug includes profiling of JS code. Of course, it's not available in all the major browsers--only Firefox.
A: Jeff posted The Great Browser JavaScript Showdown and the SunSpider JavaScript Benchmark. But I wonder where the download link is ;)
A: For JavaScript, XmlHttpRequest, DOM access, rendering times and network traffic for IE6, 7 & 8 you can use the free dynaTrace AJAX Edition
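A sketch of the console.time()/console.timeEnd() approach mentioned above (the traversal function is hypothetical):

console.time('domWalk');        // start a named timer
walkAllNodes(document.body);    // hypothetical function under test that visits each DOM node
console.timeEnd('domWalk');     // stop the timer and log the elapsed time to the console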
{ "language": "en", "url": "https://stackoverflow.com/questions/53802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Can I turn off automatic merging in Subversion? We're looking at moving from a check-out/edit/check-in style of version control system to Subversion, and during the evaluation we discovered that when you perform an Update action in TortoiseSVN (and presumably in any Subversion client?), if changes in the repository that need to be applied to files that you've been editing don't cause any conflicts then they'll be automatically/silently merged. This scares us a little, as it's possible that this merge, while not producing any compile errors, could at least introduce some logic errors that may not be easily detected. Very simple example: I'm working within a C# method changing some logic in the latter part of the method, and somebody else changes the value that a variable gets initialised to at the start of the method. The other person's change isn't in the lines of code that I'm working on so there won't be a conflict; but it's possible to dramatically change the output of the method. What we were hoping the situation would be is that if a merge needs to occur, then the two files would be shown and at least a simple accept/reject change option be presented, so that at least we're aware that something has changed and are given the option to see if it impacts our code. Is there a way to do this with Subversion/TortoiseSVN? Or are we stuck in our present working ways too much and should just let it do its thing...
A: The best way around this is to educate the developers. After you do an update in TortoiseSVN it shows you a list of affected files. Simply double clicking each file will give you the diff between them. Then you'll be able to see what changed between your version and the latest repository version.
A: It's in the FAQ: How can I prevent Subversion from doing automatic merges?
* In TortoiseSVN->Settings->General->Subversion configuration file, click on the edit button.
* Change the [helpers] section by adding diff-cmd = "C:\\false.bat" (note the double backslash)
* Create the file C:\false.bat which contains two lines:
@type %9
@exit 1
A: Here is a trick for TortoiseSVN: How to turn off “auto-merge” in Subversion. The trick for svn.exe is to set the svn external diff tool to a program that will always fail: svn --diff-cmd=/bin/false If the external diff program fails, svn concludes that the conflict is unresolvable and won't merge it.
A: I would suggest you should learn to work with the natural Subversion model if at all possible. In practice we find conflicts are rare, and the type of logic conflict you talk about is almost non-existent (I can't recall an instance in the last 4 years in our repository). Team members should check in changes on as small a scale as possible (whilst maintaining correctness) rather than batching up a whole day's work to just check it in once. This will reduce the possibility of stepping on someone else's work. If you are concerned about a particular change you are making, Subversion does provide a locking mechanism to let you prevent other changes to the file. See the Red Book chapters on locking.
A: This is why automated (unit) testing is a fundamental part of distributed software development. In the example you give, at least one unit test should fail on svn update and alert you to the error. Remember what Subversion is: a version control system, not a perfectly-working-code-merging tool.
{ "language": "en", "url": "https://stackoverflow.com/questions/53803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Reintroducing functions in Delphi What was the motivation for having the reintroduce keyword in Delphi? If you have a child class that contains a function with the same name as a virtual function in the parent class and it is not declared with the override modifier then it is a compile error. Adding the reintroduce modifier in such situations fixes the error, but I have never grasped the reasoning for the compile error.
A: There are lots of answers here about why a compiler that lets you hide a member function silently is a bad idea. But no modern compiler silently hides member functions. Even in C++, where it's allowed to do so, there's always a warning about it, and that ought to be enough. So why require "reintroduce"? The main reason is that this is the sort of bug that can actually appear by accident, when you're not looking at compiler warnings anymore. For example, let's say you're inheriting from TComponent, and the Delphi designers add a new virtual function to TComponent. The bad news is your derived component, which you wrote five years ago and distributed to others, already has a function with that name. If the compiler just accepted that situation, some end user might recompile your component and ignore the warning. Strange things would happen, and you would get blamed. This requires them to explicitly accept that the function is not the same function.
A: If you declare a method in a descendant class that has the same name as a method in an ancestor class then you are hiding that ancestor method — meaning if you have an instance of that descendant class (that is referenced as that class) then you will not get the behavior of the ancestor. When the ancestor's method is virtual or dynamic, the compiler will give you a warning. Now you have one of two choices to suppress that warning message:
* Adding the keyword reintroduce just tells the compiler you know you are hiding that method, and it suppresses the warning. You can still use the inherited keyword within your implementation of that descendant method to call the ancestor method.
* If the ancestor's method was virtual or dynamic then you can use override. It has the added behavior that if this descendant object is accessed through an expression of the ancestor type, then the call to that method will still be to the descendant method (which then may optionally call the ancestor through inherited).
So the difference between override and reintroduce is in polymorphism. With reintroduce, if you cast the descendant object as the parent type and then call that method, you will get the ancestor method; but if you access it as the descendant type then you will get the behavior of the descendant. With override you always get the descendant. If the ancestor method was neither virtual nor dynamic, then reintroduce does not apply because that behavior is implicit. (Actually you could use a class helper, but we won't go there now.) In spite of what Malach said, you can still call inherited in a reintroduced method, even if the parent was neither virtual nor dynamic. Essentially reintroduce is just like override, but it works with non-dynamic and non-virtual methods, and it does not replace the behavior if the object instance is accessed via an expression of the ancestor type. Further explanation: Reintroduce is a way of communicating intent to the compiler that you did not make an error.
We override a method in an ancestor with the override keyword, but it requires that the ancestor method be virtual or dynamic, and that you want the behavior to change when the object is accessed as the ancestor class. Now enter reintroduce. It lets you tell the compiler that you did not accidentally create a method with the same name as a virtual or dynamic ancestor method (which would be annoying if the compiler didn't warn you about it).
A: The RTL uses reintroduce to hide inherited constructors. For example, TComponent has a constructor which takes one argument. But TObject has a parameterless constructor. The RTL would like you to use only TComponent's one-argument constructor, and not the parameterless constructor inherited from TObject, when instantiating a new TComponent. So it uses reintroduce to hide the inherited constructor. In this way, reintroduce is a little bit like declaring a parameterless constructor as private in C#.
A: First of all, "reintroduce" breaks the inheritance chain and should not be used, and I mean never ever. In my entire time I worked with Delphi (ca 10 years) I've stumbled upon a number of places that do use this keyword and it has always been a mistake in the design. With that in mind, here's the simplest way it works:
* You have, say, a virtual method in a base class.
* Now you want to have a method that has the exact same name, but maybe a different signature. So you write your method in the derived class with the same name and it will not compile because the contract is not fulfilled.
* You put the reintroduce keyword in there and your base class does not know about your brand new implementation, and you can use it only when accessing your object from a directly specified instance type. What that means is you can't just assign the object to a variable of the base type and call that method, because it's not there with the broken contract.
Like I said, it's pure evil and must be avoided at all cost (well, that's my opinion at least). It's like using goto - just a terrible style :D
A: The purpose of the reintroduce modifier is to prevent against a common logical error. I will assume that it is common knowledge how the reintroduce keyword fixes the warning and will explain why the warning is generated and why the keyword is included in the language. Consider the Delphi code below:

TParent = Class
Public
  Procedure Procedure1(I : Integer); Virtual;
  Procedure Procedure2(I : Integer);
  Procedure Procedure3(I : Integer); Virtual;
End;

TChild = Class(TParent)
Public
  Procedure Procedure1(I : Integer);
  Procedure Procedure2(I : Integer);
  Procedure Procedure3(I : Integer); Override;
  Procedure Setup(I : Integer);
End;

procedure TParent.Procedure1(I: Integer);
begin
  WriteLn('TParent.Procedure1');
end;

procedure TParent.Procedure2(I: Integer);
begin
  WriteLn('TParent.Procedure2');
end;

procedure TChild.Procedure1(I: Integer);
begin
  WriteLn('TChild.Procedure1');
end;

procedure TChild.Procedure2(I: Integer);
begin
  WriteLn('TChild.Procedure2');
end;

procedure TChild.Setup(I : Integer);
begin
  WriteLn('TChild.Setup');
end;

Procedure Test;
Var
  Child : TChild;
  Parent : TParent;
Begin
  Child := TChild.Create;
  Child.Procedure1(1); // outputs TChild.Procedure1
  Child.Procedure2(1); // outputs TChild.Procedure2
  Parent := Child;
  Parent.Procedure1(1); // outputs TParent.Procedure1
  Parent.Procedure2(1); // outputs TParent.Procedure2
End;

Given the above code, both of the procedures in TParent are hidden. To say they are hidden means that the procedures cannot be called through the TChild pointer.
Compiling the code sample produces a single warning:

[DCC Warning] Project9.dpr(19): W1010 Method 'Procedure1' hides virtual method of base type 'TParent'

Why only a warning for the virtual function and not the other? Both are hidden. A virtue of Delphi is that library designers are able to release new versions without fear of breaking the logic of existing client code. This contrasts with Java, where adding new functions to a parent class in a library is fraught with danger because classes are implicitly virtual. Let's say that TParent from above lives in a 3rd party library, and the library manufacturer releases the new version below.

// version 2.0
TParent = Class
Public
  Procedure Procedure1(I : Integer); Virtual;
  Procedure Procedure2(I : Integer);
  Procedure Procedure3(I : Integer); Virtual;
  Procedure Setup(I : Integer); Virtual;
End;

procedure TParent.Setup(I: Integer);
begin
  // important code
end;

Imagine we had the following code in our client code:

Procedure TestClient;
Var
  Child : TChild;
Begin
  Child := TChild.Create;
  Child.Setup(1);
End;

For the client it does not matter whether the code is compiled against version 2 or version 1 of the library; in both cases TChild.Setup is called, as the user intends. And in the library:

// library version 2.0
Procedure TestLibrary(Parent : TParent);
Begin
  Parent.Setup(1);
End;

If TestLibrary is called with a TChild parameter, everything works as intended. The library designer has no knowledge of TChild.Setup, and in Delphi this does not cause them any harm. The call above correctly resolves to TParent.Setup. What would happen in an equivalent situation in Java? TestClient would work correctly as intended. TestLibrary would not. In Java all functions are assumed virtual. Parent.Setup would resolve to TChild.Setup, but remember that when TChild.Setup was written they had no knowledge of the future TParent.Setup, so they are certainly not going to ever call inherited. So if the library designer intended TParent.Setup to be called, it will not be, no matter what they do. And certainly this could be catastrophic. So the object model in Delphi requires explicit declaration of virtual functions down the chain of child classes. A side effect of this is that it is easy to forget to add the override modifier on child methods. The existence of the reintroduce keyword is a convenience to the programmer. Delphi was designed so that the programmer is gently persuaded, by the generation of a warning, to explicitly state their intentions in such situations.
A: Reintroduce tells the compiler you want to call the code defined in this method as an entry point for this class and its descendants, regardless of other methods with the same name in the ancestors' chain. Creating a TDescendant.MyMethod would create potential confusion for the TDescendants in adding another method with the same name, which the compiler warns you about. Reintroduce disambiguates that and tells the compiler you know which one to use. ADescendant.MyMethod calls the TDescendant one; (ADescendant as TAncestor).MyMethod calls the TAncestor one. Always! No confusion… Compiler happy! This is true whether you want the descendant method to be virtual or not: in both cases you want to break the natural linkage of the virtual chain. And it does not prevent you from calling the inherited code from within the new method.
* TDescendant.MyMethod is virtual: ...but you cannot or don't want to use the linkage.
  * You cannot because the method signature is different. You have no other choice, as overriding is impossible in this case with a return type or parameters that are not exactly the same.
  * You want to restart an inheritance tree from this class.
* TDescendant.MyMethod is not virtual: You turn MyMethod into a static one at the TDescendant level and prevent further overriding. All classes inheriting from TDescendant will use the TDescendant implementation.
A: This has been introduced to the language because of Framework versions (including the VCL). If you have an existing code base, and an update to a Framework (for instance because you bought a newer Delphi version) introduced a virtual method with the same name as a method in an ancestor of your code base, then reintroduce will allow you to get rid of the W1010 warning. This is the only place where you should use reintroduce.
A: First, as was said above, you should never ever deliberately reintroduce a virtual method. The only sane use of reintroduce is when the author of the ancestor (not you) added a method that goes into conflict with your descendant and renaming your descendant method is not an option. Second, you can easily call the original version of the virtual method even in classes where you reintroduced it with different parameters:

type
  tMyFooClass = class of tMyFoo;

  tMyFoo = class
    constructor Create; virtual;
  end;

  tMyFooDescendant = class(tMyFoo)
    constructor Create(a: Integer); reintroduce;
  end;

procedure .......
var
  tmp: tMyFooClass;
begin
  // Create tMyFooDescendant instance one way
  tmp := tMyFooDescendant;
  with tmp.Create do // please note no a: integer argument needed here
  try
    { do something }
  finally
    free;
  end;

  // Create tMyFooDescendant instance the other way
  with tMyFooDescendant.Create(20) do // a: integer argument IS needed here
  try
    { do something }
  finally
    free;
  end;

So what should be the purpose of reintroducing a virtual method, other than making things harder to read?
A: reintroduce allows you to declare a method with the same name as the ancestor, but with different parameters. It has nothing to do with bugs or mistakes!!! For example, I often use it for constructors...

constructor Create (AOwner : TComponent; AParent : TComponent); reintroduce;

This allows me to create the internal classes in a cleaner fashion for complex controls such as toolbars or calendars. I normally have more parameters than that. Sometimes it is almost impossible or very messy to create a class without passing some parameters.
For visual controls, Application.ProcessMessages can get called after Create, which can be too late to use these parameters.

constructor TClassname.Create (AOwner : TComponent; AParent : TComponent);
begin
  inherited Create (AOwner);
  Parent := AParent;
  ..
end;
{ "language": "en", "url": "https://stackoverflow.com/questions/53806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: Can you do "builds" with PHP scripts or an interpreted language? Correct me if I'm wrong, but a "build" is a "compile", and not every language compiles. Continuous Integration involves building components to see if they continue to work beyond unit tests, which I might be oversimplifying. But if your project involves a language that does not compile, how do you perform nightly builds or use continuous integration techniques? A: Hmm... I'd define "building" as something like "preparing, packaging and deploying all artifacts of a software system". The compilation to machine code is only one of many steps in the build. Others might be checking out the latest version of the code from scm-system, getting external dependencies, setting configuration values depending on the target the software gets deployed to and running some kind of test suite to ensure you've got a "working/running build" before you actually deploy. "Building" software can/must be done for any software, independent of your programming langugage. Intepreted languages have the "disadvantage" that syntactic or structural (meaning e.g. calling a method with wrong parameters etc.) errors normally will only be detected at runtime (if you don't have a separate step in your build which checks for such errors e.g. with PHPLint). Thus (automated) Testcases (like Unit-Tests - see PHPUnit or SimpleTest - and Frontend-Tests - see Selenium) are all the more important for big PHP projects to ensure the good health of the code. There's a great Build-Tool (like Ant for Java or Rake for Ruby) for PHP too: Phing CI-Systems like Xinc or Hudson are simply used to automagically (like anytime a change is checked into scm) package your code, check it for obvious errors, run your tests (in short: run your build) and report the results back to your development team. A: Create a daily tag of your current source control trunk?
{ "language": "en", "url": "https://stackoverflow.com/questions/53807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Good 15 minute Java question to ask recent college graduate When interviewing college co-ops/interns or recent graduates it helps to have a Java programming question that they can do on a white board in 15 minutes. Does anyone have examples of good questions like this? A C++ question I was once asked in an interview was to write a string-to-integer function, which is along the lines of the level of question I am looking for examples of.
A: Is there any reason why it has to be on a whiteboard? Personally, I'd rather sit them in front of a keyboard and have them write some code. Our test used to be a simple 100 (IIRC) line Swing text editor. We then broke it a few simple ways, some making the code not compile and some a little more subtle, and gave the candidates half an hour and a list of problems to fix. Even if you can't have them do anything hands-on, make sure that you do give them some explicitly technical questions. In another round of interviews there were a surprising number of recent graduates who were just buzzword-spouting IDE-jockeys, so they could look OKish waving their hands around in front of a whiteboard talking about Enterprise-this and SOA-that, but when given a simple Java fundamentals multiple choice exam asking things about what final and protected meant, did horrifyingly badly.
A: I've always thought that algorithmic questions should be language agnostic. If you want to test the Java level of a student, focus on the language: its keywords (from common ones like static to more exotic ones, like volatile), generics, overloading, boxing/unboxing of variables, standard libraries.
A:
* Write a function to swap variable values using pointers (really poor ones will fall for this).
* Write a program to find the distance between two points in the XY plane. Make use of a class to store the points.
* Demonstrate the use of polymorphism in Java using a simple program.
* Write a program to print the first n prime numbers.
* Write a program to replace a string in a file with another.
A: Some stuff that has showed up on SO:
* IsPalindrome(string s)
* ReverseWordsInString(string s): "I know java" --> "java know I"
Other stuff that springs to mind:
* multiply a Vector with a Matrix (can this be done OO-style?)
* echo (yes, a simple clone of the unix tool)
* cat (15 min should be enough, should weed out the clueless)
* a simple container for ints. Like ArrayList. Bonus question: Generic?
A: If you don't know what questions to ask them, then maybe you are not the right one to interview them in Java. With all due respect, I hate when people ask me questions in interviews which they themselves don't know the answers to. Answers to most of the questions can be found online by googling in a few secs. If someone has experience in Java, they will definitely know abstract classes, interfaces etc., as they are the core building blocks. If he/she does not know the 'volatile' keyword - big deal.
A: I agree with Nicolas in regards to separating the algorithmic questions from the actual language questions. One thing that you might want to consider is giving them a couple of simple algorithm questions that they can write up the pseudo code for on the white board (ex. "Explain to me the Bubble sort and show me the pseudo code for it."). Then once they have demonstrated their algorithmic knowledge you can move on to the Java questions.
Since some people work better in front of a computer than in front of the whiteboard, I would give them something simple, but leveraging their knowledge of Java, that they can implement in 30 minutes or so using the same IDE that you are using at the company. This way, if they claim to know the IDE, you can also get an idea of how well they know it.
A: I would avoid asking them questions that would have been covered in their undergrad classes. I would be more curious about their ability to apply everything they've learned to solve complex technical problems. If your business has a specific need for an IT solution you could use that as a starting point. You could ask the candidate what technologies they would use and the pros and cons of using those technologies versus alternate technologies. As the discussion progresses you could get a feel for their technical skills, problem solving skills, interpersonal skills, etc. I think it is important to avoid coaching them, even in awkward moments. This is important to weed out the BSers.
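For calibration, a reasonable 15-minute answer to the IsPalindrome(string s) question listed above might look like this sketch in Java (deliberately ignoring case, whitespace and punctuation):

static boolean isPalindrome(String s) {
    int i = 0, j = s.length() - 1;
    while (i < j) {
        if (s.charAt(i) != s.charAt(j)) {
            return false; // mismatch: not a palindrome
        }
        i++;
        j--;
    }
    return true; // empty and single-character strings end up here
}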
{ "language": "en", "url": "https://stackoverflow.com/questions/53808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do you normally set up your compiler's optimization settings? Do you normally set your compiler to optimize for maximum speed or smallest code size? Or do you manually configure individual optimization settings? Why? I notice most of the time people tend to just leave compiler optimization settings in their default state, which with Visual C++ means max speed. I've always felt that the default settings had more to do with looking good on benchmarks, which tend to be small programs that will fit entirely within the L2 cache, than with what's best for overall performance, so I normally set it to optimize for smallest size.
A: As a Gentoo user I have tried quite a few optimizations on the complete OS and there have been endless discussions on the Gentoo forums about it. Some good flags for GCC can be found in the wiki. In short, optimizing for size worked best on an old Pentium3 laptop with limited RAM, but on my main desktop machine with a Core2Duo, -O2 gave better results overall. There's also a small script if you are interested in the x86 (32 bit) specific flags that are the most optimized. If you use gcc and really want to optimize a specific application, try ACOVEA. It runs a set of benchmarks, then recompiles them with all possible combinations of compile flags. There's an example using Huffman encoding on the site (lower is better):

A relative graph of fitnesses:
Acovea Best-of-the-Best: ************************************** (2.55366)
Acovea Common Options:   ******************************************* (2.86788)
-O1:                     ********************************************** (3.0752)
-O2:                     *********************************************** (3.12343)
-O3:                     *********************************************** (3.1277)
-O3 -ffast-math:         ************************************************** (3.31539)
-Os:                     ************************************************* (3.30573)

(Note that it found -Os to be the slowest on this Opteron system.)
A: I prefer to use minimal size. Memory may be cheap, cache is not.
A: Besides the fact that cache locality matters (as On Freund said), one other thing Microsoft does is to profile their application and find out which code paths are executed during the first few seconds of startup. After that they feed this data back to the compiler and ask it to put the parts which are executed during startup close together. This results in faster startup time. I do believe that this technique is available publicly in VS, but I'm not 100% sure.
A: For me it depends on what platform I'm using. For some embedded platforms, or when I worked on the Cell processor, you have constraints such as a very small cache or minimal space provided for code. I use GCC and tend to leave it on "-O2", which is the "safest" level of optimisation and favours speed over minimal size. I'd say it probably doesn't make a huge difference unless you are developing for a very high-performance application, in which case you should probably be benchmarking the various options for your particular use-case.
A: Microsoft ships all its C/C++ software optimized for size. After benchmarking they discovered that it actually gives better speed (due to cache locality).
A: There are many types of optimization; maximum speed versus small code is just one. In this case, I'd choose maximum speed, as the executable will be just a bit bigger. On the other hand, you could optimize your application for a specific type of processor.
In some cases this is a good idea (if you intend to run the program only on your station), but in this case it is probable that the program will not work on other architectures (e.g. you compile your program to work on a Pentium 4 machine -> it will probably not work on a Pentium 3).
A: Build both, profile, and choose which works better on your specific project and hardware. For performance-critical code, that is - otherwise choose either and don't bother.
A: We always use maximize for optimal speed, but then all the code I write in C++ is somehow related to bioinformatics algorithms, and speed is crucial while the code size is relatively small.
A: Memory is cheap nowadays :) So it can be meaningful to set compiler settings to max speed unless you work with embedded systems. Of course the answer depends on the concrete situation.
A: This depends on the application of your program. When programming an application to control a fast industrial process, optimizing for speed would make sense. When programming an application that only needs to react to a user's input, optimizing for size could make sense. That is, if you are concerned about the size of your executable.
A: Tweaking compiler settings like that is an optimization. On the principle that "premature optimization is the root of all evil," I don't bother with it until the program is near its final shipping state and I've discovered that it's not fast enough -- i.e. almost never.
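The two goals discussed throughout this question map directly onto compiler switches. With gcc, for example, the comparison is as simple as building twice (MSVC's /O2 and /O1 are the rough equivalents):

gcc -O2 -o app_speed main.c   # optimize for maximum speed
gcc -Os -o app_size  main.c   # optimize for smallest size (more code stays in cache)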
{ "language": "en", "url": "https://stackoverflow.com/questions/53811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Why does windows XP minimize my swing full screen window on my second screen? In the application I'm developing (in Java/Swing), I have to show a full screen window on the second screen of the user. I did this using code similar to the one you'll find below... But as soon as I click in a window opened by Windows Explorer, or as soon as I open Windows Explorer (I'm using Windows XP), the full screen window is minimized... Do you know any way or workaround to fix this problem, or is there something important I did not understand with full screen windows? Thanks for the help,

import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.JWindow;
import java.awt.BorderLayout;
import java.awt.Dimension;
import java.awt.GraphicsDevice;
import java.awt.GraphicsEnvironment;
import java.awt.Window;
import javax.swing.JButton;
import javax.swing.JToggleButton;
import java.awt.Rectangle;
import java.awt.GridBagLayout;
import javax.swing.JLabel;

public class FullScreenTest {

    private JFrame jFrame = null; // @jve:decl-index=0:visual-constraint="94,35"
    private JPanel jContentPane = null;
    private JToggleButton jToggleButton = null;
    private JPanel jFSPanel = null; // @jve:decl-index=0:visual-constraint="392,37"
    private JLabel jLabel = null;
    private Window window;

    /**
     * This method initializes jFrame
     *
     * @return javax.swing.JFrame
     */
    private JFrame getJFrame() {
        if (jFrame == null) {
            jFrame = new JFrame();
            jFrame.setSize(new Dimension(474, 105));
            jFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            jFrame.setContentPane(getJContentPane());
        }
        return jFrame;
    }

    /**
     * This method initializes jContentPane
     *
     * @return javax.swing.JPanel
     */
    private JPanel getJContentPane() {
        if (jContentPane == null) {
            jContentPane = new JPanel();
            jContentPane.setLayout(null);
            jContentPane.add(getJToggleButton(), null);
        }
        return jContentPane;
    }

    /**
     * This method initializes jToggleButton
     *
     * @return javax.swing.JToggleButton
     */
    private JToggleButton getJToggleButton() {
        if (jToggleButton == null) {
            jToggleButton = new JToggleButton();
            jToggleButton.setBounds(new Rectangle(50, 23, 360, 28));
            jToggleButton.setText("Show Full Screen Window on 2nd screen");
            jToggleButton.addActionListener(new java.awt.event.ActionListener() {
                public void actionPerformed(java.awt.event.ActionEvent e) {
                    showFullScreenWindow(jToggleButton.isSelected());
                }
            });
        }
        return jToggleButton;
    }

    protected void showFullScreenWindow(boolean b) {
        if (window == null) {
            window = initFullScreenWindow();
        }
        window.setVisible(b);
    }

    private Window initFullScreenWindow() {
        GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
        GraphicsDevice[] gds = ge.getScreenDevices();
        GraphicsDevice gd = gds[1];
        JWindow window = new JWindow(gd.getDefaultConfiguration());
        window.setContentPane(getJFSPanel());
        gd.setFullScreenWindow(window);
        return window;
    }

    /**
     * This method initializes jFSPanel
     *
     * @return javax.swing.JPanel
     */
    private JPanel getJFSPanel() {
        if (jFSPanel == null) {
            jLabel = new JLabel();
            jLabel.setBounds(new Rectangle(18, 19, 500, 66));
            jLabel.setText("Hello ! Now, just open windows explorer and see what happens...");
            jFSPanel = new JPanel();
            jFSPanel.setLayout(null);
            jFSPanel.setSize(new Dimension(500, 107));
            jFSPanel.add(jLabel, null);
        }
        return jFSPanel;
    }

    /**
     * @param args
     */
    public static void main(String[] args) {
        FullScreenTest me = new FullScreenTest();
        me.getJFrame().setVisible(true);
    }
}

A: Usually when an application is in "full screen" mode it will take over the entire desktop.
For a user to get to another window they would have to alt-tab to it. At that point Windows would minimize the full screen app so that the other application could come to the front. This sounds like it may be a bug (undocumented feature...) in Windows. It should probably not be doing this for a dual screen setup. One option to fix this is, rather than setting it to be "full screen", just make the window the same size as the screen with location (0,0). You can get screen information from the GraphicsConfigurations on the GraphicsDevice.
A: The following code works (thank you John). With no full screen and a large "always on top" window. But I still don't know why Windows causes this strange behavior...

private Window initFullScreenWindow() {
    GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
    GraphicsDevice[] gds = ge.getScreenDevices();
    GraphicsDevice gd = gds[1];
    JWindow window = new JWindow(gd.getDefaultConfiguration());
    window.setContentPane(getJFSPanel());
    window.setLocation(1280, 0);
    window.setSize(gd.getDisplayMode().getWidth(), gd.getDisplayMode().getHeight());
    window.setAlwaysOnTop(true);
    //gd.setFullScreenWindow(window);
    return window;
}
{ "language": "en", "url": "https://stackoverflow.com/questions/53820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How to specify accepted certificates for Client Authentication in .NET SslStream I am attempting to use the .NET System.Net.Security.SslStream class to process the server side of a SSL/TLS stream with client authentication. To perform the handshake, I am using this code:

SslStream sslStream = new SslStream(innerStream, false, RemoteCertificateValidation, LocalCertificateSelectionCallback);
sslStream.AuthenticateAsServer(serverCertificate, true, SslProtocols.Default, false);

Unfortunately, this results in the SslStream transmitting a CertificateRequest containing the subject names of all certificates in my CryptoAPI Trusted Root Store. I would like to be able to override this. It is not an option for me to require the user to install or remove certificates from the Trusted Root Store. It looks like the SslStream uses SSPI/SecureChannel underneath, so if anyone knows how to do the equivalent with that API, that would be helpful, too. Any ideas?
A: It does not look like this is currently possible using the .NET libraries. I solved it by using the Mono class library implementation of SslStream, which gives better access to overriding the server's behavior during the handshake.
A: What the certificate validation is doing is validating all certificates in the chain. In order to truly do that it must contact the root store for each of those certificates. If that's not something you want to happen you can deploy your own root store locally.
A: It is not the validation part I want to change. The problem is that in the initial handshake, the server transmits the message informing the client that client authentication is required (that is, the CertificateRequest message). As part of this message, the server sends the names of CAs that it will accept as issuers of the client certificate. It is that list which by default contains all the Trusted Roots in the store. But if it is possible to override the certificate root store for a single application, that would probably fix the problem. Is that what you mean? And if so, how do I do that?
{ "language": "en", "url": "https://stackoverflow.com/questions/53824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Checking available stack size in C I'm using MinGW with GCC 3.4.5 (mingw-special vista r3). My C application uses a lot of stack, so I was wondering: is there any way I can tell programmatically how much stack is remaining, so I can cleanly handle the situation if I find that I'm about to run out? If not, what other ways would you work around the problem of potentially running out of stack space? I've no idea what size of stack I'll start with, so I would need to identify that programmatically also. A: Check if your compiler supports stackavail() A: Assuming you know the size of the full stack you could probably add some assembly code to read ESP. If you read ESP and save it aside in the main function you can compare the current ESP to the ESP you have in main and see how much ESP has changed. That'll give you an indication of how much stack you've used. A: This is a problem I have given up on. With a lot of hacking and (mostly) praying, you can get a solution that works at a given time on a given machine. But in general there seems to be no decent way to do this. You will have to obtain the stack position and size from outside your program (on Linux you might get it from /proc/<pid>/maps). In your program you must somehow test where you are on the stack. Using local variables is possible, but there is no real guarantee that they are actually on the stack. You can also try to get the value from the stack pointer register with some assembly. So now you have the location of the stack, its size and the current position, and you assume you know in which direction the stack grows. When do you declare yourself in stack-overflow mode? You had better not do it close to the end, because your estimation (i.e. the address of a local variable or the value from the stack pointer) is probably a bit too optimistic; it's not uncommon to address memory beyond the stack pointer. Also, you have no clue about how much room on the stack any given function (and the functions it calls) needs. So you'll have to leave quite some room at the end. I can only advise you not to get into this mess and to try to avoid very deep recursion. You might also want to increase your stack size; on Windows you have to compile this into the executable, I believe. A: Maybe this will help (Windows platform only): in the PE header (IMAGE_NT_HEADERS) of your exe there are some records such as: typedef struct _IMAGE_NT_HEADERS { DWORD Signature; IMAGE_FILE_HEADER FileHeader; IMAGE_OPTIONAL_HEADER32 OptionalHeader; } IMAGE_NT_HEADERS32, *PIMAGE_NT_HEADERS32; typedef struct _IMAGE_OPTIONAL_HEADER { ... DWORD SizeOfStackReserve; DWORD SizeOfStackCommit; ... } There is a simple way to obtain these values: GetModuleHandle(NULL) will give you the image base (handle) of your module, the address where you'll find an IMAGE_DOS_HEADER structure, which will help you to find the IMAGE_NT_HEADERS structure (imagebase+IMAGE_DOS_HEADER.e_lfanew) -> IMAGE_NT_HEADERS, and there you'll find those fields: SizeOfStackReserve and SizeOfStackCommit. The maximum amount of space that the OS will allocate for your stack is SizeOfStackReserve. If you consider trying this, let me know and I will assist you. There is a way to obtain the size of the stack used at a certain point. A: The getrusage function gets you the current usage (see man getrusage). On Linux, getrlimit would help you fetch the stack size limit with the RLIMIT_STACK parameter.
#include <stdio.h> #include <sys/resource.h> int main (void) { struct rlimit limit; getrlimit (RLIMIT_STACK, &limit); printf ("\nStack Limit = %ld and %ld max\n", limit.rlim_cur, limit.rlim_max); } Please have a look at man getrlimit. The same information can be fetched by ulimit -s, or from the stack size row of ulimit -a. Also have a look at the setrlimit function, which allows you to set the limits. But as mentioned in the other answers, if you need to adjust the stack then you should probably reconsider your design. If you want a big array why not take the memory from the heap? A: Raymond Chen (The Old New Thing) has a good answer to this sort of question: If you have to ask, you're probably doing something wrong. Here's some Win32 details on stack allocation: MSDN. If you think you might be limited by stack space, you will almost certainly be limited by available virtual memory, in which case, you will need to find a different solution. What exactly are you trying to do? A: For Windows: I've done this before using the VirtualQuery function from Kernel32.dll. I only have an example in C# but it demonstrates the technique: public static class StackManagement { [StructLayout(LayoutKind.Sequential)] struct MEMORY_BASIC_INFORMATION { public UIntPtr BaseAddress; public UIntPtr AllocationBase; public uint AllocationProtect; public UIntPtr RegionSize; public uint State; public uint Protect; public uint Type; }; private const long STACK_RESERVED_SPACE = 4096 * 16; public unsafe static bool CheckForSufficientStack(UInt64 bytes) { MEMORY_BASIC_INFORMATION stackInfo = new MEMORY_BASIC_INFORMATION(); UIntPtr currentAddr = new UIntPtr(&stackInfo); VirtualQuery(currentAddr, ref stackInfo, sizeof(MEMORY_BASIC_INFORMATION)); UInt64 stackBytesLeft = currentAddr.ToUInt64() - stackInfo.AllocationBase.ToUInt64(); return stackBytesLeft > (bytes + STACK_RESERVED_SPACE); } [DllImport("kernel32.dll")] private static extern int VirtualQuery(UIntPtr lpAddress, ref MEMORY_BASIC_INFORMATION lpBuffer, int dwLength); } BTW: This code can also be found on StackOverflow on another question which I asked when I was trying to fix a bug in the code: Arithmetic operation resulted in an overflow in unsafe C#. A: On Linux, you would call getrusage and check the returned struct rusage's ru_isrss member (integral unshared stack size). From the MINGW site and its sourceforge site's tracking of patches, I see that in May of 2008 there was some patching done around getrusage and it looks like it's been generally supported for quite a while. You should check carefully for any caveats in terms of how much of the typical Linux functionality is supported by MinGW. A: Taking the address of a local variable off the stack would work. Then in a more nested call you can subtract the address of another local to find the difference between them size_t top_of_stack; int main() { int x=0; top_of_stack = (size_t) &x; do_something_very_recursive(....); } size_t SizeOfStack() { int x=0; return top_of_stack - (size_t) &x; } If your code is multi-threaded then you need to deal with storing the top_of_stack variable on a per-thread basis.
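For reference, here is a minimal sketch of the PE-header approach described in an earlier answer. It assumes a 32-bit build; in 64-bit images SizeOfStackReserve and SizeOfStackCommit are 64-bit fields of IMAGE_OPTIONAL_HEADER64:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* GetModuleHandle(NULL) returns the image base of the running executable. */
    BYTE *base = (BYTE *)GetModuleHandle(NULL);
    IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)base;
    IMAGE_NT_HEADERS *nt = (IMAGE_NT_HEADERS *)(base + dos->e_lfanew);

    printf("SizeOfStackReserve: %lu\n", (unsigned long)nt->OptionalHeader.SizeOfStackReserve);
    printf("SizeOfStackCommit:  %lu\n", (unsigned long)nt->OptionalHeader.SizeOfStackCommit);
    return 0;
}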
{ "language": "en", "url": "https://stackoverflow.com/questions/53827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "44" }
Q: How to support multiple languages on a microcontroller? I'm currently working on upgrading a product for the Chinese market. The target is an ARM7TDMI with a QVGA display. Most resources I've located on the net are targeted at desktop or web programming rather than embedded devices. * *Can anyone suggest some tools and resources that might be useful? *What are the best techniques for extracting literal strings and communicating with translators? A: I suggest looking at EasyGUI, but that depends on what graphics controller you use. EasyGUI is a tool that simplifies the design of user interfaces and comes with complete source code and drivers for a variety of display controllers. For localization you can use EasyTranslate, which gives the translator a graphical representation of the interface. This lets the translator see how the translated texts fit on the screen. EasyGUI is available with Unicode support as well as right-to-left script. A: Freetype might be good for rendering fonts. www.freetype.org A: There are many ARM microcontroller forums which will help you find what you're looking for. Atmel has a line of ARM7 processors, and they are pretty friendly to those who make a hobby out of this, so there's a lot of information on this processor. It won't be the same, but generally the tools and libraries can be used across the ARM line so you might find some help here - you'll want to focus on the AT91SAM7 series. If you have more specific questions, you will probably get some reasonable response here. -Adam A: It sounds like you need to upgrade an existing codebase to make it support multiple languages. If so, the fact that this is on a microcontroller shouldn't be an issue - I'd drop that from the title and focus on the language you're using (C?) and ask how to convert your program for internationalisation. This is a problem many people have solved on a variety of platforms, and the fact that you're on a microcontroller doesn't mean that the same tools and such don't apply - the relevant factor is the language you're using -Adam
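As one concrete technique for the string-extraction part of the question, a common embedded approach is to replace every literal string with an index into per-language tables; the table files are what you hand to the translators. A minimal sketch (identifiers and encodings here are made up for illustration):

#include <stdio.h>

/* Every user-visible string gets an identifier. */
enum str_id { STR_HELLO, STR_SHUTDOWN, STR_COUNT };

/* One table per language; translators only ever touch these arrays. */
static const char *const strings_en[STR_COUNT] = {
    "Hello",
    "Shutting down...",
};

/* Chinese text stored as UTF-8 here; a real device would use whatever
   encoding its font engine or display controller expects. */
static const char *const strings_zh[STR_COUNT] = {
    "\xE4\xBD\xA0\xE5\xA5\xBD",                          /* 你好 */
    "\xE6\xAD\xA3\xE5\x9C\xA8\xE5\x85\xB3\xE6\x9C\xBA",  /* 正在关机 */
};

static const char *const *active_table = strings_en;

const char *tr(enum str_id id) { return active_table[id]; }

int main(void)
{
    active_table = strings_zh;     /* switch language at runtime */
    printf("%s\n", tr(STR_HELLO));
    return 0;
}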
{ "language": "en", "url": "https://stackoverflow.com/questions/53829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Can XpsDocuments be serialized to XML for storage in a database? And, if not, is the only other alternative a blob? A: XPS documents are zip files that contain XML. You could extract the contents of the zip file and store that in the database, but then you would need to unzip and re-zip every time data came in or out of the database. Edit: In other words, not in any practical manner.
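If you do go the blob route, storing the raw .xps bytes is straightforward. A minimal sketch with ADO.NET (the Documents table and its columns are hypothetical):

using System.Data;
using System.Data.SqlClient;
using System.IO;

class StoreXps
{
    static void Store(string connectionString, string xpsPath)
    {
        // An XPS document is a zip file on disk; just store its raw bytes.
        byte[] bytes = File.ReadAllBytes(xpsPath);

        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO Documents (Name, Content) VALUES (@name, @content)", conn))
        {
            cmd.Parameters.AddWithValue("@name", Path.GetFileName(xpsPath));
            cmd.Parameters.Add("@content", SqlDbType.VarBinary, -1).Value = bytes; // varbinary(max)
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}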
{ "language": "en", "url": "https://stackoverflow.com/questions/53841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: How can I evaluate a C# expression dynamically? I would like to do the equivalent of: object result = Eval("1 + 3"); string now = Eval("System.DateTime.Now().ToString()") as string Following Biri's link, I got this snippet (modified to remove the obsolete method ICodeCompiler.CreateCompiler()): private object Eval(string sExpression) { CSharpCodeProvider c = new CSharpCodeProvider(); CompilerParameters cp = new CompilerParameters(); cp.ReferencedAssemblies.Add("system.dll"); cp.CompilerOptions = "/t:library"; cp.GenerateInMemory = true; StringBuilder sb = new StringBuilder(""); sb.Append("using System;\n"); sb.Append("namespace CSCodeEvaler{ \n"); sb.Append("public class CSCodeEvaler{ \n"); sb.Append("public object EvalCode(){\n"); sb.Append("return " + sExpression + "; \n"); sb.Append("} \n"); sb.Append("} \n"); sb.Append("}\n"); CompilerResults cr = c.CompileAssemblyFromSource(cp, sb.ToString()); if (cr.Errors.Count > 0) { throw new InvalidExpressionException( string.Format("Error ({0}) evaluating: {1}", cr.Errors[0].ErrorText, sExpression)); } System.Reflection.Assembly a = cr.CompiledAssembly; object o = a.CreateInstance("CSCodeEvaler.CSCodeEvaler"); Type t = o.GetType(); System.Reflection.MethodInfo mi = t.GetMethod("EvalCode"); object s = mi.Invoke(o, null); return s; } A: Old topic, but considering this is one of the first threads showing up when googling, here is an updated solution. You can use Roslyn's new Scripting API to evaluate expressions. If you are using NuGet, just add a dependency to Microsoft.CodeAnalysis.CSharp.Scripting. To evaluate the examples you provided, it is as simple as: var result = CSharpScript.EvaluateAsync("1 + 3").Result; This obviously does not make use of the scripting engine's async capabilities. You can also specify the evaluated result type as you intended: var now = CSharpScript.EvaluateAsync<string>("System.DateTime.Now.ToString()").Result; To evaluate more advanced code snippets, pass parameters, provide references, namespaces and whatnot, check the wiki linked above. A: I have written an open source project, Dynamic Expresso, that can convert text expressions written using a C# syntax into delegates (or expression trees). Text expressions are parsed and transformed into Expression Trees without using compilation or reflection. You can write something like: var interpreter = new Interpreter(); var result = interpreter.Eval("8 / 2 + 2"); or var interpreter = new Interpreter() .SetVariable("service", new ServiceExample()); string expression = "x > 4 ? service.aMethod() : service.AnotherMethod()"; Lambda parsedExpression = interpreter.Parse(expression, new Parameter("x", typeof(int))); parsedExpression.Invoke(5); My work is based on Scott Gu's article http://weblogs.asp.net/scottgu/archive/2008/01/07/dynamic-linq-part-1-using-the-linq-dynamic-query-library.aspx . A: If you specifically want to call into code and assemblies in your own project I would advocate using the C# CodeDom CodeProvider. Here is a list of the most popular approaches that I am aware of for evaluating string expressions dynamically in C#.
Microsoft Solutions * *C# CodeDom CodeProvider: * *See How LINQ used to work and this CodeProject article *Roslyn: * *See this article on the Roslyn Emit API and this StackOverflow answer *DataTable.Compute: * *See this answer on StackOverflow *Webbrowser.Document.InvokeScript * *See this StackOverflow question *DataBinder.Eval *ScriptControl * *See this answer on StackOverflow and this question *Executing PowerShell: * *See this CodeProject article Non-Microsoft solutions (not that there is anything wrong with that) * *Expression evaluation libraries: * *Flee *DynamicExpresso *NCalc *CodingSeb.ExpressionEvaluator *Eval-Expression.NET *Javascript interpreter * *Jint *To execute real C# * *CS-Script *Roll your own with a language-building toolkit like: * *Irony *Jigsaw A: using System; using Microsoft.JScript; using Microsoft.JScript.Vsa; using Convert = Microsoft.JScript.Convert; namespace System { public class MathEvaluator : INeedEngine { private VsaEngine vsaEngine; public virtual String Evaluate(string expr) { var engine = (INeedEngine)this; var result = Eval.JScriptEvaluate(expr, engine.GetEngine()); return Convert.ToString(result, true); } VsaEngine INeedEngine.GetEngine() { vsaEngine = vsaEngine ?? VsaEngine.CreateEngineWithType(this.GetType().TypeHandle); return vsaEngine; } void INeedEngine.SetEngine(VsaEngine engine) { vsaEngine = engine; } } } A: What are the performance implications of doing this? We use a system based on something like the above mentioned, where each C# script is compiled to an in-memory assembly and executed in a separate AppDomain. There's no caching system yet, so the scripts are recompiled every time they run. I've done some simple testing and a very simple "Hello World" script compiles in about 0.7 seconds on my machine, including loading the script from disk. 0.7 seconds is fine for a scripting system, but might be too slow for responding to user input; in that case a dedicated parser/compiler like Flee might be better. using System; public class Test { static public void DoStuff( Scripting.IJob Job) { Console.WriteLine( "Heps" ); } } A: I have just written a similar library (Matheval) in pure C#. It allows evaluating string and number expressions like Excel formulas. using System; using org.matheval; public class Program { public static void Main() { Expression expression = new Expression("IF(time>8, (HOUR_SALARY*8) + (HOUR_SALARY*1.25*(time-8)), HOUR_SALARY*time)"); //bind variable expression.Bind("HOUR_SALARY", 10); expression.Bind("time", 9); //eval Decimal salary = expression.Eval<Decimal>(); Console.WriteLine(salary); } } A: Looks like there is also a way of doing it using RegEx and XPathNavigator to evaluate the expression. I did not have the chance to test it yet, but I kind of liked it because it did not require compiling code at runtime or using libraries that might not be available. http://www.webtips.co.in/c/evaluate-function-in-c-net-as-eval-function-in-javascript.aspx I'll try it and tell later if it worked. I also intend to try it in Silverlight, but it is too late and I'm almost asleep now. A: While C# doesn't have any support for an Eval method natively, I have a C# eval program that does allow for evaluating C# code. It provides for evaluating C# code at runtime and supports many C# statements. In fact, this code is usable within any .NET project; however, it is limited to using C# syntax. Have a look at my website, http://csharp-eval.com, for additional details.
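To illustrate the DataTable.Compute option from the list above, which needs no runtime compilation at all (it only handles simple arithmetic, comparison, and aggregate expressions):

using System;
using System.Data;

class ComputeDemo
{
    static void Main()
    {
        // The second argument is a row filter; null is fine for plain expressions.
        object result = new DataTable().Compute("1 + 3", null);
        Console.WriteLine(result); // prints 4
    }
}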
A: There is a nice piece of code here https://www.c-sharpcorner.com/article/codedom-calculator-evaluating-c-sharp-math-expressions-dynamica/ Download this and make it a class library which may be referenced in your project. This seems to be pretty fast and simple. Perhaps this could help!
{ "language": "en", "url": "https://stackoverflow.com/questions/53844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "62" }
Q: Java Compiler Options to produce .exe files What compiler (I'm using gcj 4.x) options should I use to generate an "exe" file for my Java application to run on Windows? A: To compile the Java program MyJavaProg.java, type: gcj -c -g -O MyJavaProg.java To link it into an executable, use the command: gcj --main=MyJavaProg -o MyJavaProg MyJavaProg.o (on Windows, name the output MyJavaProg.exe to get an "exe" file).
{ "language": "en", "url": "https://stackoverflow.com/questions/53845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do I tokenize a string in C++? Java has a convenient split method: String str = "The quick brown fox"; String[] results = str.split(" "); Is there an easy way to do this in C++? A: Adam Pierce's answer provides a hand-spun tokenizer taking in a const char*. It's a bit more problematic to do with iterators because incrementing a string's end iterator is undefined. That said, given string str{ "The quick brown fox" } we can certainly accomplish this: auto start = find(cbegin(str), cend(str), ' '); vector<string> tokens{ string(cbegin(str), start) }; while (start != cend(str)) { const auto finish = find(++start, cend(str), ' '); tokens.push_back(string(start, finish)); start = finish; } Live Example If you're looking to abstract complexity by using standard functionality, as On Freund suggests, strtok is a simple option: vector<string> tokens; for (auto i = strtok(data(str), " "); i != nullptr; i = strtok(nullptr, " ")) tokens.push_back(i); If you don't have access to C++17 you'll need to substitute data(str) as in this example: http://ideone.com/8kAGoa Though not demonstrated in the example, strtok need not use the same delimiter for each token. Along with this advantage though, there are several drawbacks: * *strtok cannot be used on multiple strings at the same time: Either a nullptr must be passed to continue tokenizing the current string or a new char* to tokenize must be passed (there are some non-standard implementations which do support this however, such as: strtok_s) *For the same reason strtok cannot be used on multiple threads simultaneously (this may however be implementation defined, for example: Visual Studio's implementation is thread safe) *Calling strtok modifies the string it is operating on, so it cannot be used on const strings, const char*s, or literal strings; to tokenize any of these with strtok or to operate on a string whose contents need to be preserved, str would have to be copied, then the copy could be operated on. C++20 provides us with split_view to tokenize strings, in a non-destructive manner (a short sketch appears at the end of this thread): https://topanswers.xyz/cplusplus?q=749#a874 The previous methods cannot generate a tokenized vector in-place, meaning without abstracting them into a helper function they cannot initialize const vector<string> tokens. That functionality and the ability to accept any white-space delimiter can be harnessed using an istream_iterator. For example given: const string str{ "The quick \tbrown \nfox" } we can do this: istringstream is{ str }; const vector<string> tokens{ istream_iterator<string>(is), istream_iterator<string>() }; Live Example The required construction of an istringstream for this option has far greater cost than the previous 2 options, however this cost is typically hidden in the expense of string allocation. If none of the above options are flexible enough for your tokenization needs, the most flexible option is using a regex_token_iterator; of course with this flexibility comes greater expense, but again this is likely hidden in the string allocation cost. Say for example we want to tokenize based on non-escaped commas, also eating white-space, given the following input: const string str{ "The ,qu\\,ick ,\tbrown, fox" } we can do this: const regex re{ "\\s*((?:[^\\\\,]|\\\\.)*?)\\s*(?:,|$)" }; const vector<string> tokens{ sregex_token_iterator(cbegin(str), cend(str), re, 1), sregex_token_iterator() }; Live Example A: You can use streams, iterators, and the copy algorithm to do this fairly directly.
#include <string> #include <vector> #include <iostream> #include <istream> #include <ostream> #include <iterator> #include <sstream> #include <algorithm> int main() { std::string str = "The quick brown fox"; // construct a stream from the string std::stringstream strstr(str); // use stream iterators to copy the stream to the vector as whitespace separated strings std::istream_iterator<std::string> it(strstr); std::istream_iterator<std::string> end; std::vector<std::string> results(it, end); // send the vector to stdout. std::ostream_iterator<std::string> oit(std::cout); std::copy(results.begin(), results.end(), oit); } A: A solution using regex_token_iterators: #include <iostream> #include <regex> #include <string> using namespace std; int main() { string str("The quick brown fox"); regex reg("\\s+"); sregex_token_iterator iter(str.begin(), str.end(), reg, -1); sregex_token_iterator end; vector<string> vec(iter, end); for (auto a : vec) { cout << a << endl; } } A: Check this example. It might help you. #include <iostream> #include <sstream> using namespace std; int main () { string tmps; istringstream is ("the delimiter is the space"); while (is >> tmps) { // extraction in the loop condition avoids the last-token-twice bug of testing is.good() cout << tmps << "\n"; } return 0; } A: If you're using C++ ranges - the full ranges-v3 library, not the limited functionality accepted into C++20 - you could do it this way: auto results = str | ranges::views::tokenize(" ",1); ... and this is lazily-evaluated. You can alternatively set a vector to this range: auto results = str | ranges::views::tokenize(" ",1) | ranges::to<std::vector>(); this will take O(m) space and O(n) time if str has n characters making up m words. See also the library's own tokenization example, here. A: MFC/ATL has a very nice tokenizer. From MSDN: CAtlString str( "%First Second#Third" ); CAtlString resToken; int curPos= 0; resToken= str.Tokenize("% #",curPos); while (resToken != "") { printf("Resulting token: %s\n", resToken); resToken= str.Tokenize("% #",curPos); }; Output Resulting Token: First Resulting Token: Second Resulting Token: Third A: No offense folks, but for such a simple problem, you are making things way too complicated. There are a lot of reasons to use Boost. But for something this simple, it's like hitting a fly with a 20# sledge. void split( vector<string> & theStringVector, /* Altered/returned value */ const string & theString, const string & theDelimiter) { UASSERT( theDelimiter.size(), >, 0); // My own ASSERT macro. size_t start = 0, end = 0; while ( end != string::npos) { end = theString.find( theDelimiter, start); // If at end, use length=maxLength. Else use length=end-start. theStringVector.push_back( theString.substr( start, (end == string::npos) ? string::npos : end - start)); // If at end, use start=maxSize. Else use start=end+delimiter. start = ( ( end > (string::npos - theDelimiter.size()) ) ? string::npos : end + theDelimiter.size()); } } For example (for Doug's case), #define SHOW(I,X) cout << "[" << (I) << "]\t " # X " = \"" << (X) << "\"" << endl int main() { vector<string> v; split( v, "A:PEP:909:Inventory Item", ":" ); for (unsigned int i = 0; i < v.size(); i++) SHOW( i, v[i] ); } And yes, we could have split() return a new vector rather than passing one in. It's trivial to wrap and overload. But depending on what I'm doing, I often find it better to re-use pre-existing objects rather than always creating new ones. (Just as long as I don't forget to empty the vector in between!) Reference: http://www.cplusplus.com/reference/string/string/.
(I was originally writing a response to Doug's question: C++ Strings Modifying and Extracting based on Separators (closed). But since Martin York closed that question with a pointer over here... I'll just generalize my code.) A: If you're willing to use C, you can use the strtok function. You should pay attention to multi-threading issues when using it. A: For simple stuff I just use the following: unsigned TokenizeString(const std::string& i_source, const std::string& i_seperators, bool i_discard_empty_tokens, std::vector<std::string>& o_tokens) { std::string::size_type prev_pos = 0; // size_type rather than unsigned, so comparisons with npos are valid std::string::size_type pos = 0; unsigned number_of_tokens = 0; o_tokens.clear(); pos = i_source.find_first_of(i_seperators, pos); while (pos != std::string::npos) { std::string token = i_source.substr(prev_pos, pos - prev_pos); if (!i_discard_empty_tokens || token != "") { o_tokens.push_back(i_source.substr(prev_pos, pos - prev_pos)); number_of_tokens++; } pos++; prev_pos = pos; pos = i_source.find_first_of(i_seperators, pos); } if (prev_pos < i_source.length()) { o_tokens.push_back(i_source.substr(prev_pos)); number_of_tokens++; } return number_of_tokens; } Cowardly disclaimer: I write real-time data processing software where the data comes in through binary files, sockets, or some API call (I/O cards, cameras). I never use this function for something more complicated or time-critical than reading external configuration files on startup. A: You can simply use a regular expression library and solve that using regular expressions. Use expression (\w+) and the variable in \1 (or $1 depending on the library implementation of regular expressions). A: Many overly complicated suggestions here. Try this simple std::string solution: using namespace std; string someText = ... string::size_type tokenOff = 0, sepOff = tokenOff; while (sepOff != string::npos) { sepOff = someText.find(' ', sepOff); string::size_type tokenLen = (sepOff == string::npos) ? sepOff : sepOff++ - tokenOff; string token = someText.substr(tokenOff, tokenLen); if (!token.empty()) /* do something with token */; tokenOff = sepOff; } A: Boost has a strong split function: boost::algorithm::split. Sample program: #include <iostream> #include <string> #include <vector> #include <boost/algorithm/string.hpp> int main() { std::string s = "a,b, c ,,e,f,"; std::vector<std::string> fields; boost::split(fields, s, boost::is_any_of(",")); for (const auto& field : fields) std::cout << "\"" << field << "\"\n"; return 0; } Output: "a" "b" " c " "" "e" "f" "" A: I thought that was what the >> operator on string streams was for: string word; sin >> word; A: I know this question is already answered but I want to contribute. Maybe my solution is a bit simple but this is what I came up with: vector<string> get_words(string const& text, string const& separator) { vector<string> result; string tmp = text; size_t first_pos = 0; size_t second_pos = tmp.find(separator); while (second_pos != string::npos) { if (first_pos != second_pos) { string word = tmp.substr(first_pos, second_pos - first_pos); result.push_back(word); } tmp = tmp.substr(second_pos + separator.length()); second_pos = tmp.find(separator); } result.push_back(tmp); return result; } Please comment if there is a better approach to something in my code or if something is wrong. UPDATE: added generic separator A: This is a simple STL-only solution (~5 lines!)
using std::find and std::find_first_not_of that handles repetitions of the delimiter (like spaces or periods for instance), as well as leading and trailing delimiters: #include <string> #include <vector> const std::string DELIMITER = " "; // the original assumed DELIMITER was defined elsewhere void tokenize(std::string str, std::vector<std::string> &token_v){ size_t start = str.find_first_not_of(DELIMITER), end=start; while (start != std::string::npos){ // Find next occurrence of delimiter end = str.find(DELIMITER, start); // Push back the token found into vector token_v.push_back(str.substr(start, end-start)); // Skip all occurrences of the delimiter to find new start start = str.find_first_not_of(DELIMITER, end); } } Try it out live! A: I know you asked for a C++ solution, but you might consider this helpful: Qt #include <QString> ... QString str = "The quick brown fox"; QStringList results = str.split(" "); The advantage over Boost in this example is that it's a direct one to one mapping to your post's code. See more at Qt documentation A: Here is a sample tokenizer class that might do what you want //Header file class Tokenizer { public: static const std::string DELIMITERS; Tokenizer(const std::string& str); Tokenizer(const std::string& str, const std::string& delimiters); bool NextToken(); bool NextToken(const std::string& delimiters); const std::string GetToken() const; void Reset(); protected: size_t m_offset; const std::string m_string; std::string m_token; std::string m_delimiters; }; //CPP file const std::string Tokenizer::DELIMITERS(" \t\n\r"); Tokenizer::Tokenizer(const std::string& s) : m_string(s), m_offset(0), m_delimiters(DELIMITERS) {} Tokenizer::Tokenizer(const std::string& s, const std::string& delimiters) : m_string(s), m_offset(0), m_delimiters(delimiters) {} bool Tokenizer::NextToken() { return NextToken(m_delimiters); } bool Tokenizer::NextToken(const std::string& delimiters) { size_t i = m_string.find_first_not_of(delimiters, m_offset); if (std::string::npos == i) { m_offset = m_string.length(); return false; } size_t j = m_string.find_first_of(delimiters, i); if (std::string::npos == j) { m_token = m_string.substr(i); m_offset = m_string.length(); return true; } m_token = m_string.substr(i, j - i); m_offset = j; return true; } Example: std::vector <std::string> v; Tokenizer s("split this string", " "); while (s.NextToken()) { v.push_back(s.GetToken()); } A: Here's an approach that gives you control over whether empty tokens are included (like strsep) or excluded (like strtok). #include <string.h> // for strchr and strlen /* * want_empty_tokens==true : include empty tokens, like strsep() * want_empty_tokens==false : exclude empty tokens, like strtok() */ std::vector<std::string> tokenize(const char* src, char delim, bool want_empty_tokens) { std::vector<std::string> tokens; if (src and *src != '\0') // defensive while( true ) { const char* d = strchr(src, delim); size_t len = (d)? d-src : strlen(src); if (len or want_empty_tokens) tokens.push_back( std::string(src, len) ); // capture token if (d) src += len+1; else break; } return tokens; } A: Seems odd to me that with all us speed conscious nerds here on SO no one has presented a version that uses a compile time generated look up table for the delimiter (example implementation further down). Using a look up table and iterators should beat std::regex in efficiency; if you don't need to beat regex, just use it, it's standard as of C++11 and super flexible.
Some have suggested regex already but for the noobs here is a packaged example that should do exactly what the OP expects: std::vector<std::string> split(std::string::const_iterator it, std::string::const_iterator end, std::regex e = std::regex{"\\w+"}){ std::smatch m{}; std::vector<std::string> ret{}; while (std::regex_search (it,end,m,e)) { ret.emplace_back(m.str()); std::advance(it, m.position() + m.length()); //next start position = match position + match length } return ret; } std::vector<std::string> split(const std::string &s, std::regex e = std::regex{"\\w+"}){ //comfort version calls flexible version return split(s.cbegin(), s.cend(), std::move(e)); } int main () { std::string str {"Some people, excluding those present, have been compile time constants - since puberty."}; auto v = split(str); for(const auto&s:v){ std::cout << s << std::endl; } std::cout << "crazy version:" << std::endl; v = split(str, std::regex{"[^e]+"}); //using e as delim shows flexibility for(const auto&s:v){ std::cout << s << std::endl; } return 0; } If we need to be faster and accept the constraint that all chars must be 8 bits we can make a look up table at compile time using metaprogramming: template<bool...> struct BoolSequence{}; //just here to hold bools template<char...> struct CharSequence{}; //just here to hold chars template<typename T, char C> struct Contains; //generic template<char First, char... Cs, char Match> //not first specialization struct Contains<CharSequence<First, Cs...>,Match> : Contains<CharSequence<Cs...>, Match>{}; //strip first and increase index template<char First, char... Cs> //is first specialization struct Contains<CharSequence<First, Cs...>,First>: std::true_type {}; template<char Match> //not found specialization struct Contains<CharSequence<>,Match>: std::false_type{}; template<int I, typename T, typename U> struct MakeSequence; //generic template<int I, bool... Bs, typename U> struct MakeSequence<I,BoolSequence<Bs...>, U>: //not last MakeSequence<I-1, BoolSequence<Contains<U,I-1>::value,Bs...>, U>{}; template<bool... Bs, typename U> struct MakeSequence<0,BoolSequence<Bs...>,U>{ //last using Type = BoolSequence<Bs...>; }; template<typename T> struct BoolASCIITable; template<bool... 
Bs> struct BoolASCIITable<BoolSequence<Bs...>>{ /* could be made constexpr but not yet supported by MSVC */ static bool isDelim(const char c){ static const bool table[256] = {Bs...}; return table[static_cast<unsigned char>(c)]; } bool operator()(const char c) const { return isDelim(c); } // make the table usable as a predicate }; using Delims = CharSequence<'.',',',' ',':','\n'>; //list your custom delimiters here using Table = BoolASCIITable<typename MakeSequence<256,BoolSequence<>,Delims>::Type>; With that in place making a getNextToken function is easy: template<typename T_It> std::pair<T_It,T_It> getNextToken(T_It begin,T_It end){ begin = std::find_if(begin,end,[](char c){ return !Table::isDelim(c); }); //find first non delim or end auto second = std::find_if(begin,end,Table{}); //find first delim or end return std::make_pair(begin,second); } Using it is also easy: int main() { std::string s{"Some people, excluding those present, have been compile time constants - since puberty."}; auto it = std::begin(s); auto end = std::end(s); while(it != std::end(s)){ auto token = getNextToken(it,end); std::cout << std::string(token.first,token.second) << std::endl; it = token.second; } return 0; } Here is a live example: http://ideone.com/GKtkLQ A: The Boost tokenizer class can make this sort of thing quite simple: #include <iostream> #include <string> #include <boost/foreach.hpp> #include <boost/tokenizer.hpp> using namespace std; using namespace boost; int main(int, char**) { string text = "token, test string"; char_separator<char> sep(", "); tokenizer< char_separator<char> > tokens(text, sep); BOOST_FOREACH (const string& t, tokens) { cout << t << "." << endl; } } Updated for C++11: #include <iostream> #include <string> #include <boost/tokenizer.hpp> using namespace std; using namespace boost; int main(int, char**) { string text = "token, test string"; char_separator<char> sep(", "); tokenizer<char_separator<char>> tokens(text, sep); for (const auto& t : tokens) { cout << t << "." << endl; } } A: Here's a real simple one: #include <vector> #include <string> using namespace std; vector<string> split(const char *str, char c = ' ') { vector<string> result; do { const char *begin = str; while(*str != c && *str) str++; result.push_back(string(begin, str)); } while (0 != *str++); return result; } A: C++ standard library algorithms are pretty universally based around iterators rather than concrete containers. Unfortunately this makes it hard to provide a Java-like split function in the C++ standard library, even though nobody argues that this would be convenient. But what would its return type be? std::vector<std::basic_string<…>>? Maybe, but then we're forced to perform (potentially redundant and costly) allocations. Instead, C++ offers a plethora of ways to split strings based on arbitrarily complex delimiters, but none of them is encapsulated as nicely as in other languages. The numerous ways fill whole blog posts. At its simplest, you could iterate using std::string::find until you hit std::string::npos, and extract the contents using std::string::substr. A more fluid (and idiomatic, but basic) version for splitting on whitespace would use a std::istringstream: auto iss = std::istringstream{"The quick brown fox"}; auto str = std::string{}; while (iss >> str) { process(str); } Using std::istream_iterators, the contents of the string stream could also be copied into a vector using its iterator range constructor. Multiple libraries (such as Boost.Tokenizer) offer specific tokenisers. More advanced splitting requires regular expressions.
C++ provides the std::regex_token_iterator for this purpose in particular: auto const str = "The quick brown fox"s; auto const re = std::regex{R"(\s+)"}; auto const vec = std::vector<std::string>( std::sregex_token_iterator{begin(str), end(str), re, -1}, std::sregex_token_iterator{} ); A: pystring is a small library which implements a bunch of Python's string functions, including the split method: #include <string> #include <vector> #include "pystring.h" std::vector<std::string> chunks; pystring::split("this string", chunks); // also can specify a separator pystring::split("this-string", chunks, "-"); A: Another quick way is to use getline. Something like: stringstream ss("bla bla"); string s; while (getline(ss, s, ' ')) { cout << s << endl; } If you want, you can make a simple split() method returning a vector<string>, which is really useful. A: Use strtok. In my opinion, there isn't a need to build a class around tokenizing unless strtok doesn't provide you with what you need. It might not, but in 15+ years of writing various parsing code in C and C++, I've always used strtok. Here is an example char myString[] = "The quick brown fox"; char *p = strtok(myString, " "); while (p) { printf ("Token: %s\n", p); p = strtok(NULL, " "); } A few caveats (which might not suit your needs). The string is "destroyed" in the process, meaning that EOS characters are placed inline in the delimiter spots. Correct usage might require you to make a non-const version of the string. You can also change the list of delimiters mid parse. In my own opinion, the above code is far simpler and easier to use than writing a separate class for it. To me, this is one of those functions that the language provides and it does it well and cleanly. It's simply a "C based" solution. It's appropriate, it's easy, and you don't have to write a lot of extra code :-) A: I posted this answer for a similar question. Don't reinvent the wheel. I've used a number of libraries and the fastest and most flexible I have come across is: C++ String Toolkit Library. Here is an example of how to use it that I've posted elsewhere on StackOverflow. #include <iostream> #include <vector> #include <string> #include <strtk.hpp> const char *whitespace = " \t\r\n\f"; const char *whitespace_and_punctuation = " \t\r\n\f;,="; int main() { { // normal parsing of a string into a vector of strings std::string s("Somewhere down the road"); std::vector<std::string> result; if( strtk::parse( s, whitespace, result ) ) { for(size_t i = 0; i < result.size(); ++i ) std::cout << result[i] << std::endl; } } { // parsing a string into a vector of floats with other separators // besides spaces std::string s("3.0, 3.14; 4.0"); std::vector<float> values; if( strtk::parse( s, whitespace_and_punctuation, values ) ) { for(size_t i = 0; i < values.size(); ++i ) std::cout << values[i] << std::endl; } } { // parsing a string into specific variables std::string s("angle = 45; radius = 9.9"); std::string w1, w2; float v1, v2; if( strtk::parse( s, whitespace_and_punctuation, w1, v1, w2, v2) ) { std::cout << "word " << w1 << ", value " << v1 << std::endl; std::cout << "word " << w2 << ", value " << v2 << std::endl; } } return 0; } A: You can take advantage of boost::make_find_iterator.
Something similar to this: template<typename CH> inline vector< basic_string<CH> > tokenize( const basic_string<CH> &Input, const basic_string<CH> &Delimiter, bool remove_empty_token ) { typedef typename basic_string<CH>::const_iterator string_iterator_t; typedef boost::find_iterator< string_iterator_t > string_find_iterator_t; vector< basic_string<CH> > Result; string_iterator_t it = Input.begin(); string_iterator_t it_end = Input.end(); for(string_find_iterator_t i = boost::make_find_iterator(Input, boost::first_finder(Delimiter, boost::is_equal())); i != string_find_iterator_t(); ++i) { if(remove_empty_token){ if(it != i->begin()) Result.push_back(basic_string<CH>(it,i->begin())); } else Result.push_back(basic_string<CH>(it,i->begin())); it = i->end(); } if(it != it_end) Result.push_back(basic_string<CH>(it,it_end)); return Result; } A: Here's my Swiss® Army Knife of string-tokenizers for splitting up strings by whitespace, accounting for single and double-quote wrapped strings as well as stripping those characters from the results. I used RegexBuddy 4.x to generate most of the code-snippet, but I added custom handling for stripping quotes and a few other things. #include <string> #include <locale> #include <regex> std::vector<std::wstring> tokenize_string(std::wstring string_to_tokenize) { std::vector<std::wstring> tokens; std::wregex re(LR"(("[^"]*"|'[^']*'|[^"' ]+))", std::regex_constants::collate); std::wsregex_iterator next( string_to_tokenize.begin(), string_to_tokenize.end(), re, std::regex_constants::match_not_null ); std::wsregex_iterator end; const wchar_t single_quote = L'\''; const wchar_t double_quote = L'\"'; while ( next != end ) { std::wsmatch match = *next; const std::wstring token = match.str( 0 ); next++; if (token.length() > 2 && (token.front() == double_quote || token.front() == single_quote)) tokens.emplace_back( std::wstring(token.begin()+1, token.begin()+token.length()-1) ); else tokens.emplace_back(token); } return tokens; } A: I wrote a simplified version (and maybe a little more efficient) of https://stackoverflow.com/a/50247503/3976739 for my own use. I hope it helps. void StrTokenizer(string& source, const char* delimiter, vector<string>& Tokens) { size_t new_index = 0; size_t old_index = 0; while (new_index != std::string::npos) { new_index = source.find(delimiter, old_index); Tokens.emplace_back(source.substr(old_index, new_index-old_index)); if (new_index != std::string::npos) old_index = ++new_index; } } A: If the maximum length of the input string to be tokenized is known, one can exploit this and implement a very fast version. I am sketching the basic idea below, which was inspired by both strtok() and the "suffix array" data structure described in Jon Bentley's "Programming Pearls", 2nd edition, chapter 15. The C++ class in this case only gives some organization and convenience of use. The implementation shown can be easily extended for removing leading and trailing whitespace characters in the tokens. Basically one can replace the separator characters with string-terminating '\0' characters and set pointers to the tokens within the modified string. In the extreme case when the string consists only of separators, one gets string-length plus 1 resulting empty tokens. It is practical to duplicate the string to be modified.
Header file: class TextLineSplitter { public: TextLineSplitter( const size_t max_line_len ); ~TextLineSplitter(); void SplitLine( const char *line, const char sep_char = ',' ); inline size_t NumTokens( void ) const { return mNumTokens; } const char * GetToken( const size_t token_idx ) const { assert( token_idx < mNumTokens ); return mTokens[ token_idx ]; } private: const size_t mStorageSize; char *mBuff; char **mTokens; size_t mNumTokens; inline void ResetContent( void ) { memset( mBuff, 0, mStorageSize ); // mark all items as empty: memset( mTokens, 0, mStorageSize * sizeof( char* ) ); // reset counter for found items: mNumTokens = 0L; } }; Implementation file: TextLineSplitter::TextLineSplitter( const size_t max_line_len ): mStorageSize ( max_line_len + 1L ) { // allocate memory mBuff = new char [ mStorageSize ]; mTokens = new char* [ mStorageSize ]; ResetContent(); } TextLineSplitter::~TextLineSplitter() { delete [] mBuff; delete [] mTokens; } void TextLineSplitter::SplitLine( const char *line, const char sep_char /* = ',' */ ) { assert( sep_char != '\0' ); ResetContent(); strncpy( mBuff, line, mStorageSize - 1 ); // the original referenced an undeclared mMaxLineLen here size_t idx = 0L; // running index for characters do { assert( idx < mStorageSize ); const char chr = line[ idx ]; // retrieve current character if( mTokens[ mNumTokens ] == NULL ) { mTokens[ mNumTokens ] = &mBuff[ idx ]; } // if if( chr == sep_char || chr == '\0' ) { // item or line finished // overwrite separator with a 0-terminating character: mBuff[ idx ] = '\0'; // count-up items: mNumTokens ++; } // if } while( line[ idx++ ] ); } A scenario of usage would be: // create an instance capable of splitting strings up to 1000 chars long: TextLineSplitter spl( 1000 ); spl.SplitLine( "Item1,,Item2,Item3" ); for( size_t i = 0; i < spl.NumTokens(); i++ ) { printf( "%s\n", spl.GetToken( i ) ); } output: Item1 Item2 Item3
#include <iostream> #include <vector> #include <string> #include <stdexcept> std::vector<std::string> split(const std::string& str, const std::string& delim){ std::vector<std::string> result; if (str.empty()) throw std::runtime_error("Can not tokenize an empty string!"); std::string::const_iterator begin, str_it; begin = str_it = str.begin(); do { while (str_it != str.end() && delim.find(*str_it) == std::string::npos) str_it++; // find the position of the first delimiter in str (checking for end before dereferencing) std::string token = std::string(begin, str_it); // grab the token if (!token.empty()) // empty token only when str starts with a delimiter result.push_back(token); // push the token into a vector<string> while (str_it != str.end() && delim.find(*str_it) != std::string::npos) str_it++; // ignore the additional consecutive delimiters begin = str_it; // process the remaining tokens } while (str_it != str.end()); return result; } int main() { std::string test_string = ".this is.a.../.simple;;test;;;END"; std::string delim = "; ./"; // string containing the delimiters std::vector<std::string> tokens = split(test_string, delim); for (std::vector<std::string>::const_iterator it = tokens.begin(); it != tokens.end(); it++) std::cout << *it << std::endl; } A: /// split a string into multiple sub strings, based on a separator string /// for example, if separator="::", /// /// s = "abc" -> "abc" /// /// s = "abc::def xy::st:" -> "abc", "def xy" and "st:", /// /// s = "::abc::" -> "abc" /// /// s = "::" -> NO sub strings found /// /// s = "" -> NO sub strings found /// /// then append the sub-strings to the end of the vector v. /// /// the idea comes from the findUrls() function of "Accelerated C++", chapt7, /// findurls.cpp /// void split(const string& s, const string& sep, vector<string>& v) { typedef string::const_iterator iter; iter b = s.begin(), e = s.end(), i; iter sep_b = sep.begin(), sep_e = sep.end(); // search through s while (b != e){ i = search(b, e, sep_b, sep_e); // no more separator found if (i == e){ // it's not an empty string if (b != e) v.push_back(string(b, e)); break; } else if (i == b){ // the separator is found and right at the beginning // in this case, we need to move on and search for the // next separator b = i + sep.length(); } else{ // found the separator v.push_back(string(b, i)); b = i; } } } The boost library is good, but it is not always available. Doing this sort of thing by hand is also a good brain exercise. Here we just use the std::search() algorithm from the STL; see the above code. A: I've been searching for a way to split a string by a separator of any length, so I started writing it from scratch, as existing solutions didn't suit me. Here is my little algorithm, using only the STL: //use like this //std::vector<std::wstring> vec = Split<std::wstring> (L"Hello##world##!", L"##"); template <typename valueType> static std::vector <valueType> Split (valueType text, const valueType& delimiter) { std::vector <valueType> tokens; size_t pos = 0; valueType token; while ((pos = text.find(delimiter)) != valueType::npos) { token = text.substr(0, pos); tokens.push_back (token); text.erase(0, pos + delimiter.length()); } tokens.push_back (text); return tokens; } It can be used with a separator of any length and form, as far as I've tested. Instantiate with either the string or wstring type. All the algorithm does is search for the delimiter, grab the part of the string up to the delimiter, delete the delimiter, and search again until it finds it no more. Hope it helps.
A: I made a lexer/tokenizer before with the use of only standard libraries. Here's the code: #include <iostream> #include <string> #include <vector> #include <sstream> using namespace std; string seps(string& s) { if (!s.size()) return ""; stringstream ss; ss << s[0]; for (int i = 1; i < s.size(); i++) { ss << '|' << s[i]; } return ss.str(); } void Tokenize(string& str, vector<string>& tokens, const string& delimiters = " ") { str = seps(str); // rebuild the string with '|' between characters (the original discarded this return value) // Skip delimiters at beginning. string::size_type lastPos = str.find_first_not_of(delimiters, 0); // Find first "non-delimiter". string::size_type pos = str.find_first_of(delimiters, lastPos); while (string::npos != pos || string::npos != lastPos) { // Found a token, add it to the vector. tokens.push_back(str.substr(lastPos, pos - lastPos)); // Skip delimiters. Note the "not_of" lastPos = str.find_first_not_of(delimiters, pos); // Find next "non-delimiter" pos = str.find_first_of(delimiters, lastPos); } } int main(int argc, char *argv[]) { vector<string> t; string s = "Tokens for everyone!"; Tokenize(s, t, "|"); for (auto c : t) cout << c << endl; system("pause"); return 0; } A: I just read all the answers and can't find a solution with the following preconditions: * *no dynamic memory allocations *no use of boost *no use of regex *c++17 standard only So here is my solution #include <iomanip> #include <iostream> #include <iterator> #include <string_view> #include <utility> struct split_by_spaces { std::string_view text; static constexpr char delim = ' '; struct iterator { const std::string_view& text; std::size_t cur_pos; std::size_t end_pos; std::string_view operator*() const { return { &text[cur_pos], end_pos - cur_pos }; } bool operator==(const iterator& other) const { return cur_pos == other.cur_pos && end_pos == other.end_pos; } bool operator!=(const iterator& other) const { return !(*this == other); } iterator& operator++() { cur_pos = text.find_first_not_of(delim, end_pos); if (cur_pos == std::string_view::npos) { cur_pos = text.size(); end_pos = cur_pos; return *this; } end_pos = text.find(delim, cur_pos); if (end_pos == std::string_view::npos) { // the original tested cur_pos here, which can never be npos at this point end_pos = text.size(); } return *this; } }; [[nodiscard]] iterator begin() const { auto start = text.find_first_not_of(delim); if (start == std::string_view::npos) { return iterator{ text, text.size(), text.size() }; } auto end_word = text.find(delim, start); if (end_word == std::string_view::npos) { end_word = text.size(); } return iterator{ text, start, end_word }; } [[nodiscard]] iterator end() const { return iterator{ text, text.size(), text.size() }; } }; int main(int argc, char** argv) { using namespace std::literals; auto str = " there should be no memory allocation during parsing" " into words this line and you shouldn't create any" " container for intermediate words "sv; auto comma = ""; for (std::string_view word : split_by_spaces{ str }) { std::cout << std::exchange(comma, ",") << std::quoted(word); } auto only_spaces = " "sv; for (std::string_view word : split_by_spaces{ only_spaces }) { std::cout << "you will not see this line in output" << std::endl; } } A: This is a simple loop to tokenise using only standard library headers #include <stdio.h> #include <string.h> class word { public: char w[20]; word() { for(int j=0;j<20;j++) { w[j]='\0'; } } }; int main() { int i=1,n=0,j=0,k=0,m=1; char input[100]; word ww[100]; gets(input); // note: gets() is unsafe; fgets() is the safer choice n=strlen(input); for(i=0;i<=m;i++) { if(input[i]!=' ') { ww[k].w[j]=input[i]; j++; } else { k++; j=0; m++; } } return 0; }
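For completeness, a minimal sketch of the C++20 split_view approach mentioned in the first answer, assuming a standard library that implements the revised std::views::split (P2210):

#include <iostream>
#include <ranges>
#include <string_view>

int main()
{
    constexpr std::string_view str{"The quick brown fox"};

    // Each token is a subrange over the original characters; no copies are made.
    for (auto token : str | std::views::split(' '))
        std::cout << std::string_view(token.begin(), token.end()) << '\n';
}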
{ "language": "en", "url": "https://stackoverflow.com/questions/53849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "468" }
Q: How to create Virtual COM ports I am trying to implement a software null modem. Any suggestions on how to create virtual COM ports? Looking for tips, strategies, known techniques, etc. A: I have used the open-source com0com on Windows for this before, and it worked well. The related com2tcp project was more challenging to get working reliably.
{ "language": "en", "url": "https://stackoverflow.com/questions/53857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Can a Linq query retrieve BLOBs from a Sql Database? And how do they come out? A: LINQ-To-SQL classes create properties of type System.Data.Linq.Binary for all binary and varbinary fields in SQL Server. The Binary type has a .ToArray() method that returns a byte[], and its constructor can take a byte[]. Older versions of SQLMetal generated properties of type byte[], but the problem with these was that they failed in any joins. I think that's the main reason they replaced it with the IEquatable Binary type. A: If I'm not mistaken, LINQ to SQL treats BLOBs as System.Byte[]. I recall that there was some problem with SqlMetal, it generated the wrong type for BLOBs, but the MSVS dbml designer should work.
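For illustration, reading such a column back through LINQ to SQL might look like the following sketch (the MyDataContext class, Photos table, and column names are hypothetical):

using System.Data.Linq; // for Binary
using System.Linq;

class BlobExample
{
    static byte[] LoadPhoto(MyDataContext db, int photoId)
    {
        // The varbinary(max) column surfaces as System.Data.Linq.Binary.
        Binary blob = db.Photos.Single(p => p.PhotoId == photoId).Data;
        return blob.ToArray(); // back to raw bytes
    }
}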
{ "language": "en", "url": "https://stackoverflow.com/questions/53873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: In vim, how do I go back to where I was before a search? Programming in vim I often go search for something, yank it, then go back to where I was, insert it, modify it. The problem is that after I search and find, I need to MANUALLY find my way back to where I was. Is there an automatic way to go back to where I was when I initiated my last search? A: I use this one: nnoremap / ms/ nnoremap ? ms? Then if I search something by using / or ?, I can go back quickly by `s. You could replace the letter s with any letter you like. A: The simplest way is to set a mark, with m[letter], then go back to it with '[letter] A: I've always done it by setting a mark. * *In command mode, press m[letter]. For example, ma sets a mark at the current line using a as the mark identifier. *To get back to the mark press '[letter]. For example, 'a takes you back to the line mark set in step 1. To get back to the column position of the row where you marked the line, use `a (back-tick [letter]). To see all of the marks that are currently set, type :marks. On a slightly unrelated note, I just discovered another nifty thing about marks. Let's say you jump to mark b by doing 'b. Vim automatically sets the mark ' (that's a single-quote) to be whichever line you were on before jumping to mark b. That means you can do 'b to jump to that mark, then do '' (2 single-quotes) to jump back to wherever you were before. I discovered this accidentally using the :marks command, which shows a list of all marks. A: Ctrl+O takes me to the previous location. Don't know about the location before the search. Edit: Also, `. will take you to the last change you made. A: You really should read :help jumplist; it explains all of this very well. A: CTRL+O and CTRL+I, for jumping back and forward. A: Use `` to jump back to the exact position you were in before you searched/jumped, or '' to jump back to the start of the line you were on before you searched/jumped.
{ "language": "en", "url": "https://stackoverflow.com/questions/53911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "282" }
Q: How can I lay images out in a grid? I'm trying to produce sheets of photographs with captions arranged in a grid using XSLT and XSL-FO. The photo URLs and captions are produced using a FOR XML query against an SQL Server database, and the number of photos returned varies from sheet to sheet. I want to lay the photos out in four columns, filling the grid from left to right and from top to bottom. In HTML I'd do this by putting each photo and caption into a div and using "float: left" to make them flow into the grid. Is there a similarly elegant method using XSL-FO? A: To keep life simple I would normally set up a table for this; it's quite simple and will ensure that things get laid out right. If you wanted to do it similarly to how you would do it in HTML then you should lay out block-container elements. However you decide to do it, I would always recommend using the ZVON Reference site. Nice lookup of elements and available attributes, and while their XSL-FO doesn't include much in the way of explanation, every page deep-links to the standards document. A: In the end I used a table with one row and four cells for this. In each one I selected the source elements with position() mod 4 equal to 0, 1, 2 or 3 as appropriate, and then made sure that the photo and caption were always the same height so the rows lined up correctly.
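A sketch of that accepted approach in XSLT/XSL-FO terms (the photo element name stands in for whatever your FOR XML query actually produces):

<fo:table table-layout="fixed" width="100%">
  <fo:table-body>
    <fo:table-row>
      <fo:table-cell>
        <xsl:apply-templates select="photo[position() mod 4 = 1]"/>
      </fo:table-cell>
      <fo:table-cell>
        <xsl:apply-templates select="photo[position() mod 4 = 2]"/>
      </fo:table-cell>
      <fo:table-cell>
        <xsl:apply-templates select="photo[position() mod 4 = 3]"/>
      </fo:table-cell>
      <fo:table-cell>
        <xsl:apply-templates select="photo[position() mod 4 = 0]"/>
      </fo:table-cell>
    </fo:table-row>
  </fo:table-body>
</fo:table>

Each photo template would then emit a block with the image and caption at a fixed height, which is what keeps the rows aligned.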
{ "language": "en", "url": "https://stackoverflow.com/questions/53913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: SharePoint: Using RichHTML field type in a custom content type I'd like to use the field type RichHTML in a custom content type that I'm making. However, I think that the RichHTML type comes with MOSS Publishing, so I'm unsure how to add it to my content type. Right now I've tried both of these:

    <Field ID="{7F55A8F0-4555-46BC-B24C-222240B862AF}" Type="RichHTML" Name="NewsBodyField" DisplayName="News Body" StaticName="NewsBodyField" Hidden="False" Required="True" Sealed="False" />

    <Field ID="{7F55A8F0-4555-46BC-B24C-222240B862AF}" Type="RichHtmlField" Name="NewsBodyField" DisplayName="News Body" StaticName="NewsBodyField" Hidden="False" Required="True" Sealed="False" />

I know that when I want to access this custom field using a CQWP, I can export it and add it to my CommonViews using 'RichHTML', but that doesn't work here. Any help on how to add a rich HTML field to a custom content type would be much appreciated.

A: I figured it out myself. The type you're looking for is "HTML", not "RichHTML".
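Applying that answer to the definition from the question, the working field element should presumably look like this (only the Type attribute changes):

    <Field ID="{7F55A8F0-4555-46BC-B24C-222240B862AF}" Type="HTML" Name="NewsBodyField" DisplayName="News Body" StaticName="NewsBodyField" Hidden="False" Required="True" Sealed="False" />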
{ "language": "en", "url": "https://stackoverflow.com/questions/53935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Why can't Visual Studio run on more than one core? CPU at 25% I'm running Visual Studio 2008 with the stuff-of-nightmares awful MS test framework. Trouble is that it's sending my CPU to 100% (well, 25% on a quad core). My question is: why can't Visual Studio run on more than one core? Surely Microsoft must have a sufficient handle on threading to get this to work.

A: You can ask VS to compile multiple projects in parallel, as well as to compile in parallel within a project. Tools > Options > Projects and Solutions > "maximum number of parallel project builds". This will build C++ and C# in parallel as well!

A: In case anyone comes across this old question: VS2012 introduced parallel builds as a standard feature. Quote from the article: "Visual Studio 2010 included an option for 'maximum number of parallel project builds.' Although there was no indication of any restriction, this IDE option only worked for C++ projects. Fortunately, this restriction no longer applies to Visual Studio 11. Rather, there's now full support for parallel builds in other languages as well. To view this, run a copy of Process Explorer at the same time a solution with numerous projects is building. You'll see that multiple MSBuild instances are created, as many as specified in the 'maximum number of parallel project builds.'"

A: Now that Visual Studio 2010 has been out for a while, consider upgrading to make use of the parallelTestCount attribute in MSTest's .testsettings file, as described in "How to: Run Unit Tests Faster Using a Computer with Multiple CPUs or Cores". There are a few limitations, such as:

* Only simple unit tests are supported (this excludes coded UI tests and ASP.NET-hosted tests)
* Tests must be thread-safe (all tests are run in the same process)
* You can't collect code coverage (among other data and diagnostics) at the same time

Example, using 0 to mean auto-detect (the default is 1):

    <?xml version="1.0" encoding="UTF-8"?>
    <TestSettings name="Release" id="{GUID}" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
      <Description>
        These are default test settings for a local test run.
      </Description>
      <Execution parallelTestCount="0">
        (...)
      </Execution>
    </TestSettings>

A few blogs have noted that you might have to close and re-open your project for Visual Studio to notice you added or changed that attribute. Also, if you edit the test settings file using the GUI, you'll probably have to re-add the parallelTestCount attribute.

A: "We also added multiple-core support for doing multi-threaded builds on the command line, for those of you with a lot of projects and long build times. Enabling multiple-core support requires only a few new properties, and MSBuild manages all of the work to schedule projects efficiently and effectively. The MSBuild team has tested this ability to scale by building some projects on a 64-CPU machine." That is from Somasegar's blog, so they have at least started doing it, for the build anyway.

A: I have VS2008 running on all 4 CPUs. Just set the /MP compiler flag (it can be set under C/C++ settings, Advanced, in the project settings).

Edit: The /MP flag can also accept a number, e.g. /MP2, which means it will only use 2 cores. Leaving it as just /MP means it will use the maximum number of cores.

Edit 2: The /MP flag is probably for the compiler only.

A: The /MP flag is only for builds, at least according to this MSDN page. I would love to be wrong about it, but I'm pretty sure it's just for builds. Which of course is still very useful.

A: I'm sure it's very hard: converting a huge, existing, GUI-heavy, non-threaded code base to multi-threaded sounds like a 10 to me. But Visual Studio does seem to use multiple cores: IntelliSense appears to be threaded, and the build system supports multi-project building and, for C++, multi-file building as well. Your problems with these tools sound a bit deeper than how well they use your CPUs.

A: For Visual Studio 2010, go to Tools > Options > Projects and Solutions > Build and Run. You will see an entry for the 'maximum number of parallel project builds'; my PC has an i7-3770 CPU, a quad core with Hyper-Threading, so mine is set to 8. For information on other versions of Visual Studio, go here and select your version: https://msdn.microsoft.com/en-us/library/cyhcc7zc(v=vs.100).aspx

For Visual Studio 2010 this property only affects C++ builds: "Specifies the maximum number of Visual C++ projects that can build at the same time. To optimize the build process, the maximum number of parallel project builds is automatically set to the number of CPUs of your computer. The maximum is 32."

For later versions of Visual Studio it covers both C++ and C#: "maximum number of parallel project builds: Specifies the maximum number of Visual C++ and Visual C# projects that can build at the same time. To optimize the build process, the maximum number of parallel project builds is automatically set to the number of CPUs of your computer. The maximum is 32."
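For the command-line, multi-core builds mentioned in the Somasegar quote above, the MSBuild switch is /m (short for /maxcpucount); the solution file name below is just a placeholder:

    rem Use as many parallel build nodes as there are CPUs
    msbuild MySolution.sln /m

    rem Or cap the node count explicitly, e.g. at 4
    msbuild MySolution.sln /m:4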
{ "language": "en", "url": "https://stackoverflow.com/questions/53939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Is there a tool that can convert common image formats (.bmp, .jpg, ...) to .emf files? I'm using the GoDiagrams suite, which seems to recommend .emf files for node images since they scale better on resizing; bitmaps get all blurry. Google doesn't turn up any good tools that seem to do this. So, to reiterate: I'm looking for an image converter (preferably free) that converts an image in one of the common formats (BMP, JPEG, GIF) to an .emf file.

Update: I don't need to do it via code; simple batch conversion of images will do.

A: Inkscape works well; it was recommended to me here.

A: IrfanView (http://www.irfanview.com) supports many image formats (including .emf). It's also small, fast, and very full-featured. It is free for non-commercial and educational use. I use it for all my image-conversion needs, as it will work on batches of files and can rename them as it saves.

A: ImageMagick contains a tool called convert that will convert from just about anything to EMF files. You can either use it as a separate application or interface with it through an API that is available in several different languages.

A: XnView (http://www.xnview.com). A very good viewer and converter.

A: Try ImageConverter Plus.

A: Try AutoTrace (http://autotrace.sourceforge.net/). It is open source and produces good results. Download it from here: http://sourceforge.net/project/showfiles.php?group_id=11789

A: Here's a funny one, but it works... (I have Visio 2007; I just found this out from a colleague.) You can drop a JPEG into Microsoft Visio (no less), do a 'Save As' to .emf, and voila! Nice picture quality, too.
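If the ImageMagick route works for you (the answer above claims convert can target EMF; treat that as an assumption to verify against your ImageMagick build, since EMF support is platform-dependent), the batch conversion would look something like this, with mogrify being ImageMagick's batch companion to convert:

    rem Single file: the output format is inferred from the extension
    convert photo.bmp photo.emf

    rem Whole directory: writes a matching .emf next to each .jpg
    mogrify -format emf *.jpg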
{ "language": "en", "url": "https://stackoverflow.com/questions/53941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Dynamically inserting javascript into HTML that uses document.write I am currently loading a lightbox-style popup that loads its HTML via an XHR call. This content is then displayed in a 'modal' popup using element.innerHTML = content. This works like a charm. In another section of this website I use a Flickr 'badge' (http://www.elliotswan.com/2006/08/06/custom-flickr-badge-api-documentation/) to load Flickr images dynamically. This is done by including a script tag that loads a Flickr javascript, which in turn does some document.write statements. Both of them work perfectly when included directly in the HTML. Only when loading the Flickr badge code inside the lightbox is no content rendered at all. It seems that using innerHTML to insert document.write statements is taking it a step too far, but I cannot find any clue about this behavior in the javascript implementations (FF2&3, IE6&7). Can anyone clarify whether this should or shouldn't work? Thanks.

A: I created a simple test page that illustrates the problem:

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html>
    <head>
      <meta http-equiv="Content-type" content="text/html; charset=utf-8" />
      <title>Document Write Testcase</title>
    </head>
    <body>
      <div id="container"></div>
      <div id="container2"></div>
      <script>
        // This doesn't work!
        var container = document.getElementById('container');
        container.innerHTML = "<script type='text/javascript'>alert('foo');document.write('bar');<\/script>";

        // This does!
        var container2 = document.getElementById('container2');
        var script = document.createElement("script");
        script.type = 'text/javascript';
        script.innerHTML = "alert('bar');document.write('foo');";
        container2.appendChild(script);
      </script>
    </body>
    </html>

This page alerts 'bar' and prints 'foo', while I expected it to also alert 'foo' and print 'bar'. Unfortunately, since the script tag is part of a larger HTML page, I cannot single out that tag and append it as in the example above. Well, I can, but that would require scanning the innerHTML content for script tags, replacing them in the string with placeholders, and then inserting them using the DOM. That sounds far from trivial.

A: Use document.writeln(content); instead of document.write(content). However, the better method is concatenating onto innerHTML, like this: element.innerHTML += content;. The element.innerHTML = content; form replaces the old content with the new, overwriting your element's innerHTML, whereas the += operator in element.innerHTML += content appends your text after the old content (similar to what document.write does).

A: In general, script tags aren't executed when using innerHTML. In your case this is good, because the document.write call would wipe out everything already in the page. However, that leaves you without whatever HTML document.write was supposed to add. jQuery's HTML manipulation methods will execute scripts in HTML for you; the trick is then capturing the calls to document.write and getting the HTML into the proper place. If it's simple enough, something like this will do:

    var content = '';
    document.write = function(s) { content += s; };
    // execute the script
    $('#foo').html(markupWithScriptInIt);
    $('#foo .whereverTheDocumentWriteContentGoes').html(content);

It gets complicated, though. If the script is on another domain, it will be loaded asynchronously, so you'll have to wait until it's done to get the content. Also, what if it writes the HTML into the middle of the fragment, without a wrapper element that you can easily select? writeCapture.js (full disclosure: I wrote it) handles all of these problems. I'd recommend just using it, but at the very least you can look at the code to see how it handles everything.

EDIT: Here is a page demonstrating what sounds like the effect you want.

A: document.write is about as deprecated as they come. Thanks to the wonders of JavaScript, though, you can simply assign your own function to the write method of the document object, one that uses innerHTML on an element of your choosing to append the supplied content.

A: Can I get some clarification first, to make sure I understand the problem? document.write calls add content to the markup at the point in the markup at which they occur. For example, if you include document.write calls in a function but call the function elsewhere, the document.write output will appear at the point in the markup where the function is defined, not where it is called. Therefore, for this to work at all, the Flickr document.write statements need to be part of the content in element.innerHTML = content. Is this definitely the case? You could quickly test whether this can work at all by adding a single, simple document.write call to the content that is set as the innerHTML and seeing what it does:

    <script>
    var content = "<p>1st para</p><script>document.write('<p>2nd para</p>');<\/script>";
    element.innerHTML = content;
    </script>

If that works, the concept of document.write running in content set as the innerHTML of an element might just work. My gut feeling is that it won't, but it should be pretty straightforward to test.

A: So you're using a DOM method to create a script element and append it to an existing element, and this then causes the content of the appended script element to execute? That sounds good. You say that the script tag is part of a larger HTML page and therefore cannot be singled out. Can you not give the script tag an ID and target it? I'm probably missing something obvious here.

A: In theory, yes, I can single out a script tag that way. The problem is that we potentially have dozens of situations where this occurs, so I am trying to find some cause or documentation of this behavior. Also, the script tag no longer seems to be part of the DOM after it gets loaded. In our environment my container div remains empty, so I cannot fetch the script tag. It should be possible, though, because in my example above the script does not get executed but is still part of the DOM.
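For completeness, here is a rough sketch of the "scan for script tags and re-insert them via the DOM" idea that the first answer calls non-trivial. It is a simplified version of what libraries like writeCapture.js do, and the function name is made up:

    function setInnerHTMLWithScripts(container, html) {
      // The browser parses the markup, but script elements assigned via
      // innerHTML are inert and never execute.
      container.innerHTML = html;

      // Copy the live NodeList into a plain array before mutating the DOM.
      var found = container.getElementsByTagName('script');
      var scripts = [];
      for (var i = 0; i < found.length; i++) scripts.push(found[i]);

      // Replace each inert script with a freshly created one, which does execute.
      for (var j = 0; j < scripts.length; j++) {
        var old = scripts[j];
        var fresh = document.createElement('script');
        fresh.type = old.type || 'text/javascript';
        if (old.src) {
          fresh.src = old.src;    // external script: runs once it loads
        } else {
          fresh.text = old.text;  // inline script: runs on insertion
        }
        old.parentNode.replaceChild(fresh, old);
      }
    }

Note that this only gets the scripts to execute; any document.write calls inside them still need the capturing override shown above, or they will wipe the page once the document has finished loading.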
{ "language": "en", "url": "https://stackoverflow.com/questions/53945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: HTML to Image .tiff File Is there a way to convert an HTML string into a .tiff image file? I am using C# .NET 3.5. The requirement is to give the user the option to fax a confirmation. The confirmation is created with XML and an XSLT, and is typically e-mailed. Is there a way I can take the HTML string generated by the transformation and convert it to a .tiff, or to any image format that can be faxed? Third-party software is allowed, but the cheaper the better. We are using a third-party fax library that will only accept .tiff images, but if I can get the HTML into any image format, I can convert that to a .tiff.

A: Here are some free-as-in-beer possibilities: You can use the PDFCreator printer driver that comes with Ghostscript and print directly to a TIFF file or many other formats. If you have MS Office installed, the Microsoft Office Document Image Writer will produce a file you can convert to other formats. But in general, your best bet is to print to a driver that produces an image file of some kind, or a Windows Metafile (.wmf). Is there some reason why you can't just print-to-fax? Does the third-party software not support a printer driver? That's unusual these days.

A: A starting point might be the software from WebSuperGoo, which provides rich image-editing products, cheap or free. I know for sure their PDF Writer can handle basic HTML (http://www.websupergoo.com/helppdf6net/source/3-concepts/b-htmlstyles.htm), and that should not be too hard to convert to TIFF. This does not cover the full HTML feature set or CSS; that might require using Microsoft's IE ActiveX component.
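None of the answers include code, so here is a minimal in-process sketch for comparison, assuming a WinForms reference is acceptable. It renders the HTML string with the WebBrowser control and saves the result as TIFF. Note that WebBrowser.DrawToBitmap is not officially supported for this control and can produce blank output on some machines, so treat this as an experiment rather than a production path; the page size and file name below are arbitrary:

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;
    using System.Windows.Forms;

    class HtmlToTiff
    {
        [STAThread]
        static void Main()
        {
            string html = "<html><body><h1>Confirmation</h1><p>Details here.</p></body></html>";

            // WebBrowser is a WinForms control, so it needs an STA thread.
            using (var browser = new WebBrowser())
            {
                browser.ScrollBarsEnabled = false;
                browser.Width = 800;    // roughly a page at 96 dpi
                browser.Height = 1100;
                browser.DocumentText = html;

                // Pump messages until the document has finished loading.
                while (browser.ReadyState != WebBrowserReadyState.Complete)
                    Application.DoEvents();

                using (var bmp = new Bitmap(browser.Width, browser.Height))
                {
                    browser.DrawToBitmap(bmp, new Rectangle(0, 0, bmp.Width, bmp.Height));
                    bmp.Save("confirmation.tiff", ImageFormat.Tiff);
                }
            }
        }
    }

A fax library may also expect a specific TIFF profile (for example, 1-bit CCITT Group 4 at around 200 dpi), so a re-encode step with whatever imaging library you already have may still be needed.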
{ "language": "en", "url": "https://stackoverflow.com/questions/53956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }